

The Stone Cipher-192 (SC-192): A Metamorphic Cipher

Magdy Saeb
Computer Engineering Department,
Arab Academy for Science, Technology & Maritime Transport, Alexandria, Egypt
(on leave at the Malaysian Institute of Microelectronic Systems (MIMOS), Kuala Lumpur, Malaysia)
mail@magdysaeb.net

Abstract: The Stone Cipher-192 is a metamorphic cipher that utilizes a variable word size and a variable-size user key. In the preprocessing stage, the user key is extended into a larger table, or bit-level S-box, using a specially developed one-way function. For added security, the user key is first encrypted using the cipher's own encryption function with agreed-upon initial values. The generated table is used in a special configuration that considerably increases the substitution addressing space; accordingly, we call this table the S-orb. Four bit-balanced operations are pseudo-randomly selected to generate the sequence of operations constituting the cipher. These operations are XOR, INV, ROR and NOP, for bitwise xor, invert, rotate right and no operation respectively. The resulting key stream is used to generate the bits required to select these operations. We show that the proposed cipher provides a key-dependent pseudo random sequence of operations that even the cipher designer cannot predict in advance. In this approach, the sub-keys act as program instructions, not merely as a data source. Moreover, the parameters used to generate the different S-orb words are likewise key-dependent. We establish that the proposed self-modifying cipher, based on these key dependencies, provides algorithm metamorphism and adequate security with a simple parallelizable structure. The ideas incorporated in the development of this cipher may pave the way for key-driven encryption rather than merely using the key for sub-key generation. The cipher is adaptable to both hardware and software implementations. Potential applications include voice and image encryption.

Keywords: metamorphic, polymorphic, cipher, cryptography, filters, hash.

1. Introduction

A metamorphic reaction takes place in a rock when various minerals go from amphibolite facies to some color-schist facies. Some of the minerals, such as quartz, may take no part in this reaction. The process in its essence follows certain rules; however, the end result is a pseudo random distribution of the minerals in the rock or stone. This natural metamorphic process results in thousands or even millions of different shapes of rock or stone, and it inspired us to design and implement a new metamorphic cipher that we call the "Stone Cipher-192". The internal sub-keys are generated using a combination of the encryption function itself and a specially designed 192-bit one-way function. The idea of this cipher is to use four low-level operations, all of them bit-balanced, to encrypt the plaintext bit stream based on the expanded stream of the user key. The key stream is used to select the operation, thus providing a random, yet recoverable, sequence of such operations. A bit-balanced operation produces an output that has the same number of ones and zeroes. These operations are XOR, INV, ROR and NOP: respectively, xoring a key bit with a plaintext bit, inverting a plaintext bit, exchanging one plaintext bit with another in a given plaintext word using a rotate-right operation, and passing the plaintext bit through unchanged. In fact, these four operations are the only bit-balanced logic operations. In the next few sections, we discuss the design rationale, the structure of the cipher, the one-way function employed to generate the sub-keys, the software and hardware implementations of the cipher, a comparison with a polymorphic cipher, and the cipher's security against known and some probable cryptanalysis attacks. Finally, we provide a summary of results and our conclusions.

2. Design Rationale

It is a well-known fact that all ciphers, block and stream ciphers alike, emulate a one-time pad (OTP). For provable security, however, each key bit must be used only once for each encrypted plaintext bit. Obviously, with present-day technology this is not a practical solution. Alternatively, one resorts to computational-complexity security, in which case the key bits are used more than once. Unfortunately, this provides the cryptanalyst with the means to launch feasible statistical attacks. To overcome these known attacks, we propose an improvement in the nonlinearity-associated filtering of the plaintext bits. This can be achieved in various ways, as shown in [1]; however, the process becomes appreciably simpler, faster and more secure if we parallelize all the operations employed. We will establish that the proposed configuration can be further parallelized to greatly improve its security and throughput. One can view the algorithm as a pseudo random sequence of operations that is totally key-dependent. Accordingly, we expect most known attacks to be very difficult to launch, since no statistical clues are left to the attacker. The operation applied at each step is randomly selected.

Even the cipher designer has no clear idea what the sequence of bitwise operations will be. The low-level encryption operations are selected to be bit-balanced; that is, they introduce no bias in the number of zeroes or ones in the output cipher. The result of this approach is the creation of an immense number of wrong messages that conceal the only correct one. The cryptanalyst is therefore left with the sole option of attacking the key itself. However, if the sub-keys are generated by a cascade of the same encryption function and a one-way hash, then we contend that such attacks become infeasible to launch. The outcome is an unprecedented key-dependent encryption algorithm in which the least expensive kept secret is the key itself. The proposed system is malleable, and resilient if unknowingly disclosed. This theme does not dispute Kerckhoffs' principle [2] or Shannon's maxim, since the "enemy knows the system". However, it provides a degree of security against statistical attacks [3] that, we believe, cannot be attained with conventional ciphers [4], [5], [6], [7], [8], [9].

3. The Structure of the Cipher

The conceptual block diagram of the proposed cipher is shown in Figure 1. It is constructed of two basic functions: the encryption function and the sub-key-generation one-way hash function. The pseudo random number generator is built by cascading the same encryption function with the one-way hash function. Two large numbers (a, b) are used to iteratively generate the sub-keys. The details of the substitution box, or what we call the S-orb, can be found in [1]. The user key is first encrypted; the encrypted key is then used to generate the sub-keys.

Figure 1. The structure of the cipher

The encryption function, or cipher engine, is built from four low-level operations: XOR, INV, ROR and NOP. Table 1 gives the details of each of these operations.

Table 1: The basic cipher engine (encryption function)

| Mnemonic code | Operation     | Select |
| XOR           | Ci = Ki ⊕ Pi  | 00     |
| INV           | Ci = ¬(Pi)    | 01     |
| ROR           | Pi ← Pj       | 10     |
| NOP           | Ci = Pi       | 11     |

The basic crypto logic unit (CLU) is shown in Figure 2. All operations are at the bit level. The unit is repeated a number of times depending on the required word or block size. The rotation operation, indicated by the circular arrow, is performed using multiplexers as shown in Figure 3; in the software version these multiplexers are replaced by a "case" or "switch" statement. This CLU is used as both the encryptor and the decryptor. This can easily be verified from the truth table in Appendix A: if, in this table, the output cipher bit is fed back as the input plaintext bit, the new output equals the old plaintext bit. This is a property of the applied functions XOR, INV and NOP; the only exception is ROR, for which the decryptor uses ROL instead.

Figure 2. The basic crypto logic unit (gate-level schematic: an XOR and inverters feeding AND/OR gates that select among the four operations)

Figure 3. The rotation operation (ROTR) implementation using multiplexers

The operation selection bits (S1 S0) can be chosen from any two consecutive sub-key bits, as shown in Figure 4. The same applies to the rotation selection bits (S'1 S'0).

Figure 4. The proposed key format, showing the location of the operation selection bits
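The decryptor claim above can be checked mechanically. The following C sketch (ours, not part of the original implementation) enumerates all input combinations of the truth table in Appendix A and confirms that, for XOR, INV and NOP, applying the unit a second time restores the plaintext bit; ROR is excluded since decryption uses ROL instead:

#include <assert.h>
#include <stdio.h>

/* Bit-level crypto logic unit: (s1 s0) selects one of the four
   bit-balanced operations of Table 1; pj is the rotated-in bit. */
static int clu(int pi, int ki, int pj, int s1, int s0)
{
    if (!s1 && !s0) return pi ^ ki;   /* 00: XOR */
    if (!s1 &&  s0) return !pi;       /* 01: INV */
    if ( s1 && !s0) return pj;        /* 10: ROR, neighbouring bit */
    return pi;                        /* 11: NOP */
}

int main(void)
{
    for (int v = 0; v < 32; v++) {
        int pi = v & 1, ki = (v >> 1) & 1, pj = (v >> 2) & 1;
        int s1 = (v >> 3) & 1, s0 = (v >> 4) & 1;
        int ci = clu(pi, ki, pj, s1, s0);
        if (!(s1 && !s0))             /* skip ROR: needs ROL to invert */
            assert(clu(ci, ki, pj, s1, s0) == pi);
    }
    puts("XOR, INV and NOP are self-inverse at the bit level");
    return 0;
}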

4. The One-way Hash Function

Cryptographic one-way functions, or message digests, have numerous applications in data security. Recent cryptanalysis attacks on existing hash functions have provided the motivation for improving the structure of such functions. The design of the proposed hash is based on the principles of Merkle's work [10], Rivest's MD5 [11], SHA-1 and RIPEMD [12]. However, a large number of modifications and improvements have been implemented to enable this hash to resist present and some probable future cryptanalysis attacks. The procedure, shown in Figure 5, provides a 192-bit hash [13] that utilizes six variables for the round function.

Figure 5. Operation of the MDP-192 one-way function [13]

A 1024-bit block size, with cascaded xor operations and deliberate asymmetry in the design structure, is used to provide higher security with a negligible increase in execution time. The design of new hashes should follow, we believe, an evolutionary rather than a revolutionary paradigm; consequently, changes to the original structure are kept to a minimum, to retain the confidence previously gained with SHA-1 and its predecessors MD4 [14] and MD5. The main improvements in MDP-192 [13] are as follows. First, the hash size is increased to 192 bits, compared with 128 and 160 bits for MD5 and SHA-1, raising the security bits from 64 and 80 to 96. Second, the message block size is increased to 1024 bits, providing faster execution. Third, the message words in the different rounds are not only permuted but also computed by xor and addition with the previous message words; this makes it harder for local changes to be confined to a few bits, so individual message bits influence the computations at a large number of places, providing a faster avalanche effect and added security. Fourth, adding two nonlinear functions and one of the variables to compute another variable not only eliminates certain attacks but also provides faster data diffusion. Fifth, the message blocks are processed using six variables rather than four or five, contributing to better security and a faster avalanche effect. A deliberate asymmetry has been introduced in the procedure structure to impede potential and future attacks. The xor and addition operations cause no appreciable execution delay on today's processors; nevertheless, the number of rotation operations in each branch has been optimized to provide a fast avalanche with minimum overall delay. To verify the security of this hash function, we discuss the following simple theorem [13]:

Theorem 5.1: Let h be a hash function mapping m-bit input keys k1, k2 to an n-bit output, with m >= n. Then h(k1) = h(k2) with probability equal to

    2^(-m) + 2^(-n) - 2^(-m-n)

Proof: If k1 = k2, then h(k1) = h(k2). If k1 ≠ k2, then h(k1) = h(k2) with probability 2^(-n). Since k1 = k2 with probability 2^(-m), and k1 ≠ k2 with probability 1 - 2^(-m), the probability that h(k1) = h(k2) is given by

    Pr{h(k1) = h(k2)} = 2^(-m) + (1 - 2^(-m)) · 2^(-n)

As an example, for two different 192-bit keys x1, x2,

    Pr{h(x1) = h(x2)} = 2 · 2^(-192) - 2^(-384) = 2^(-191) (1 - 2^(-193)) ≈ 3.186 x 10^(-58)

This is a negligible probability of collision for two different keys.
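As a quick numeric sanity check of this bound (our addition, not part of the paper), the expression can be evaluated in extended precision; the 2^(-384) term is negligible and the result is dominated by 2^(-191):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Pr{h(k1) = h(k2)} = 2^-m + (1 - 2^-m) * 2^-n with m = n = 192 */
    long double p  = ldexpl(1.0L, -192);      /* 2^-192 */
    long double pr = p + (1.0L - p) * p;      /* = 2^-191 - 2^-384 */
    printf("collision probability = %.3Le\n", pr);   /* ~3.186e-58 */
    return 0;
}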

5. The Pseudo Random Number Generator (PRG)

The combination of the encryption function and the one-way hash function is used to generate the sub-keys. The cipher designer has to select which one should precede the other. Maurer and Massey [15] proved that a cascade of two ciphers is as strong as its first cipher; we have therefore decided to start with the encryption function. The one-way hash function is then used recursively to generate the sub-keys, based on two large numbers derived from the user key. In this case, the encryption function requires some initial agreed-upon vector (IV) [16], [17], [18] to complete the encryption process. This IV can be regarded as a long-term key, or even a group key that is changed on a regular basis or whenever a member leaves the group. The cascade of the encryption function and the one-way function is used as the required pseudo random number generator (PRG). It is worth pointing out that the design of the cipher intentionally allows the one-way hash to be replaced if it is successfully attacked.
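The cascade can be sketched as follows. This is our reading of the construction, with toy stand-ins for the cipher engine and the MDP-192 hash [13]; the derivation of the seed from the encrypted user key is an assumption, since the paper does not fix these interfaces.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SUBKEY_BYTES 24   /* 192-bit sub-keys */

/* Toy stand-ins; the real encryption function and MDP-192 hash
   would be plugged in here. They only illustrate the data flow. */
static void toy_encrypt(const uint8_t *in, size_t n, const uint8_t *iv,
                        uint8_t out[SUBKEY_BYTES])
{
    for (size_t i = 0; i < SUBKEY_BYTES; i++)
        out[i] = (uint8_t)(in[i % n] ^ iv[i] ^ (uint8_t)(17 * i + 1));
}

static void toy_hash(const uint8_t *m, size_t n, uint8_t out[SUBKEY_BYTES])
{
    uint32_t s = 0x811c9dc5u;                 /* FNV-style mixing */
    for (size_t i = 0; i < SUBKEY_BYTES; i++) {
        s = (s ^ m[i % n]) * 16777619u;
        out[i] = (uint8_t)(s >> 24);
    }
}

/* Encrypt-then-hash PRG: the user key is encrypted under the agreed
   IV (assumed here to carry the large numbers a and b), then the hash
   is iterated recursively to emit the 192-bit sub-key stream. */
static void subkeys(const uint8_t *key, size_t klen, const uint8_t *iv,
                    uint8_t sk[][SUBKEY_BYTES], int count)
{
    uint8_t seed[SUBKEY_BYTES];
    toy_encrypt(key, klen, iv, seed);
    for (int i = 0; i < count; i++) {
        toy_hash(seed, SUBKEY_BYTES, sk[i]);
        memcpy(seed, sk[i], SUBKEY_BYTES);    /* feed the output back */
    }
}

int main(void)
{
    uint8_t iv[SUBKEY_BYTES] = {1}, sk[4][SUBKEY_BYTES];
    subkeys((const uint8_t *)"user key", 8, iv, sk, 4);
    printf("first sub-key byte: %02x\n", sk[0][0]);
    return 0;
}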
6. The Algorithm

The algorithm can be formally described as follows.

Algorithm: STONEMETAMORPHIC
INPUT: Plain text message P, User Key K, Block Size B
OUTPUT: Cipher Text C
Algorithm body:
Begin
Begin key schedule

1. Read user key;
2. Encrypt the user key by calling the encrypt function, using the initial agreed-upon values as the random input to this function;
3. Read the values of the large numbers a and b from the encrypted key;
4. Generate a sub-key by calling the one-way hash function, using the constants a, b;
5. Store the generated value of the sub-key;
6. Repeat steps 4 and 5 to generate the required number of sub-keys;
End key schedule;
Begin Encryption
7. Read a block B of the message P into the message cache;
8. Use the next generated 192-bit key to bit-wise encrypt the plaintext bits by calling the encrypt function;
9. If the message cache is not empty, go to step 8;
10. Else, if the message cache is empty:
    If the message is not finished:
    10.1 Load the next block into the message cache;
    10.2 Go to step 8;
    Else, if the message is finished, halt;
End Encryption;
End Algorithm.

Function ENCRYPT
Begin
1. Read next message bit;
2. Read next key bit from sub-key;
3. Read selection bits from sub-key;
4. Read rotation selection bits from sub-key;
5. Use the selection and rotation bits to select the operation: XOR, INV, ROR or NOP;
6. Perform the encryption operation on the plaintext bit and sub-key bit to get a cipher bit;
7. Store the resulting cipher bit;
End;

As seen from this formal description, the algorithm simply consists of a series of pseudo random calls to the encryption function; each call, however, triggers a different bitwise operation. The simplicity of the algorithm readily lends itself to parallelism, which can be achieved using today's superscalar multi-threading capabilities, or with multiple data paths on specialized hardware such as FPGAs with their contemporary vast gate counts.

7. Software Implementation

The C function [19] that realizes the operation table (identifiers adjusted to valid C) is given by:

int encrypt(int plain_bit, int key_bit, int sel0, int sel1, int rot_bit)
{
    int a1 = plain_bit ^ key_bit;
    int e1 = a1 & ~sel0 & ~sel1;        /* 00: XOR */
    int b1 = ~plain_bit;
    int f1 = b1 & sel0 & ~sel1;         /* 01: INV */
    int g1 = rot_bit & ~sel0 & sel1;    /* 10: ROR */
    int h1 = plain_bit & sel0 & sel1;   /* 11: NOP */
    int cipher_bit = (e1 | f1 | g1 | h1) & 1;
    return cipher_bit;
}

8. Hardware Implementation

The hardware version of the CLU, shown earlier in Figure 2, has been implemented on an FPGA using Altera Quartus II 6.1 Web Edition [20]. The average delay was found to be 4.33 cycles per byte. If four CLUs are used in parallel, this delay drops to approximately one cycle per byte. The proposed parallel configuration is shown in Figure 6.

Figure 6. The proposed parallel configuration

A representative Verilog module used to implement the CLU on the FPGA is:

module metamorph (p1, k1, s0, s1, p2, c1);
  input  p1, k1, s0, s1, p2;
  output c1;
  wire   a1, b1, e1, f1, g1, h1, s0n, s1n;
  assign s0n = ~s0;
  assign s1n = ~s1;
  assign b1  = ~p1;
  xor (a1, p1, k1);
  and (e1, a1, s0n, s1n);   // 00: XOR
  and (f1, b1, s0, s1n);    // 01: INV
  and (g1, p2, s0n, s1);    // 10: ROR (neighbouring bit p2)
  and (h1, p1, s0, s1);     // 11: NOP
  or  (c1, e1, f1, g1, h1);
endmodule
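For illustration, the bit-level function can be driven over a whole word as follows. This driver is ours: the layout assumed here (two selection bits per plaintext bit, drawn from the sub-key stream, with the neighbouring plaintext bit feeding ROR) is one plausible reading of Figure 4, not a specification from the paper.

#include <stdint.h>
#include <stdio.h>

/* Same selection logic as the encrypt() function above. */
static int encrypt_bit(int p, int k, int s0, int s1, int rot)
{
    if (!s1 && !s0) return p ^ k;     /* 00: XOR */
    if (!s1 &&  s0) return !p;        /* 01: INV */
    if ( s1 && !s0) return rot;       /* 10: ROR */
    return p;                         /* 11: NOP */
}

/* Encrypt one byte; sel supplies two selection bits per plaintext bit
   (an assumed sub-key layout), the rot input is the next plaintext bit. */
static uint8_t encrypt_byte(uint8_t p, uint8_t k, uint16_t sel)
{
    uint8_t c = 0;
    for (int i = 0; i < 8; i++) {
        int s0  = (sel >> (2 * i)) & 1;
        int s1  = (sel >> (2 * i + 1)) & 1;
        int rot = (p >> ((i + 1) % 8)) & 1;
        c |= (uint8_t)(encrypt_bit((p >> i) & 1, (k >> i) & 1,
                                   s0, s1, rot) << i);
    }
    return c;
}

int main(void)
{
    printf("ciphertext byte: %02x\n", encrypt_byte(0x5a, 0xc3, 0x1b2d));
    return 0;
}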

9. Comparison with the Chameleon Polymorphic Cipher

From the given analysis and results, one can summarize the characteristics of this cipher, compared with the Chameleon Polymorphic Cipher [1], as follows:

Table 2: A comparison between the Stone Metamorphic Cipher and the Chameleon Polymorphic Cipher

| Cipher Characteristic | Chameleon-192 Polymorphic Cipher | Stone-192 Metamorphic Cipher |
| User key size | Variable | Variable |
| Sub-keys | 192-bit K, S(K) | 192-bit K, S(K), S'(K) |
| Estimated maximum delay per byte | 10 cycles/byte | 6 cycles/byte |
| Estimated average delay per byte | 9.1 cycles/byte | 4.3 cycles/byte |
| PRG (sub-key generation) | One-way function | One-way function cascaded with the encryption function |
| Structure | Sequential: Sel-1, ROT, Sel-0 | Concurrent: XOR, ROT, INV, NOP |
| Number of rounds | Variable (key-dependent, minimum 5 rounds) | Variable (key-dependent, minimum 8 rounds) |
| Algorithm template | Yes (key changes operation parameters) | No (key selects operations) |
| Parallelizable | Yes (some sequential operations) | Yes (operations are selected concurrently) |
| Security | Secure | Improved security (pseudo random sequence of operations and a more secure PRG) |

10. Security Analysis

We claim that differential cryptanalysis, linear cryptanalysis, interpolation attacks, partial key-guessing attacks and side-channel attacks barely apply to this metamorphic cipher. The pseudo random selection of operations provides the metamorphic nature of the cipher and, in turn, hides most statistical traces that could be used to launch these attacks. Each key has its own unique "weaknesses" that shape the new form of the algorithm; thus, different keys produce completely different forms (meta-forms) of the cipher, and even the cipher designer cannot predict in advance what these forms are. It can easily be shown that the probability of guessing the correct sequence of operations is of the order of 2^(-2wN), where w is the word size and N is the number of rounds. For, say, a word size of 8 bits, the probability of guessing this word only is 2^(-16); for a block size of 64 bits, this probability is 2^(-128). Consequently, statistical analysis is not adequate to link the plaintext to the ciphertext. With different user keys, we end up with a different "morph" of the cipher; it is therefore totally infeasible to launch attacks by varying keys or parts of the key. The only option left to the cryptanalyst is to attack the key itself. To thwart this type of attack, we have used the encryption function as the first stage in a cascade of the encryption function and the one-way function. Regarding key collisions, it was shown in Section 4 that the collision probability is negligible when a 192-bit hash is applied. Moreover, the cryptanalyst has a negligible probability of guessing the correct form of the algorithm in use. As discussed previously, the simple structure of the proposed cipher provides a foundation for efficient software- and hardware-based implementation. Depending on the required word or block size, it is relatively easy to parallelize the data path, either by multi-threading on a superscalar processor or by cloning the path in the FPGA fabric. Undeniably, using the same encryption process and sub-keys for each block is a disadvantage from a security point of view; still, this is exactly the same issue as with block ciphers in general, and the advantage obtained, as with block ciphers, is the saving of memory and communication bandwidth at the chip and channel levels. The pseudo random selection of operations and the key-dependent number of rotations provide a barrier against pattern leakage and block-replay attacks, which are quite frequent in multimedia applications. Using ECB mode to encrypt images with conventional ciphers preserves a great deal of the structure of the original image [3], contributing to the block-replay problem. The selective operations, by contrast, allow the cipher to encrypt images with no traces of the original image. This is a major advantage of the Stone Metamorphic Cipher's bit-level operations when applied to multimedia files.
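The guessing probabilities quoted above follow from the assumption, stated earlier, that each bit position independently takes one of the four bit-balanced operations with equal probability; in LaTeX form:

\Pr[\text{guess one } w\text{-bit word}] = \left(\tfrac{1}{4}\right)^{w} = 2^{-2w},
\qquad
\Pr[\text{guess all } N \text{ rounds}] = \left(2^{-2w}\right)^{N} = 2^{-2wN}.

For w = 8 this gives 2^(-16) ≈ 1.5 x 10^(-5) for a single word, and 2^(-128) for a 64-bit block.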

11. Summary & Conclusions

We have presented a metamorphic cipher that is altogether key-dependent. The four bit-balanced operations are pseudo-randomly selected. Known statistical attacks are barely applicable to cryptanalyzing this type of cipher. The proposed simple structure, based on the crypto logic unit (CLU), can easily be parallelized using multi-threading superscalar processors or FPGA-based hardware implementations. The CLU can be viewed as a nonlinearity-associated filter of the data and key streams. The PRG, constructed from a cascade of the encryption function and the one-way hash function, provides the required security against known key attacks; at the same time, it readily allows replacement of the hash function if it is successfully attacked. The cipher is well-adapted for use in multimedia applications. We trust that this approach will pave the way for key-driven encryption rather than simply using the key for sub-key generation.

Appendix A: The truth table of the CLU

| Pi | Ki | Pj | S1 | S0 | OP  | Ci |
| 0  | 0  | 0  | 0  | 0  | XOR | 0  |
| 0  | 0  | 0  | 0  | 1  | INV | 1  |
| 0  | 0  | 0  | 1  | 0  | ROR | 0  |
| 0  | 0  | 0  | 1  | 1  | NOP | 0  |
| 0  | 0  | 1  | 0  | 0  | XOR | 0  |
| 0  | 0  | 1  | 0  | 1  | INV | 1  |
| 0  | 0  | 1  | 1  | 0  | ROR | 1  |
| 0  | 0  | 1  | 1  | 1  | NOP | 0  |
| 0  | 1  | 0  | 0  | 0  | XOR | 1  |
| 0  | 1  | 0  | 0  | 1  | INV | 1  |
| 0  | 1  | 0  | 1  | 0  | ROR | 0  |
| 0  | 1  | 0  | 1  | 1  | NOP | 0  |
| 0  | 1  | 1  | 0  | 0  | XOR | 1  |
| 0  | 1  | 1  | 0  | 1  | INV | 1  |
| 0  | 1  | 1  | 1  | 0  | ROR | 1  |
| 0  | 1  | 1  | 1  | 1  | NOP | 0  |
| 1  | 0  | 0  | 0  | 0  | XOR | 1  |
| 1  | 0  | 0  | 0  | 1  | INV | 0  |
| 1  | 0  | 0  | 1  | 0  | ROR | 0  |
| 1  | 0  | 0  | 1  | 1  | NOP | 1  |
| 1  | 0  | 1  | 0  | 0  | XOR | 1  |
| 1  | 0  | 1  | 0  | 1  | INV | 0  |
| 1  | 0  | 1  | 1  | 0  | ROR | 1  |
| 1  | 0  | 1  | 1  | 1  | NOP | 1  |
| 1  | 1  | 0  | 0  | 0  | XOR | 0  |
| 1  | 1  | 0  | 0  | 1  | INV | 0  |
| 1  | 1  | 0  | 1  | 0  | ROR | 0  |
| 1  | 1  | 0  | 1  | 1  | NOP | 1  |
| 1  | 1  | 1  | 0  | 0  | XOR | 0  |
| 1  | 1  | 1  | 0  | 1  | INV | 0  |
| 1  | 1  | 1  | 1  | 0  | ROR | 1  |
| 1  | 1  | 1  | 1  | 1  | NOP | 1  |

(Pj is the plaintext bit brought in by the rotation, Pi ← Pj, selected by the rotation bits S'1 S'0.)

References

[1] Magdy Saeb, "The Chameleon Cipher-192: A Polymorphic Cipher," SECRYPT 2009, International Conference on Security & Cryptography, Milan, Italy, 7-10 July 2009.
[2] Auguste Kerckhoffs, "La cryptographie militaire," Journal des sciences militaires, vol. IX, pp. 5-38, Jan. 1883; pp. 161-191, Feb. 1883.
[3] Swenson, C., Modern Cryptanalysis: Techniques for Advanced Code Breaking, Wiley Pub. Inc., 2008.
[4] Merkle, R.C., "Fast Software Encryption Functions," Advances in Cryptology - CRYPTO '90 Proceedings, pp. 476-501, Springer-Verlag, 1991.
[5] Massey, J.L., "On Probabilistic Encipherment," IEEE Information Theory Workshop, Bellagio, Italy, 1987.
[6] Massey, J.L., "Some Applications of Source Coding in Cryptography," European Transactions on Telecommunications, vol. 5, no. 4, pp. 7/421-15/429, 1994.
[7] Rogaway, P., Coppersmith, D., "A Software-oriented Encryption Algorithm," Fast Software Encryption, Cambridge Security Workshop Proceedings, Springer-Verlag, pp. 56-63, 1994.
[8] Bruce Schneier, "Description of a New Variable-Length Key, 64-bit Block Cipher (Blowfish)," Fast Software Encryption, Cambridge Security Workshop Proceedings, Springer-Verlag, pp. 191-204, 1994.
[9] Bruce Schneier, John Kelsey, Doug Whiting, David Wagner, Chris Hall, Niels Ferguson, "Twofish: A 128-bit Block Cipher," First AES Conference, California, US, 1998.
[10] Ralph C. Merkle, Secrecy, Authentication and Public Key Systems, Ph.D. Dissertation, Stanford University, 1979.

[11] Rivest, R.L., "The MD5 Message Digest Algorithm," RFC 1321, 1992.
[12] Hans Dobbertin, Antoon Bosselaers, Bart Preneel, "RIPEMD-160: A Strengthened Version of RIPEMD," Fast Software Encryption, LNCS 1039, Springer-Verlag, pp. 71-82, 1996.
[13] Magdy Saeb, "Design & Implementation of the Message Digest Procedures MDP-192 and MDP-384," ICCCIS 2009, International Conference on Cryptography, Coding & Information Security, Paris, June 24-26, 2009.
[14] Rivest, R.L., "The MD4 Message Digest Algorithm," RFC 1186, 1990.
[15] Ueli Maurer, James Massey, "Cascade Ciphers: The Importance of Being First," Journal of Cryptology, vol. 6, no. 1, pp. 55-61, 1993.
[16] Discussions by Terry Ritter et al., accessed 2007. http://www.ciphersbyritter.com/LEARNING.HTM
[17] Erik Zenner, On Cryptographic Properties of LFSR-based Pseudorandom Generators, Ph.D. Dissertation, University of Mannheim, Germany, 2004.
[18] Erik Zenner, "Why IV Setup for Stream Ciphers is Difficult," Dagstuhl Seminar Proceedings 07021, Symmetric Cryptography, March 14, 2007.
[19] Michael Welschenbach, Cryptography in C and C++, Apress, 2005.
[20] S. Brown, Z. Vranesic, Fundamentals of Digital Logic with Verilog Design, McGraw-Hill International Edition, 2008.

Author Profile

Magdy Saeb received the B.Sc. in Electrical Engineering from the School of Engineering, Cairo University, in 1974, and the M.Sc. and Ph.D. in Electrical & Computer Engineering from the University of California, Irvine, in 1981 and 1985, respectively. He was with Kaiser Aerospace and Electronics, Irvine, California, and the Atomic Energy Establishment, Anshas, Egypt. Currently, he is a professor in the Department of Computer Engineering, Arab Academy for Science, Technology & Maritime Transport, Alexandria, Egypt, on leave at the Malaysian Institute of Microelectronic Systems (MIMOS), Kuala Lumpur, Malaysia. His current research interests include cryptography, FPGA implementations of cryptography and steganography data security techniques, encryption processors, computer network reliability, and mobile agent security. www.magdysaeb.net

Personal Authentication based on Keystroke Dynamics using Ant Colony Optimization and Back Propagation Neural Network

Marcus Karnan (1) and M. Akila (2)

(1) Department of Computer Science and Engineering, Tamilnadu College of Engineering, Coimbatore, India
karnanme@yahoo.com
(2) Department of Information Technology, Vivekanandha College of Engineering for Women, Tiruchengode, India
akila@nvgroup.in

Abstract: The need to secure sensitive data and computer systems from intruders, while allowing ease of access for authenticated users, is one of the main problems in computer security. Traditionally, passwords have been the usual method for controlling access to computer systems, but this approach has many inherent flaws. Keystroke dynamics is a relatively new method of biometric identification that provides a comparatively inexpensive and low-profile method of hardening the normal login-and-password process. Here, Ant Colony Optimization is used to reduce the redundant feature values and minimize the search space. We report the results of applying the Ant Colony Optimization technique to keystroke duration, latency and digraph for feature subset selection. A Back Propagation Neural Network is used for classification, and the performance is tested. The optimum feature subset is obtained using keystroke digraph values, compared with the other two feature values.

Keywords: Ant Colony Optimization Algorithm (ACO), Backpropagation Algorithm, False Acceptance Rate, Feature Extraction, Feature Subset Selection, False Rejection Rate.

1. Introduction

Access to computer systems is usually controlled by user accounts with usernames and passwords. Such a scheme offers little security [1][2] if the information falls into the wrong hands. Key cards and biometric systems [3][4][5][6], for example fingerprints [7], are being used nowadays to improve security. Biometric methods measure biological and physiological characteristics to uniquely identify individuals; their main drawback is that they are expensive to implement, because most of them require specialized additional hardware. Keystroke dynamics [8], on the other hand, has several advantages: (i) it can be used without any additional hardware; (ii) it is inexpensive; (iii) it hardens security.

Keystroke dynamics includes several different measurements [9][10][11], such as (i) duration of a keystroke, or key hold time, (ii) latency of keystrokes, or inter-keystroke time, (iii) typing errors, and (iv) keystroke force. Keystroke analysis [12] is of two kinds, static and dynamic. Static keystroke analysis essentially means that the analysis is performed on typing samples produced using the same predetermined text for all the individuals under observation. Dynamic keystroke analysis implies continuous or periodic monitoring of issued keystrokes, and is intended to be performed after the log-in session as well.

There are two phases, namely the extraction phase and the verification phase. During the feature extraction phase [4][12][13][14], the user's keystroke features for a password are captured, processed and stored in a reference file as prototypes, for future use by the system in subsequent authentication operations. During the verification phase [15][16], user keystroke features are captured and processed in order to render an authentication decision, based on the outcome of classifying the newly presented features against the pre-stored prototypes (reference templates) [17][18]. It is necessary for the user to type his or her name or password a number of times for the system to be able to extract the relevant features that uniquely represent the user. However, the task of typing one's name or password over and over is both tiring and tedious in the feature extraction phase, which could lead users to alter their normal typing pattern. Thus, most systems based on biometrics are required to work with a summarized set of information from which to extract knowledge. To reduce this problem, we can eliminate some features of the original dataset, selecting only the best ones in terms of class cohesion.

Feature subset selection [9][19][20] is applied to high-dimensional data prior to classification. It is essentially an optimization problem, which involves searching the space of possible features to identify a subset that is optimum or near-optimal with respect to certain performance measures, since the aim is to obtain any subset that minimizes a particular measure (classification error [21], for instance). To reduce the complexity and increase the performance of the classifier, the redundant and irrelevant features are removed from the original feature set. Many feature subset selection approaches [22][23] have been proposed in previous studies.

Relevant research results of past decades are detailed in this section. Yu and Cho [24] propose a GA-SVM based wrapper approach for feature subset selection, in which a GA is employed to implement a randomized search and an SVM, an excellent novelty detector with fast learning speed, is employed as the base learner. The degree of diversity and

quality are guaranteed, and thus they obtain improved model performance and stability. Ki-seok Sung and Sungzoon Cho [25] propose a one-step approach similar to Genetic Ensemble Feature Selection (GEFS), yet with a more direct diversity term in the fitness function and an SVM as the base classifier, similar to that of Yu and Cho [24]. In particular, a so-called "uniqueness" term is used in the fitness function, measuring how unique each classifier is from the others in terms of the features used; to adapt the SVM, the authors use a Gaussian kernel. A GA was used to filter the data and to carry out the selection of characteristics. They report an average FAR of 15.78%, with a minimum FAR of 5.3% and a maximum FAR of 20.38%, for raw data with noise. Gabriel et al. [26] designed a hybrid system based on Support Vector Machines (SVM) and stochastic optimization techniques. A standard Genetic Algorithm (GA) and a Particle Swarm Optimization (PSO) variation were used and produced good results for the tasks of feature selection and personal identification, with a FAR of 0.81% and an IPR of 0.76%. Gláucya et al. [4] used a weighted probability measure, selecting the N features of the feature vector with the smallest standard deviations and eliminating the less significant features. They obtained their optimum result using 90% of the features, with 3.83% FRR and 0% FAR.

In Section II, the feature extraction phase is discussed. Section III explains the feature subset selection method. Section IV discusses the classification techniques, and in Section V the conclusion is presented.

2. Feature Extraction

To capture keystrokes, users were required to type their passwords 100 times; the 100 samples were collected over a one-week period. The system captures these features using three measurements of time (in milliseconds): the time a particular user keeps a key pressed (duration), the time elapsed between releasing one key and pressing the next (latency), and the combination of the two (digraph). The data was collected from 27 participants with different passwords. The mean and standard deviation values were measured.

Figure 1. Measurement of duration, latency and digraph

Keystroke recording application software was developed in C# to measure the duration, latency and digraph of each user sample in the raw data file. Upon pressing submit, a raw-data text file is generated. During the creation of the raw data file, the mean (µ) and standard deviation (σ) [27][28][29] of each feature (i) of the pattern set (x) are calculated over the N samples according to the following equations:

    Mean: µ_i = (1/N) Σ x[i],  i = 1..N    (1)

    Standard deviation: σ_i = (1/(N-1)) Σ |x[i] - µ_i|,  i = 1..N    (2)

For instance, for the password "ANT", the duration timing information for user x is [205, 250, 235] ms. Figure 1 shows the measurement of duration, latency and digraph of keystrokes for the password "ANT" of user x.
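A small helper (ours, not from the paper) that evaluates equations (1) and (2) on the "ANT" duration vector quoted above:

#include <stdio.h>

/* Equation (1): mean of the N timing samples. */
static double mean(const double *x, int n)
{
    double s = 0;
    for (int i = 0; i < n; i++) s += x[i];
    return s / n;
}

/* Equation (2) as printed: absolute deviations with an N-1 divisor. */
static double deviation(const double *x, int n, double mu)
{
    double s = 0;
    for (int i = 0; i < n; i++) s += x[i] > mu ? x[i] - mu : mu - x[i];
    return s / (n - 1);
}

int main(void)
{
    double dur[] = {205, 250, 235};   /* "ANT" durations in ms */
    double mu = mean(dur, 3);
    printf("mean = %.2f ms, deviation = %.2f ms\n",
           mu, deviation(dur, 3, mu));
    return 0;
}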
3. Feature Subset Selection

During feature extraction, each user types 100 samples. In the verification phase, it would take too long to verify against all 100 features; to reduce this time complexity, we use feature subset selection, extracting an optimized subset from the 100 features. This is essentially an optimization problem, involving a search of the space of possible features to identify an optimum subset. Various ways to perform feature subset selection have been studied earlier [4][24][25][26] for various applications. Here, we propose Ant Colony Optimization to select the feature subset.

3.1 Ant Colony Optimization

Ant algorithms [30][31][32] were first proposed by Dorigo and colleagues as a multi-agent approach to difficult combinatorial optimization problems such as the Traveling Salesman Problem (TSP) and the Quadratic Assignment Problem (QAP). There are currently various activities in the scientific community to extend and apply ant-based algorithms to many different discrete optimization problems. The ACO heuristic [31][33][34] is inspired by the foraging behavior of real ant colonies, in which ants can often find the shortest path between a food source and their nest. Ants transmit information through volatile chemical substances, known as "pheromones", which they deposit along their path, and in this way they find the best route to food sources. An ant encountering a previously laid trail can detect the density of the pheromone; it decides with high probability to follow a short path, reinforcing that trail with its own pheromone. The larger the amount of pheromone on a particular path, the larger the probability that an ant selects that path, making the path's pheromone trail denser still. The Ant Colony Optimization algorithm is as follows:

Step 1. Get the feature values a[x] from the duration / latency / digraph of keystrokes.
Step 2. Calculate the fitness function f[x] for every a[x]:
    f[x] = 1 / (1 + a[x])    (3)
Initialize the following:
    a. NI = 100 (number of iterations)
    b. NA = 20 (number of ants)
    c. T0 = 0.001 (initial pheromone value for every a[x])
    d. ρ = 0.9 (rate-of-pheromone-evaporation parameter for every a[x])
Step 3. Store the fitness function values in S, where S = {f[x], T0, flag}, and the flag column indicates whether the feature has been selected by an ant.
Step 4. Repeat the following for NI iterations:

    i. A random feature value g[x] in a[x] is selected by each ant, with the criterion that the particular feature value has not been selected previously.
    ii. The selected feature value's pheromone value is updated locally:
        Tnew = (1 - ρ) × Told + ρ × T0, for g[x]
        where Tnew and Told are the new and old pheromone values of the feature value.
    iii. Find Lmin = min(g[x]), where Lmin is the local minimum.
    iv. If Lmin <= Gmin, then assign Gmin = Lmin; otherwise Gmin is unchanged. Gmin is the global minimum.
    v. Select the best feature g[y], whose solution equals the global minimum value at the end of the last iteration.
    vi. The selected g[y]'s pheromone value is updated globally:
        Tnew = (1 - ρ) × Told + ρ × α × Told, for g[y]
        where ρ is the rate-of-pheromone-evaporation parameter and α = 1 / Gmin.
        The pheromone of the remaining ants is updated as:
        Tnew = (1 - ρ) × Told
    vii. Finally, the Gmin value is stored as the optimum value.

In the end, the ant colony collectively marks the shortest path, which carries the largest amount of pheromone. Such a simple, indirect means of communication among ants embodies a kind of collective learning mechanism, which is used in our experiment.
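Condensed into code, steps 1-4 and their sub-steps read roughly as follows. This sketch is our interpretation: the random selection, the handling of already-selected features, and the use of the fitness value as the quantity minimized are assumptions where the prose is ambiguous.

#include <stdio.h>
#include <stdlib.h>

#define NI  100     /* number of iterations */
#define NA  20      /* number of ants */
#define NF  100     /* candidate feature values */
#define RHO 0.9     /* pheromone evaporation rate */
#define T0  0.001   /* initial pheromone value */

int main(void)
{
    double f[NF], tau[NF];
    int used[NF] = {0}, best = -1;
    double gmin = 1e9;

    for (int i = 0; i < NF; i++) {               /* steps 1-3 */
        double a = 1.0 + (double)rand() / RAND_MAX;  /* toy a[x] */
        f[i] = 1.0 / (1.0 + a);                  /* eq. (3)      */
        tau[i] = T0;
    }
    for (int it = 0; it < NI; it++) {            /* step 4 */
        double lmin = 1e9;
        int pick = -1;
        for (int ant = 0; ant < NA; ant++) {
            int g = rand() % NF;                 /* i: random feature  */
            if (used[g]) continue;               /*    not used before */
            used[g] = 1;
            tau[g] = (1 - RHO) * tau[g] + RHO * T0;     /* ii: local */
            if (f[g] < lmin) { lmin = f[g]; pick = g; } /* iii       */
        }
        if (pick >= 0 && lmin <= gmin) {         /* iv */
            gmin = lmin; best = pick;
        }
        for (int i = 0; i < NF; i++)             /* v-vi */
            tau[i] = (i == best)
                   ? (1 - RHO) * tau[i] + RHO * (1.0 / gmin) * tau[i]
                   : (1 - RHO) * tau[i];
    }
    printf("optimum feature %d, Gmin = %.6f\n", best, gmin);  /* vii */
    return 0;
}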
4. Classification using a Back Propagation Neural Network

Neural networks are simplified models of the biological nervous system: computing structures that perform in a manner resembling the human brain. A neural network [35] has a parallel distributed architecture with a large number of nodes and connections; each connection from one node to another is associated with a weight. The backpropagation neural network is a network of simple processing elements working together to produce a complex output. The backpropagation paradigm [36] has been tested in various applications such as bond rating, mortgage application evaluation, protein structure determination, signal processing and handwritten digit recognition [37][38][39][40]. It can learn difficult patterns, such as those found in typing style, and can recognize these patterns even if they are variations of the ones it initially learned. The backpropagation neural network uses a training set composed of input vectors and a desired output (here the desired output is usually a vector rather than a single value). The elements, or nodes, are arranged into layers: input, hidden, and output.

The output from a backpropagation neural network is computed using a procedure known as the forward pass [41]:
1) The input layer propagates a particular input value component to each node in the hidden layer.
2) The hidden layer computes output values, which become inputs to the output layer.
3) The output layer computes the network output for the particular input values.
The forward pass produces an output vector for a given input vector based on the current state of the network weights. Since the network weights are initialized to random values, it is unlikely that reasonable outputs will result before training. The weights are adjusted to reduce the error by propagating the output error backwards through the network. This process, from which the backpropagation neural network gets its name, is known as the backward pass:
1) Compute error values for the output layer. This can be done because the desired output is known.
2) Compute the error for the hidden-layer nodes. This is done by attributing a portion of the error at each output-layer node to the hidden-layer nodes that feed it; the amount of error attributed to each hidden node depends on the size of the weight assigned to the connection between the two nodes.
3) Adjust the weight values to improve network performance.
4) Compute the overall error to test network performance.
The training set is repeatedly presented to the network, and the weight values are adjusted until the overall error is below a predetermined tolerance.

The Back Propagation algorithm [42][43] can be implemented in two different modes: online mode and batch mode. In the online mode, the error function is calculated after the presentation of each input timing vector, and the error signal is propagated back through the network, modifying the weights before the presentation of the next timing vector. This error function is usually the Mean Square Error (MSE) of the difference between the desired and actual responses of the network over all the output units. The new weights then remain fixed while the next timing vector is presented, and this process continues until all the timing vectors have been presented. The presentation of all the timing vectors is usually called one epoch, or a single iteration; in practice, many epochs are needed before the error becomes acceptably small. In the batch mode, the error signal is calculated for each input timing vector, but the weights are modified only once all the timing vectors have been presented: the error function is calculated as the sum of the individual MSEs for the timing vectors, and the weights are modified accordingly (all in a single step) before the next iteration.

In the forward pass, outputs are computed, and in the backward pass, weights are updated or corrected based on the errors. The development of the Back Propagation algorithm is a landmark in neural networks in that it provides a computationally efficient method for training the multi-layer perceptron.

The general procedure of the back propagation algorithm is as follows. Initially, the inputs and outputs of the feature subset selection algorithm are normalized with respect to their maximum values.

Step 1: The ACO feature-subset-selection algorithm values are taken as input.
Step 2: These feature values are normalized between 0 and 1 and assigned to the input neurons.
Step 3: Wih and Who represent the weights of the links from input nodes to hidden nodes and from hidden nodes to output nodes, respectively. Initial weights are assigned randomly between -0.5 and 0.5.
Step 4: The input to each hidden neuron (Ii) is multiplied by the weight Wih.
Step 5: The output from each hidden neuron (Oh) is calculated using the sigmoid function
    S1 = 1 / (1 + e^(-λx))    (4)
    where λ = 1 and x = Σ Wih Ii; Wih is the weight assigned between the input and hidden layers, and Ii is the input value at the input neurons.
Step 6: The input to the output layer (Io) is the weight Who multiplied by the output of the hidden layer, Oh.
Step 7: The output from the output layer (Oo) is calculated using the sigmoid function
    S2 = 1 / (1 + e^(-λx))    (5)
    where λ = 1 and x = Σ Who Oh; Who is the weight assigned between the hidden and output layers, and Oh is the output value from the hidden neurons.
Step 8: The error (e) is found by subtracting S2 from the desired output. Using the error value, the weight change is calculated as:
    Delta = e × S2 × (1 - S2)    (6)
Step 9: Weights are updated using the delta value:
    Who = Who + (n × Delta × S1)
    Wih = Wih + (n × Delta × Ii)    (7)
    where n is the learning rate and Ii is the input value.
Step 10: Perform steps (5) to (9) with the updated weights until the target output equals the desired output, checking the error (e) value and updating the weights each time. After several iterations, when the difference between the calculated output and the desired output is less than the threshold value, the iteration stops.
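The ten steps condense to the following single-hidden-layer pass, a sketch of ours rather than the authors' code; the dimensions (two inputs, two hidden nodes, one output), the learning rate η = 0.6 and the target 0.1 are taken from the worked example in Section 5.

#include <math.h>
#include <stdio.h>

/* Sigmoid of equations (4) and (5), with lambda = 1. */
static double sig(double x) { return 1.0 / (1.0 + exp(-x)); }

int main(void)
{
    double in[2] = {0.488341, 0.969959};            /* normalized inputs */
    double wih[2][2] = {{-0.7, 0.4}, {-0.7, 0.6}};  /* input -> hidden   */
    double who[2] = {0.6, -0.5};                    /* hidden -> output  */
    double target = 0.1, eta = 0.6;

    for (int epoch = 0; epoch < 30; epoch++) {
        /* forward pass: steps 4-7 */
        double oh[2], x = 0;
        for (int h = 0; h < 2; h++)
            oh[h] = sig(in[0] * wih[h][0] + in[1] * wih[h][1]);
        for (int h = 0; h < 2; h++) x += who[h] * oh[h];
        double oo = sig(x);

        /* backward pass: steps 8-9, eqs. (6) and (7) */
        double delta = (target - oo) * oo * (1 - oo);
        for (int h = 0; h < 2; h++) {
            double dh = who[h] * delta * oh[h] * (1 - oh[h]);
            who[h] += eta * delta * oh[h];
            for (int i = 0; i < 2; i++)
                wih[h][i] += eta * dh * in[i];
        }
        if (epoch == 29)
            printf("output %.6f, squared error %.6f\n",
                   oo, (target - oo) * (target - oo));
    }
    return 0;
}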

5. Results and Discussion

The mean and standard deviation of duration, latency and digraph are measured for each sample. The Ant Colony algorithm is used to select the optimum features for each participant, and the selected features are passed to classification.

5.1 Results of ACO

From the 100 samples, the fifty best-fitted values were selected to reproduce the best new fit population. Partial experimental results of ACO are shown in Table 1. For instance, the mean and standard deviation timings of the password "COMPUTER" are computed initially in the feature extraction phase. The feature subset is then computed in the feature-subset-selection phase using ACO as follows:

Step 1: Calculation of the fitness value for duration:
    Mean µ_i = (1/N) Σ x(i) = 1.349375 = x(i)
    Fitness value f(i) = 1 / (1 + x(i)) = 1 / (1 + 1.349375) = 0.425645

Step 2: Calculation of the local minimum for duration:
Initially, the fitness value f(x) is directly assigned as the local minimum (Lmin) for the first value (say f[1]). Then the next fitness value (say f[2]) is compared with f[1], and the minimum of the two replaces the local minimum value.
    For the mean, let f[1] = 0.425645; assign Lmin = f[1] = 0.425645.
    Let the next value be f[2] = 0.416898.
    Check whether f[1] is less than or equal to f[2]; if so, assign Lmin = f[1], otherwise Lmin = f[2].
    Here, since 0.425645 <= 0.416898 is false, Lmin = 0.416898.

Step 3: Calculation of the local pheromone update for duration:
    Tnew = (1 - ρ) × Told + ρ × T0
where Tnew is the new pheromone value, Told the old pheromone value, and T0 the initial pheromone value. Initially, Told = 0.001 and T0 = 0.001. For the mean, the first local pheromone update is
    Tnew = (1 - 0.9) × 0.001 + 0.9 × 0.001 = 0.0001 + 0.0009 = 0.00100
Note: Told then takes the previous Tnew value, i.e., Told = 0.00100.

Step 4: Calculation of the global minimum for duration:
The global minimum (Gmin) is initially assigned the Lmin value (i.e., Gmin = Lmin). Next, the value in Gmin is compared with Lmin to find the minimum of the two.
    For the mean, Lmin = 0.416898; initially Gmin = Lmin, so Gmin = 0.416898.
    For the next feature value, the condition (Gmin <= Lmin) holds, so Gmin = 0.416898.

Step 5: Calculation of the global pheromone update for duration:
The pheromone value of the selected Gmin is updated as
    Tnew = (1 - ρ) × Told + ρ × α × Told
where ρ is the rate-of-pheromone-evaporation parameter and α = 1 / Gmin. For the mean, the global pheromone update is
    Tnew = (1 - 0.9) × 0.001 + 0.9 × (1 / 0.416898) × 0.001
         = 0.0001 + (0.9 × 2.39866 × 0.001) = 0.002259

Table 1: Feature subset selection using ACO

|            Mean (Gmin)                    |        Standard Deviation (Gmin)          |
| Duration      | Latency       | Digraph       | Duration      | Latency       | Digraph       |
| x(µ)   F(µ)   | x(µ)   F(µ)   | x(µ)   F(µ)   | x(σ)   F(σ)   | x(σ)   F(σ)   | x(σ)   F(σ)   |
| 1.5700 0.3891 | 1.6187 0.3819 | 2.9250 0.2548 | 0.3482 0.7417 | 0.3832 0.7229 | 0.7238 0.5801 |
| 1.3987 0.4169 | 1.5970 0.3850 | 2.7730 0.2650 | 0.3445 0.7437 | 0.3603 0.7351 | 0.7094 0.5850 |
| 1.3566 0.4243 | 1.5832 0.3871 | 2.5623 0.2807 | 0.3363 0.7483 | 0.3227 0.7560 | 0.6754 0.5968 |
| 1.3493 0.4257 | 1.3828 0.4197 | 2.5319 0.2831 | 0.3297 0.7520 | 0.2950 0.7722 | 0.6544 0.6044 |
| 1.3476 0.4260 | 1.2736 0.4398 | 2.5260 0.2836 | 0.3482 0.7417 | 0.2927 0.7736 | 0.5683 0.6376 |
| 1.3220 0.4307 | 1.2662 0.4412 | 2.5026 0.2855 | 0.3162 0.7598 | 0.2846 0.7785 | 0.5909 0.6286 |

Step 6: Pheromone update for the remaining ants, for duration:
    Tnew = (1 - ρ) × Told
For the mean, this pheromone update is
    Tnew = (1 - 0.9) × 0.001 = 0.1 × 0.001 = 0.0001
The values for latency and digraph are calculated in the same way.

(a) Results of the Back Propagation Neural Network (BPNN)

The Back Propagation Neural Network is well suited as a pattern classifier because it can solve non-linear problems, has a strong ability to classify patterns, and generalizes well. Let Ii be the input of the input layer, Oi the output of the input layer, Ih the input of the hidden layer, Oh the output of the hidden layer, Io the input of the output layer, and Oo the output of the output layer. After applying BPNN learning, the following calculations are performed. Partial results are displayed in Table 2: it shows the initial input, the random weights between the input and hidden layers, the output of the hidden layer using the sigmoid function, the random weights between the hidden and output layers, and the output of the output layer using the sigmoid function. This value is compared with the target output 0.1, and the error value is displayed. The adjusted weights between input and hidden and between hidden and output are also displayed. After completing the 30th iteration using duration, latency and digraph, the threshold value is obtained from the maximum-to-minimum output within the 30 iterations.

Computation of error values in the forward pass

Step 1: Input of the input layer. The mean and standard deviation form the input (and output) of the input layer, f(i). For instance, let the input be f(i) = (0.488341, 0.969959).

Step 2: Weights between input and hidden layers. Assign weights randomly between the input and hidden layers, say Wih = (-0.7, 0.4, -0.7, 0.6), assuming two weights per node. Multiply each output of the input layer by its randomly assigned weight.

Step 3: Input of the hidden layer.
    Ih = Oi × Wih
    Ih1 = 0.488341 × (-0.7) = -0.3418387

Step 4: Output of the hidden layer. Compute the sigmoid function
    S1 = 1 / (1 + e^(-λx)), where λ = 1 and x = Σ Wih Oi
    S1 = 1 / (1 + e^(-(-0.3418387 + -0.1953364)))
    S1 = 0.3688

Step 5: Weights between hidden and output layers. Assign the weights randomly between the hidden and output layers as Who = (0.6, -0.5).

Step 6: Input of the output layer. Multiply the weights between the hidden and output layers (Who) by the output of the hidden layer:
    Io = S1 × Who = 0.3688 × (0.6) = 0.22128

Step 7: Output of the output layer. The sigmoid function of the output layer is calculated as
    Oo = 1 / (1 + e^(-λx)), where λ = 1 and x = Σ Who Oh
    Oo = 1 / (1 + e^(-(0.350259 + -0.32626))) = 0.5060

Step 8: Error signal. Compute the error signal as Error = (To - Oo)^2, where To is the target output, assigned 0.1, and Oo is the output of the output layer, here 0.5060:
    Error = (To - Oo)^2 = (0.1 - 0.5060)^2 ≈ 0.1648

Computation of updated weights in the backward pass

Weights are adjusted to achieve the target output and reduce the error value:
    D = (To - Oo)(Oo)(1 - Oo) = (0.1 - 0.5060)(0.5060)(1 - 0.5060)
    D = -0.10148

Step 9: Output-to-hidden weights:
    Y = S1 × D
    Y = [-0.3418387, 0.1953364] × (-0.10148)
    Y = [0.03468, -0.0198]
    [w]1 = [w]0 + η[Y]  (assume η = 0.6, [w]0 = 0)

    [w]1 = 0.6 × [0.03468, -0.0198]
    [w]1 = [0.436924, 0.918542]

Step 10: Hidden-to-input weights. The adjusted weights between the hidden and input layers:
    [e] = (w)(D) = (-0.7)(-0.10148) = 0.071036
Similarly, the remaining four weights are multiplied by the error difference value (D).
    [D*] = [e][Oh][1 - Oh] = [0.071036][0.3688][1 - 0.3688] = 0.01653
    [x] = [S1][D*] = [-0.3418387, 0.1953364] × 0.01653
    [v]1 = α[v]0 + η[x]
    [v]1 = [-0.704411, 0.392331, 0.591240, 0.584767]
Table 2: Intermediate result of BPNN

| Feature  | Row  | Input Ii | Wih        | Ih                 | Output of hidden (sigmoid) | Who  | Io         | Output Oo | Target | Difference (error rate) | Adjusted Who | Adjusted Wih         |
| Duration | Mean | 0.3891   | -0.7, 0.4  | -0.2723, 0.15564   | 0.46877                    | 0.6  | 0.281262   | 0.50073   | 0.1    | 0.160586                | 0.56118      | -0.703482, 0.402675  |
| Duration | SD   | 0.7417   | 0.6, 0.6   | 0.44502, 0.44502   | 0.70889                    | -0.5 | -0.354445  | 0.50073   | 0.1    | 0.160586                | -0.5388      | 0.593362, 0.605099   |
| Latency  | Mean | 0.4169   | -0.7, 0.4  | -0.29183, 0.16676  | 0.468773                   | 0.6  | 0.2812638  | 0.499701  | 0.1    | 0.159761                | 0.561112     | -0.703727, 0.402848  |
| Latency  | SD   | 0.7437   | 0.6, 0.6   | 0.44622, 0.44622   | 0.709393                   | -0.5 | -0.3546965 | 0.499701  | 0.1    | 0.159761                | -0.538888    | 0.593351, 0.605081   |
| Digraph  | Mean | 0.4243   | -0.7, 0.4  | -0.2970, 0.16972   | 0.468222                   | 0.6  | 0.2809332  | 0.499448  | 0.1    | 0.159558                | 0.561059     | -0.703791, 0.402892  |
| Digraph  | SD   | 0.7483   | 0.6, 0.6   | 0.44898, 0.44898   | 0.710530                   | -0.5 | -0.355265  | 0.499448  | 0.1    | 0.159558                | -0.538941    | 0.593313, 0.605101   |

(The output Oo, target and error are shared between the mean and SD rows of each feature.)

Similarly, twenty-five weights are calculated, and the old weights of the input-to-hidden layer are replaced. After training on the user's typing pattern, the threshold value for each trained user is fixed. The users are then asked to verify themselves by giving their user name and password. After verification of the user name and password, the typing pattern is verified by comparing the network output against the fixed threshold value: if the error value is less than 0.001, the user is considered valid, otherwise invalid.

The success of this approach to identifying computer users can be defined mainly in terms of the False Rejection Rate (FRR) and the False Acceptance Rate (FAR). The False Rejection Rate of a verification system indicates how often an authorized individual will not be properly recognized. The False Acceptance Rate indicates how often an unauthorized individual will be mistakenly recognized and accepted by the system. The False Rejection Rate is generally more indicative of the level of a mechanism. FAR and FRR are calculated using the following equations:

    FAR = FA / (N × number of users)

where FAR is the False Acceptance Rate, FA the number of incidences of false acceptance, and N the total number of samples;

    FRR = FR / (N × number of users)

where FRR is the False Rejection Rate, FR the number of incidences of false rejection, and N the total number of samples.
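Reading the denominators as N times the number of users, the two rates reduce to simple ratios; a helper of ours with illustrative counts:

#include <stdio.h>

/* FAR = FA / (N x users): fraction of impostor attempts accepted.
   FRR = FR / (N x users): fraction of genuine attempts rejected. */
static double rate(int incidents, int n_samples, int n_users)
{
    return (double)incidents / ((double)n_samples * n_users);
}

int main(void)
{
    int n = 100, users = 27;       /* samples per user, users  */
    int fa = 2, fr = 5;            /* illustrative counts only */
    printf("FAR = %.4f%%, FRR = %.4f%%\n",
           100.0 * rate(fa, n, users), 100.0 * rate(fr, n, users));
    return 0;
}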
These results suggest that the digraph may in general provide a better characterization of typing skill than latency and duration.

(a) Receiver Operating Characteristics (ROC)

ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones, independently of (and prior to specifying) the cost context or the class distribution; it is related in a direct and natural way to cost/benefit analysis of diagnostic decision making. Figure 2 shows the ROC curves comparing the classification performance of the mean and standard deviation (duration, latency and digraph). The error rate is reduced as the sample size increases.

Figure 2. Classification error rate using the mean

6. Conclusion

To conclude, we have shown that keystroke dynamics are rich with individual mannerisms and traits, and they can be
used to extract features that identify a computer user. We have demonstrated that using duration, latency and digraph timings as classification features is a very successful approach. Features were extracted from 27 users, with 100 samples each. From the samples, the mean and standard deviation are calculated for duration, latency and digraph, and subsets of features are selected using the Ant Colony Optimization (ACO) algorithm; ACO using the digraph mean provides the best performance compared with duration and latency. The features are classified and tested using the backpropagation algorithm. Finally, it was found that using the digraph values with the back-propagation neural network gives excellent verification accuracy. The classification error is reduced as the number of samples increases; a classification error of 0.059% and an accuracy of 92.8% are reported.

References

[1] Hu, J., Gingrich, D. and Sentosa, A., "A k-Nearest Neighbor Approach for User Authentication through Biometric Keystroke Dynamics", In Proceedings of the IEEE International Conference on Communications, pp. 1556-1560, 2008.
[2] Pavaday, N. and Soyjaudah, K.M.S., "Investigating Performance of Neural Networks in Authentication using Keystroke Dynamics", In Proceedings of the IEEE AFRICON Conference, pp. 1-8, 2007.
[3] Adrian Kapczynski, Pawel Kasprowki and Piotr Kuzniacki, "Modern Access Control based on Eye Movement Analysis and Keystroke Dynamics", In Proceedings of the International Multiconference on Computer Science and Information Technology, pp. 477-483, 2006.
[4] Gláucya C. Boechat, Jeneffer C. Ferreira and Edson C. B. Carvalho Filho, "Authentication Personal", In Proceedings of the International Conference on Intelligent and Advanced Systems, pp. 254-256, 2007.
[5] Anil Jain, Ling Hong and Sharath Pankanti, "Biometrics Identification", Communications of the ACM, Vol. 83, Issue 12, pp. 2539-2557, 2003.
[6] Duane Blackburn, Chris Miles, Brad Wing and Kim Shepard, "Biometrics Overview", National Science and Technology Council Sub-Committee on Biometrics, 2007.
[7] Lin Hong and Anil Jain, "Integrating Faces and Fingerprints for Personal Identification", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 12, pp. 1295-1307, 1998.
[8] Fabian Monrose and Aviel D. Rubin, "Keystroke Dynamics as a Biometric for Authentication", Future Generation Computer Systems, Vol. 16, Issue 4, pp. 351-359, 2000.
[9] Gabriel L. F. B. G. Azevedo, George D. C. Cavalcanti and E. C. B. Carvalho Filho, "Hybrid Solution for the Feature Selection in Personal Identification Problems through Keystroke Dynamics", In Proceedings of the International Joint Conference on Neural Networks, pp. 1947-1952, 2007.
[10] Pin Shen Teh, Andrew Beng Jin Teoh, Thian Song Ong and Han Foon Neo, "Statistical Fusion Approach on Keystroke Dynamics", In Proceedings of the Third International IEEE Conference on Signal-Image Technologies and Internet-Based Systems, Shanghai, pp. 918-923, 2007.
[11] Shepherd, S. J., "Continuous Authentication by Analysis of Keystroke Typing Characteristics", European Convention on Security and Detection, pp. 111-114, 1995.
[12] Christopher S. Leberknight, George R. Widmeyer and Michael L. Recce, "An Investigation into the Efficacy of Keystroke Analysis for Perimeter Defense and Facility Access", In Proceedings of the IEEE Conference on Technologies for Homeland Security, pp. 345-350, 2008.
[13] Gaines, R., Lisowski, W., Press, S. and Shapiro, N., "Authentication by Keystroke Timing: Some Preliminary Results", Rand Report R-2526-NSF, Rand Corporation, 1980.
[14] Young, J.R. and Hammon, R.W., "Method and Apparatus for Verifying an Individual's Identity", US Patent 4805222, U.S. Patent and Trademark Office, 1989.
[15] Bleha, S.A. and Obaidat, M.S., "Dimensionality Reduction and Feature Extraction Applications in Identifying Computer Users", IEEE Transactions on Systems, Man and Cybernetics, Vol. 21, No. 2, pp. 452-456, 1991.
[16] Daw-Tung Lin, "Computer-Access Authentication with Neural Network Based Keystroke Identity Verification", In Proceedings of the International Conference on Neural Networks, Vol. 1, pp. 174-178, 1997.
[17] Sylvain Hocquet, Jean-Yves Ramel and Hubert Cardot, "Fusion of Methods for Keystroke Dynamics Authentication", Fourth IEEE Workshop on Automatic Identification Advanced Technologies, pp. 224-229, 2005.
[18] Enzhe Yu and Sungzoon Cho, "Keystroke Dynamics Identity Verification - its Problems and Practical Solutions", Computers & Security, Vol. 23, pp. 428-440, 2004.
[19] Yang, J. and Honavar, V., "Feature Subset Selection using a Genetic Algorithm", IEEE Intelligent Systems and their Applications, Vol. 13, Issue 2, pp. 44-49, 1998.
[20] John, G. H., Kohavi, R. and Pfleger, K., "Irrelevant Features and the Subset Selection Problem", In Proceedings of the Eleventh International Conference on Machine Learning, pp. 121-129, 1994.
[21] Shiv Subramaniam, K.N., S. Raj Bharath and S. Ravinder, "Improved Authentication Mechanism using Keystroke Analysis", In Proceedings of the International Conference on Information and Communication Technology, pp. 258-261, 2007.
[22] Surendra K. Singhi and Huan Liu, "Feature Subset Selection Bias for Classification Learning", In Proceedings of the 23rd International Conference on Machine Learning, pp. 849-856, 2006.
[23] Karnan, M., Thangavel, K., Sivakumar, R. and Geetha,
Authors Profile

Marcus Karnan received the BE degree in Electrical and
Electronics Engineering from Government College of Technology,
Bharathiar University, India. He received the ME degree in
Computer Science and Engineering from Government College of
Engineering, Manonmaniam Sundaranar University, in 2000, and
the PhD degree in Computer Science and Engineering from
Gandhigram Rural University, India, in 2007. Currently he is
working as Professor in the Department of Computer Science &
Engineering, Tamilnadu College of Engineering, India. He has
been in teaching since 1998 and has more than eleven years of
industrial and research experience. His areas of interest include
medical image processing, artificial intelligence, neural networks,
genetic algorithms, pattern recognition and fuzzy logic.

M. Akila received the Bachelor of Computer Science and
Engineering from Thiagaraja College of Engineering, Madurai
Kamaraj University, in 1991. She received the Master of
Computer Science and Engineering from National Engineering
College, Manonmaniam Sundaranar University, in 2003. She is
now a research scholar at Anna University, Coimbatore, and is
working as Assistant Professor in Vivekanandha College of
Engineering for Women, Tiruchengode, Tamilnadu, India. Her
areas of interest include image processing, pattern recognition
and artificial intelligence.
A Rough Set Model for Sequential Pattern Mining with Constraints

Jigyasa Bisaria, Namita Srivastava and Kamal Raj Pardasani

Department of Mathematics,
Maulana Azad National Institute of Technology (A Deemed University), Bhopal, M.P.
jigyasab@gmail.com, sri.namita@gmail.com, kamalrajp@hotmail.com

Abstract: Data mining and knowledge discovery methods
host many decision support and engineering application needs
of various organisations. Most real-world data has a time
component inherent in it. Sequential patterns are inter-event
patterns ordered in time and associated with various objects under
study. Analysis and discovery of frequent sequential patterns under
user-defined constraints are interesting data mining results.
These patterns can serve a variety of enterprise applications
concerning analytic and decision support needs. Imposition of
various constraints further enhances the quality of mining
results and restricts the results to only relevant patterns. In this
paper, we propose a rough set perspective on the problem of
constraint-driven mining of sequential patterns. We use the
indiscernibility relation from the theory of rough sets to
partition the search space of sequential patterns, and we
propose a novel algorithm that allows pre-visualization of
patterns and imposition of various types of constraints in the
mining task. The algorithm C-Rough Set Partitioning is at least
ten times faster than the naïve algorithm SPRINT that is based
on various types of regular expression constraints.

Keywords: Rough sets, sequential patterns, constraints,
indiscernibility, partitioning

1. Introduction

Sequential pattern mining is studied extensively in the data
mining literature due to its applicability to a variety of
applications. It is applied to many real-world decision
support applications such as root causes of banking customer
churn [8], analysis of web logs [9], fault diagnosis and
prediction in telecom networks [10], and the study of adverse drug
reactions as temporal association rules [11]. The enormous
search space and huge number of patterns are inherent
challenges in the sequence mining task. Conventional
studies of sequential pattern mining give various
computational methodologies to enumerate the frequent
sequence space [1]-[6]. These methods mine all sequential
patterns in the support-confidence framework.
The computational methodologies in [1]-[5] are bottom-up
candidate-generate-and-test approaches. The method
PrefixSpan [6] works on the concept of iteratively projecting
the database on the basis of the prefix. This method does not
generate any candidates and is strictly based on the events
present in the database.
New-generation mining methods require the retrieval of
patterns under user-defined constraints. Imposition of constraints
not only condenses the mining results to the most useful ones
but also reduces the search space and improves performance.
A constraint C can be regarded as a Boolean function on
all sequences. The problem of constraint-based mining of
sequential patterns is about finding all those patterns which
satisfy C. Under the classical framework, constraints can be
classified as monotonic, anti-monotonic and succinct [14]. A
constraint C is anti-monotonic if its satisfaction by any
sequence α implies its satisfaction by all subsequences of α. A
constraint C is monotonic if a sequence α satisfying C
implies that every super-sequence of α also satisfies C.
Succinct constraints are pre-counting pushable
constraints such that, for any sequence α, the satisfaction of
the constraint implies its satisfaction by all the elements of
the sequence α. A succinct constraint is specified using a
precise "formula". According to the "formula", one can
generate all the patterns satisfying a succinct constraint;
there is no need to iteratively check the constraint in the
mining process.
Early work in the domain of constraint imposition in the
sequential pattern mining task is the algorithm GSP [3].
They proposed the concept of the time interval constraint and
the maximum gap and minimum gap constraints, and built
them into the apriori algorithm framework. Another work in
the framework of time interval constraints is given by Mannila
et al. [2]. They defined "an episode as a collection of events
that occur relatively close to each other in a given partial
order." They considered the importance of the time frame of
patterns and gave the concepts of the event window and the
sliding event window. They defined patterns as directed acyclic
graphs with a vertex as a single event and an edge as "Event A
occurs before event B". Their method of finding frequent
episodes is a "bottom-up candidate-generate-and-test
approach" which is similar to AprioriAll proposed by
Agrawal and Srikant [1].
F. Masseglia et al. [15] have also proposed time
constraint imposition in the mining of sequential patterns.
They have presented a graph-theoretic mining algorithm to
reduce the search space of time-constrained sequential
patterns.
Garofalakis et al. [16] have given the framework for
imposing regular expression constraints on sequential
pattern mining. A regular expression R is a set of
expressions using operators such as disjunction and Kleene
closure [17]. R specifies a language of strings over a regular
family of sequential patterns that are of interest to the user.
They confirmed that regular expression constraints have the
same expressive power as deterministic finite automata [17].
The algorithm SPRINT is a multiple-database-scan
candidate-generate-and-test strategy based on GSP [3]. The
candidate generation strategy works by imposing a relaxed
constraint.
The method first generates candidates and checks the validity of
patterns that satisfy the given regular expression constraint, and
then finds the occurrence frequency of those length-1 sequences
that cross the minimum support threshold. This becomes the seed
set for further iterations. The candidate length-2 sequences are
formed by joining the elements of the seed set. Now, the database
is scanned again to search for these candidates, and their counts
are accumulated after checking the relaxed constraint. In
subsequent iterations, candidate k-length sequences are formed by
joining frequent (k-1)-sequences that have the same contiguous
subsequences. Suppose a sequence Sα = e1, e2, ..., en; another
sequence sβ is a contiguous subsequence of Sα if (i) sβ is derived
from Sα; (ii) sβ is derived from Sα by dropping an item from an
element ej that has at least 2 items; or (iii) sβ is a contiguous
subsequence of sδ and sδ is a contiguous subsequence of Sα.
The process is continued until all frequent sequences present in
the database that satisfy the relaxed constraint are found.
Given an anti-monotonic constraint, the constraint is first
imposed, and candidates which do not satisfy the constraint are
pruned. It is clear that, like the support constraint, such a
constraint is anti-monotonic; that is, if the constraint is not
supported by a sub-pattern it will not be supported by its
super-pattern either.
In case the constraint is monotone, an appropriate choice of
relaxed constraint is used to generate valid results. The family
of SPRINT methods suffers from the drawback of huge query
overhead due to multiple scans and weak constraint-imposition-based
candidate generation followed by frequent pattern discovery from
amongst the candidate set.
Han et al. [17] have confirmed the imposition of various
user-defined constraints for efficient mining of patterns. They
have proposed an architecture for mining multidimensional
association rules in the framework of online analytical mining.
They proposed constraint imposition at the level of the
transaction database with the use of the PL/SQL query language,
which is further subjected to multidimensional association
pattern discovery.
Pei et al. [14] have studied the process of constraint imposition
in the framework of PrefixSpan [6]. They have presented the
constraint imposition framework in both classical and
application-centric frameworks. Their work presents a detailed
study of how conventional monotone, anti-monotone and succinct
constraints can be treated as a prefix constraint while
recursively projecting the database with the same. Their study
confirmed that while the method PrefixSpan is efficient for
sequential pattern mining, it is not suitable for constraint-driven
mining. They have presented a systematic study of regular
expression and aggregate constraint imposition and presented
various application-oriented examples of tough but interesting
constraints. They defined seven categories of constraints from the
application perspective: item, super pattern, time interval,
gap between subsequent transactions, regular expression,
length of sequence, and various aggregate constraints. Though
these are not the complete set of possible constraints, they are
more or less comprehensive enough to address most decision-centric
constraint imposition tasks. In this paper, we explain all seven
types of constraints and their treatment in the rough set based
framework. Here we restrict our discussion to length-1 sequences.
This corresponds to many real-world sequential patterns, for
example sequential patterns of web access, faults in telecom
landline networks, etc. (i) We have proposed a user-friendly
interface that generates a pre-visualization of a sample of
emerging sequential patterns and allows flexible imposition of
time, length and gap constraints prior to the mining task, and
(ii) we have presented a novel algorithm based on the
indiscernibility relation from the theory of rough sets to address
the computational aspect of the expensive mining problem of
frequent sequential patterns satisfying item, super pattern and
regular expression constraints. It is found from experimental
evaluations that our algorithm is at least 10 times faster than
the algorithm SPRINT [16].

2. Problem Formulation

From the theory of rough sets, an information system is given as
S = {U, At, V, f}, where U is a finite set of objects,
U = {x1, x2, ..., xn}; At is a finite set of attributes, further
classified into two disjoint subsets, the conditional attributes C
and the decision attributes D, At = C ∪ D; V = ∪p∈At Vp, where Vp
is the domain of attribute p; and f : U × At → V is a total
function such that f(xi, q) ∈ Vq for every q ∈ At and xi ∈ U.
Consider an example transaction database as in TABLE I.

Table 1: Example transaction database

Here At = (T, I), where T is the set of transaction times and I is
the set of itemsets associated with xi. Examples of transaction
databases are databases of customer purchase patterns in a retail
store, web access details, etc. There are multiple instances of
the same customer (xi) in the information system U. An alternate
representation of the transaction database is termed a sequence
database, formed by grouping the transactions corresponding to
the same (xi).
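As an illustration of this grouping, the following minimal sketch (ours, not from the paper; the record layout and names are assumptions) derives each object's sequence by ordering its transactions in time:

    from collections import defaultdict

    # Illustrative sketch: derive a sequence database from a transaction
    # database of (object_id, timestamp, itemset) records by grouping the
    # transactions of the same object x_i and ordering them by time.
    def to_sequence_database(transactions):
        by_object = defaultdict(list)
        for object_id, timestamp, itemset in transactions:
            by_object[object_id].append((timestamp, itemset))
        return {
            object_id: [itemset for _, itemset in sorted(events, key=lambda e: e[0])]
            for object_id, events in by_object.items()
        }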
The alternate information system is S' = (U, E), where
U = {x1, x2, ..., xn} and E = {e1, e2, e3, ..., em}. A sequence or
serial episode is defined as a set of events that occur within a
predetermined event interval and are associated with the object
under study. Let I be the set of items, I = {i1, i2, i3, ..., in};
then the set of sequences E ⊂ At is formed by combining the
itemsets associated with the same object ordered by time,
∀ei ∈ E, ei = {i1, i2, ..., il}. The length of a sequence is the
number of items it contains. A k-sequence contains k items,
k = Σj |ej|. The absolute support of a sequence ei is defined as
the number of transactions that contain it, and the relative
support is defined as sup(ei) = absolute support / number of
objects in the dataset. A pattern is frequent if it crosses a
user-specified frequency threshold called the minimum support
threshold [15].
Given sequences, the symbol | represents a disjunction operator
which indicates the selection of either of the event patterns.
Here, ei denotes the i-th element of a sequence, and R denotes a
regular expression constraint. The symbol * represents the Kleene
closure operator, which signifies zero or more occurrences of an
element ei.
The problem of constraint-driven mining of sequential patterns is
concerned with the discovery of frequent patterns that also
satisfy user-specified constraints. Commonly imposed constraints
can be classified into the following categories.
Constraint type 1: (item constraint) An item constraint specifies
a subset of items that should or should not be present in the
patterns. Considering the case of n-size length-1 sequential
patterns, the item constraint also corresponds to a subsequence
relation:

C_item(α) ≡ items(α) ⊆ V  or  C_item(α) ≡ items(α) ∩ V ≠ ∅    (1)

where V is the subset of items.
If the constraint requires that the pattern contain only items of
V (items(α) ⊆ V), then the item constraint is both anti-monotone
and succinct.
If the constraint requires the presence of items of V
(items(α) ∩ V ≠ ∅), then the item constraint is both monotone and
succinct.
An example of a type 1 constraint is the discovery of specific web
usage patterns of customers characterized by one type of site, for
example online gift stores. Another example, in the case of fault
diagnosis in telecom landline networks: a constraint of type 1 can
be characterized by all sequential patterns in which the fault
signal "dead phone" is present or absent. If T is the set of gift
stores on the web, then

C(α) ≡ items(α) ∩ T ≠ ∅    (2)

Given the domain of all unique sequential patterns, all
transactions that follow the type 1 constraint are the members of
the indiscernibility relation formed by the equivalence class of
patterns indiscernible with respect to the concept of pattern
existence:

IND(C) = {α : C(α) holds}    (3)

Constraint type 2: (super pattern constraint) A super pattern
constraint finds those patterns which encapsulate a user-specified
sequence:

C_pat(α) ≡ β ⊑ α    (4)

where β is the user-specified subsequence and ⊑ denotes the
subsequence relation. For example, considering the web browsing
patterns of customers, a pattern of type 2 can be a web access
pattern which encapsulates the subsequence (online advertisement,
product site). The super pattern constraint is monotone and
succinct.
Constraint type 3: (time interval constraint) A transaction
database has time stamp information against event labels. The time
interval or duration constraint selects the set of sequences with
the property that the time interval between the first and last
transactions is less than or greater than a specific value:

C_time(α) ≡ (te − ts) ≤ Δt  (or ≥ Δt)    (5)

where ts and te are the times of the first and last transactions
of the pattern and Δt is a given integer. The length of the
sequential pattern depends on the choice of the time interval
under study. Let, in T ⊂ At, ts be the start time and te be the
end time for the study of transaction patterns. Then, the
event/time interval for the study of patterns is given by te − ts
for the given information system S. If we group the transaction
information I ⊂ At corresponding to the same xi, we derive an
alternate representation of the information system S. If we impose
the time interval restriction, we derive the sequence database in
the constrained time interval. The maximum length can be
controlled by the appropriate choice of the time interval
constraint. Consider the transaction database in TABLE I. If the
time interval under consideration is 20 days, then the sequence
database is as given in TABLE II, and if the time interval under
consideration is 25 days, then the derived sequence database is
given by TABLE III. Both the length and time interval constraints
are anti-monotone under the ≤ operation, and they are monotone and
succinct under the ≥ operation.
Constraint type 4: (length constraint) In the case of length-1
sequences, this type of constraint restricts the size of the
sequence under consideration. It can be the restriction of the
maximal pattern length:

C_len(α) ≡ len(α) ≤ l    (6)

where l is a given integer. Consider the examples in TABLES I, II
and III: the maximum length of a sequential pattern in TABLE II is
5, while in the case of TABLE III it is 3.
Constraint type 5: (aggregate constraint) An aggregate constraint
is a constraint on an aggregate of the items in a pattern, where
the aggregate function can be sum, avg, max, min, standard
deviation, etc. For example, in the case of data for market basket
analysis, the retail store customer might be interested in knowing
those items for which the sum of the bill was more than 2000 Rs.
Some aggregate functions, like sum and average over both positive
and negative values, are neither monotone, anti-monotone nor
succinct.
Constraint type 6: (regular expression constraint) The regular
expression constraints are specified as a regular expression over
the set of items using regular expression operators like
disjunction or Kleene closure. A sequential pattern satisfies a
regular expression constraint if and only if the pattern is
accepted by the equivalent finite automaton. Like aggregate
constraints, regular expression constraints are neither monotone
nor anti-monotone nor succinct.
Constraint type 7: (gap constraint on adjacent transactions) In
many transactions, events have to be equispaced in time; that is,
the time gap between subsequent transactions has to be either
greater or smaller than a prespecified gap:

C_gap(α) ≡ (t(i+1) − ti) ≤ g  (or ≥ g) for all consecutive transactions    (7)

where ti is the time of the i-th transaction and g is a given
integer. The gap constraint has the anti-monotone property.
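These categories can be read as Boolean predicates over a sequence and its transaction timestamps. The following minimal sketch (ours; all names are assumptions) illustrates the time interval (5), length (6) and gap (7) checks as predicates:

    # Illustrative sketch of three constraint predicates from the taxonomy
    # above; `times` is the ordered list of transaction timestamps.
    def time_interval_ok(times, max_interval):
        # constraint (5): time between first and last transaction is bounded
        return times[-1] - times[0] <= max_interval

    def length_ok(sequence, max_len):
        # constraint (6): restriction on the maximal pattern length
        return len(sequence) <= max_len

    def gap_ok(times, max_gap):
        # constraint (7): gap between subsequent transactions is bounded
        return all(t2 - t1 <= max_gap for t1, t2 in zip(times, times[1:]))

For instance, filtering the transactions of TABLE I with max_interval set to 20 days corresponds to the derivation of TABLE II described above.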
We categorize constraints into two categories: one category
influences the sequence database, like the length of the pattern,
the time interval and the gap between subsequent patterns, and is
named CAT1; the other is that category of constraints which mines
specific patterns in the sequence database under study, named
CAT2. Examples of the second category of constraints are the
regular expression, item, super pattern and aggregate constraints.

3. Proposed Model and Method

The proposed algorithm C-RSP is a break-and-search strategy.
C-RSP proposes a complete mining system that allows the imposition
of all types of constraints. The input to the problem of mining
sequential patterns under user-defined constraints is the
transaction database of the objects under study. A sample database
is given in TABLE I. It is evident that the resultant sequence
database is governed by the user's choice of the time interval and
maximum length constraints.
The algorithm first presents a user interface that allows flexible
and adjustable imposition of the CAT1 types of constraints. Once
the user derives the relevant sequence database under study by
imposition of the CAT1 categories of constraints, the sequence
database becomes the input to the mining of patterns under the
CAT2 categories of constraints.
This is done by presenting a user interface that gives a view of
the sequence database on choosing an appropriate time interval.
Figure 1 shows the user interface that allows pre-visualization of
the sequences formed by transactions indiscernible with respect to
the maximal time interval of patterns. Figure 2 shows the user
interface for pre-visualization of the maximum length of patterns
as a result of the user's choice of time interval.

Figure 1. Patterns with constraint

Figure 2. Patterns with constraint

Figure 3 and Figure 4 give PL/SQL code snippets for deriving the
appropriate sequence database under the above-mentioned
constraints.

Π Top k LocationId from Table1 where transaction_date >= Tstart
    and transaction_date <= Tend
-- Π is the project operator of relational algebra (implies SELECT DISTINCT);
-- k is the number of records the user wishes to visualize
FOR each customer id in rec_inner_test
LOOP
    return_str := '';
    FOR i IN 1..rec_inner_test.COUNT
    LOOP
        return_str := return_str || rec_inner_test(i).signal || ':';
    END LOOP;
    Update Sequence_table with return_str against each LocationId
END LOOP;

Figure 3. Algorithm pseudocode to derive sequences from the
transaction database in a user-specified time interval

Π Top k LocationId from Table1 where Lengthofsequence <= n
-- Π is the project operator of relational algebra (implies SELECT DISTINCT);
-- k is the number of records the user wishes to visualize
FOR each customer id in rec_inner_test
LOOP
    return_str := '';
    FOR i IN 1..rec_inner_test.COUNT
    LOOP
        return_str := return_str || rec_inner_test(i).signal || ':';
    END LOOP;
    Update Sequence_table with return_str against each LocationId
END LOOP;

Figure 4. Algorithm pseudocode to derive sequences from the
transaction database with a user-specified maximum length

Suppose the sequence database is as in TABLE II. Now the task is
to enumerate the frequent sequence space under the user-defined
constraints of category CAT2. The method C-Rough Set Partitioning
is a divide-and-conquer strategy.
We scan the database once and store all the data in the attribute
set of events in two data structures. One is the domain of the set
E containing all unique sequences and itemsets in S.
Step 1: To find frequent items, we query all unique itemsets and
store them in a set Î. We partition the set V in such a way that
all sequences and subsequences with the same prefix are stored in
one equivalence class. Thus each element in Î has a corresponding
equivalence class partition in V. Considering the sequence
database in TABLE II, the partitions in V are given in Figure 5.
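Step 1 amounts to a single pass that buckets every stored sequence by its prefix item. The following minimal sketch is our reading of this step, not the authors' implementation; the names are assumptions:

    from collections import defaultdict

    # Illustrative sketch of Step 1: partition the domain V of sequences into
    # equivalence classes of sequences sharing the same prefix item.
    def partition_by_prefix(V):
        partition = defaultdict(list)
        for sequence in V:            # each sequence is a non-empty list of items
            prefix = sequence[0]      # the prefix item identifies the class
            partition[prefix].append(sequence)
        return partition              # each value is one equivalence class in V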
Figure 5. Partitions in V on the basis of prefix indiscernibility

Lemma 1: All equivalence classes formed by patterns with the same
prefix form a partition of the database under study.
Proof: An equivalence class [12] is formed by elements which can
be treated as equivalent in some way. An equivalence relation on a
set forms a natural partitioning into groups of like objects. From
the theory of rough sets [13], given a knowledge base U, a concept
is a relation which forms a partition of a certain universe into
families. If C = {y1, y2, ..., yn} with yi ⊆ U, the following
conditions must be satisfied:
Condition 1: yi ≠ ∅
Condition 2: yi ∩ yj = ∅ for i ≠ j, i, j = 1, 2, ..., n
Condition 3: ∪ yi = U
Given the domain V of all sequences present in the database under
study, V can be partitioned on the basis of equivalence classes yi
such that each yi contains patterns with the same prefix. Clearly
condition 1 is satisfied, since each element of V will be a member
of some yi. Condition 2 is satisfied, since no two elements of V
with the same prefix will be in different equivalence classes.
Since all members of V with different prefixes are in some
equivalence class, the union of all equivalence classes results
in V:
∪ yi = V for i ≠ j, i, j = 1, 2, ..., n
Now the database is in good form for the imposition of the various
constraints of CAT2: the item constraint, super pattern
constraint, regular expression constraint and other complex
constraints.
Case 1: Suppose the user wants to find all frequent sequences that
have pattern β in them; the algorithm finds the patterns which are
indiscernible on the basis of pattern existence.
Step 2: We maintain an array of frequencies which is of the size
of the set V. The following steps explain the support counting
process in the indiscernibility mapping:
Step 2.1: For all tuples in the sequence database S,
Step 2.2: deduce subsequences and check if the subsequence is a
superset of pattern β.
Step 2.3: Each element subsequence found accounts for an increment
of the element frequency at the appropriate index in the partition
and one increment for each of its subsets; the process continues
until all elements of S are considered. The process of mapping the
item constraint is the same as that of the super pattern
constraint.
Case 2: Suppose we desire to impose a regular expression
constraint characterized by a disjunction operator,

C_reg ≡ (ei | ej)    (8)

This can be imposed by restricting the indiscernibility mapping to
the patterns

[ei] ∪ [ej]    (9)

that is, to the equivalence classes of patterns with prefix ei
or ej.
Case 3: Consider an example of fault pattern mining in telecom
landline access networks. Often the user wants to mine the support
of a pattern within a pre-specified time interval with specific
items of interest embedded in the sequence. These types of dirty
constraints cannot be handled by PrefixSpan-based methods, and
even the class of SPRINT-based methods is rendered inadequate for
handling such combinations of constraints. With C-RSP such
constraints can easily be built into the sequence mining task. In
the above example, the time constraint can be imposed at the level
of the transaction database, and pattern existence and support
counting are built on the sequence database.

4. Results and discussion

We have compared the efficiency of C-RSP with the naïve
SPRINT(N). It was found to be more than 10 times faster than
SPRINT. Figures 6, 7 and 8 give runtime comparisons of C-RSP with
SPRINT under the imposition of the time interval and length
constraints respectively. Figure 6 gives the comparative
efficiency under imposition of the time constraint on real data of
network fault patterns in the telecom landline networks of Madhya
Pradesh in India. The time period of the data was taken by the
knowledge worker as three months. The algorithm C-RSP is
implemented in JDK 1.3. The preprocessing step is a Java program
which connects to the database as in TABLE I and invokes a PL/SQL
cursor which creates TABLE II. The entire process is undertaken
using the Java database connectivity (JDBC) interface. It connects
to the database in MS SQL Server 2005 as in TABLE II and fetches
the data into data structures using JDBC. The machine used is an
HP ProLiant DL580 G5 with an Intel Xeon CPU at 1.6 GHz and 8 GB of
RAM. The operating system is MS Windows Server 2003 R2. The data
comprised 75833 records of voice-related gross faults collected
over a time window of three months. There are 215 distinct
elements in the sequences, and the maximum length of a sequence is
14. The algorithm SPRINT is also programmed on the same machine
using JDK 1.3. The time constraint imposition is done at the level
of generating candidates. Only those candidates which satisfy the
specified time constraints are considered in the support counting
process in the subsequent scan of the data.
Figure 6. Runtime evaluation on real data of network faults under
imposition of the time constraint

Other experiments on efficiency were performed on data similar to
the data generated by the synthetic data generation program at
http://www.almaden.ibm.com/cs/quest. The following are the
descriptions of the parameters of the dataset:
|D| size of the database (number of customers)
|C| average number of transactions per customer
|I| average size of the itemsets in maximal potentially large sequences
|N| number of items
Here we have imposed the constraint on the maximum length of the
pattern. The maximum length of the pattern is restricted to 14 in
the dataset under consideration.

Figure 7. Runtime evaluations on synthetic data under imposition
of the length constraint

It is clear from the above graphs that C-RSP outperforms the
SPRINT family of methods by an order of magnitude. This is due to
the partitioning of the search space, the imposition of
constraints at the preprocessing level, and the avoidance of
recursive validity checking. There is no candidate generation,
since we are only fetching data into data structures and applying
the computation logic on the same. The method C-RSP requires only
one to two scans of the database, while SPRINT recursively scans
the database and works on a candidate-generate-and-test strategy.
The constraint imposition strategies allow impositions of
individual and composite constraints.

5. Conclusion

The following are the benefits of the proposed model:
(i) Since support counting is usually the most costly step in
sequential pattern mining, the proposed technique improves the
performance greatly by avoiding costly scanning. Also, the
algorithm is strictly based on elements that exist in the database
under study. The partitions, once constructed and stored, can be
used to mine further data increments in the database.
(ii) The creation of equivalence classes by the indiscernibility
relation greatly reduces the search space. Especially with the
imposition of CAT2 constraints, the search space is restricted to
a specific equivalence class.
(iii) The dynamic frequency accumulation scheme in each partition
saves computation time.
(iv) While other methods search the whole search space, our method
partitions the problem into subproblems.
(v) The categorization of constraints enables a flexible and
adjustable constraint imposition scheme on various data
representations.
(vi) Based on the experimental results obtained and depicted in
the graphs, we conclude that C-RSP is at least 10 times faster
than SPRINT.

References
[1] R. Agrawal and R. Srikant, "Mining Sequential Patterns", In
Proceedings of the International Conference on Data Engineering,
pp. 3-14, 1995.
[2] Mannila H., Toivonen H. and Verkamo A. I., "Discovering
frequent episodes in sequences", In Proceedings of the
International Conference on Knowledge Discovery and Data Mining,
IEEE Computer Society Press, pp. 210-215, 1995.
[3] R. Srikant and R. Agrawal, "Mining sequential patterns:
Generalizations and performance improvements", In Proc. 5th Int.
Conf. on Extending Database Technology (EDBT'96), pp. 3-17,
Avignon, France, March 1996.
[4] Jay Ayres, Johannes Gehrke, Tomi Yiu and Jason Flannick,
"Sequential Pattern Mining using A Bitmap Representation", In
Proceedings of the eighth ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, Edmonton, Alberta, Canada,
pp. 429-435, 2002.
[5] Zaki M. J., "SPADE: An efficient algorithm for mining frequent
sequences", Machine Learning 42(1/2), pp. 31-60, 2001.
[6] Jian Pei, Jiawei Han, Behzad Mortazavi-Asl, Jianyong Wang,
Qiming Chen, Umeshwar Dayal and Mei-Chun Hsu, "Mining Sequential
Patterns by Pattern-Growth: The PrefixSpan Approach", IEEE
Transactions on Knowledge and Data Engineering, Vol. 16, No. 11,
pp. 1424-1440, 2004.
[7] Yen Liang Chen, Mei Ching Chiang and Ming-Tat Ko, "Discovering
time-interval sequential patterns in sequence databases", Expert
Systems with Applications 25, pp. 343-354, 2003.
[8] Ding-An Chiang, Yi-Fan Wang, Shao-Lun Lee and Cheng-Jung Lin,
"Goal-oriented sequential pattern for network banking churn
analysis", Expert Systems with Applications (25), pp. 293-302,
2003.
[9] Sasisekharan R., Seshadri V. and Weiss S., "Data mining and
forecasting in large-scale telecommunication networks", IEEE
Expert 11(1), pp. 37-43, 1995.
[10] J. Pei, J. Han, B. Mortazavi-Asl and H. Zhu, "Mining Access
Patterns Efficiently from Web Logs", In Proc. Pacific-Asia Conf.
on Knowledge Discovery and Data Mining (PAKDD'00), Kyoto, Japan,
pp. 396-407, 2000.
[11] Huidong Jin, Jie Chen, Hongxing He, Graham J. Williams, Chris
Kelman and Christine M. O'Keefe, "Mining Unexpected Temporal
Associations: Applications in Detecting Adverse Drug Reactions",
IEEE Transactions on Information Technology in Biomedicine, Vol.
12, Issue 4, pp. 488-500, July 2008.
[12] Jigyasa Bisaria, Namita Srivastava and K. R. Pardasani, "A
Rough Sets Partitioning Model for Mining Sequential Patterns with
Time Constraint", International Journal of Computer Science and
Information Security, Vol. 2, No. 1, pp. 178-189, June 2009.
[13] Z. Pawlak, "Rough Sets: Theoretical Aspects of Reasoning
about Data", Springer, 1991.
[14] Jian Pei, Jiawei Han and Wei Wang, "Constraint-based
sequential pattern mining: the pattern-growth methods", Journal of
Intelligent Information Systems 28, pp. 133-160, 2007.
[15] F. Masseglia, P. Poncelet and M. Teisseire, "Efficient mining
of sequential patterns with time constraints: reducing the
combinations", Expert Systems with Applications, Elsevier, Vol.
40, No. 3, pp. 2677-2690, 2008.
[16] Minos N. Garofalakis, Rajeev Rastogi and Kyuseok Shim,
"SPRINT: Sequential Pattern Mining with Regular Expression
Constraints", Proceedings of the 25th VLDB Conference, Edinburgh,
Scotland, 1999.
[17] Lakshmanan, Han and Raymond T., "Constraint based
multidimensional mining", SIGKDD, 2006.
[18] H. R. Lewis and C. Papadimitriou, "Elements of the Theory of
Computation", Prentice Hall, Inc., 1981.

Authors Profile

Jigyasa Bisaria is a faculty and research fellow with the
Department of Mathematics, Maulana Azad National Institute of
Technology, Bhopal, India. Her research interests are predictive
data mining and its applications to real world problems.

Dr. Namita Srivastava is working as Assistant Professor with the
Department of Mathematics, Maulana Azad National Institute of
Technology. She obtained her PhD in Mathematics in 1992 on the
crack problem. Her current research interests are data mining and
its applications.

Dr. Kamal Raj Pardasani is working as Professor and Head of the
Department of Mathematics and Dean of Research and Development,
Maulana Azad National Institute of Technology, Bhopal. He did his
PhD in Applied Mathematics in 1988. His current research interests
are computational biology, data warehousing and mining,
bio-computing and finite element modeling.
Hybrid Content Location Failure Tolerant Protocol for Wireless Ad Hoc Networks

Maher BEN JEMAA

Research Unit on Development and Control of Distributed Applications "ReDCAD"
Department of Computer Science and Applied Mathematics, National School of Engineers of Sfax, Tunisia
maher.benjemaa@enis.rnu.tn

Abstract: The current network evolution has permitted the
emergence of new service providers which offer services of
different types and qualities, including not only simple services
like printers but also complex services like real-time encoding.
However, to benefit from a network service, the user needs to be
provided with the address of this service. With this evolution
perspective, static configuration will not be practical anymore in
the case of a high number of diversified services. Service
location protocols try to overcome this drawback by providing the
user with efficient and flexible access to services. Location
changes and the exhibition of services have a particular
importance in mobile environments like ad hoc networks. Indeed, ad
hoc networks are wireless self-organizing networks of mobile nodes
and require no fixed infrastructure. The purpose of this paper is
the implementation of a new service location protocol, HCLFTP
(Hybrid Content Location Failure Tolerant Protocol), for ad hoc
networks within the "Network Simulator" environment.

Keywords: service advertisement, service location, mobile ad hoc
networks, hash table.

1. Introduction

For the deployment of an ad hoc network without infrastructure, we
can consider a distributed solution to enable users to extend
their communications beyond the scope of their radio interface.
Each user can relay messages to ensure that all users can join,
regardless of distance, provided there are enough users on the
path. The network is self-provided and supported by the
collaboration of all participants. Figure 1 shows an example.
Here, computer A wants to communicate with computer C. As they are
not in direct communication range, A will send its message to B (a
phone), which will in turn transmit the same message to C.
While it is easy to understand the mechanism of routers when there
are few objects, what should be done when they are proliferating?
How, for example in the case presented in Figure 2, can the router
to be used be identified in order to transmit a message from one
point of the network to another? Moreover, the changing needs of
mobile users have led to the emergence of new challenges. Despite
the constraints of ad hoc networks, the evolution of services no
longer allows static configurations of services on mobile devices.
Hence, it is necessary to design a protocol for the location and
dynamic deployment of services, to provide flexibility of
communication to the user [1].

Figure 1. Example of ad hoc network routing

Figure 2. A large scale ad hoc network

Mobile ad hoc networks are self-organized networks, with no
central control entity, having a dynamic topology governed by the
connection and disconnection of the nodes. This evolution of
networks towards dynamic, non-centralized architectures, and the
development of new types of services in addition to data exchange,
have led to several problems concerning the detection of contents
and services (see Figure 3). Furthermore, in order to access a
service, the user must at least know the network address of the
host that provides this service [5].
With wired communication, the solution is simple to find: just
have servers that identify the available services. As these
servers are always available, it suffices to ask them to get the
address of the host providing the desired service. To provide a
service, it is also easy to publish it with one of these servers
to make it available to the rest of the network [10]. We show in
Figure 4 the principle of service location with a central server.
In ad hoc networks, such centralization of data is inadequate.
Indeed, the servers that identify the services may be inaccessible
because of mobility.
If we consider that the network is only composed of mobile
entities of small size, it is unlikely that there is in the
network a node with the physical capabilities, in terms of memory,
energy and bandwidth, to allow it to store all the available
services and meet the requirements of every other network member.
It is therefore suitable to propose methods to distribute the
information in the network [2].

Figure 3. Discovery of a printer

Figure 4. Central search

Once the service has been found, it will be used. A new problem
arises when we move from wired networks to ad hoc networks. In
wired communication, once a connection is established, it remains
valid throughout the phase of service use. If this is not the
case, the network is considered down. By contrast, in ad hoc
networks, this reliability is no longer guaranteed. Indeed, it is
not uncommon that the mobility of nodes leads to the separation of
the network into several disjoint components. While most routing
protocols propose to change the paths when they become disabled,
once two nodes cannot physically be joined, it is sometimes too
late to try anything. It then becomes interesting to analyze the
state of the network to try to predict when nodes will be
physically disconnected. If the event is predicted well in
advance, it is not too late to respond. This reaction may be of
different natures: seeking to duplicate the service, looking for
another node providing an equivalent service, or strengthening the
connection. Mobile ad hoc networks are characterized by the
following [3].
- Dynamic topology: The mobile units of the network move freely
and arbitrarily. Hence the network topology may change at
unpredictable moments, in a fast and random way. The links of the
topology can be unidirectional or bidirectional.
- Limited bandwidth: One of the primary features of wireless
communication networks is the use of a shared communication
medium. Such sharing means that the bandwidth reserved for a host
is low.
- Energy constraints: The mobile hosts are powered by independent
power sources such as batteries. The energy parameter must be
taken into account in any control performed by the system.
- Limited physical security: Mobile ad hoc networks are more
affected by the security setting than conventional tethered
networks. Indeed, the physical constraints and limitations require
reducing the control of the transferred data.
- Lack of infrastructure: Ad hoc networks differ from other mobile
networks by the lack of any existing infrastructure and any kind
of centralized administration. The mobile hosts are responsible
for establishing and maintaining the network connectivity on an
ongoing basis.
In an ad hoc network, meeting the requirements and demands of
applications or users for many services and contents raises many
challenges, brought about by the distributed aspect and the use of
wireless communication interfaces. Obtaining, in a dynamic and
decentralized environment, a working environment with a quality
equivalent to that provided by a wired network is very difficult
but not impossible [11]. We present in this paper the basic
principles of HCLFTP (Hybrid Content Location Failure Tolerant
Protocol), which meets the following objectives: (i) it ensures
fault tolerance and improves the load distribution; (ii) the
problem of locating and routing data is solved more efficiently by
using a system based on hash functions for nodes and data; (iii)
the data structures required are small in size and can therefore
ensure the location of a data item fast enough; (iv) and finally,
the system implements both replication techniques to ensure data
persistence and caching mechanisms to improve data availability
[8]. This paper is organized as follows. In Section 2, we present
the main methods for locating content. Section 3 describes the
protocol HCLFTP, dedicated to service discovery in ad hoc
networks. In Section 4, we present the simulation results of this
approach in the simulation environment "Network Simulator" (NS).
Finally, we summarize our contributions and the future prospects
of our research work.

2. A survey of content localization protocols

The previous solutions for locating nodes and routing data are not
applicable at a large scale. Indeed, changes in such systems are
numerous and fast: a node may be present in a system for a period
of ten minutes and then disappear. New solutions are required, and
it is necessary to develop new mechanisms for tracking and routing
[4]. Content-based search systems can be classified based on their
different techniques for the localization and routing of data.

Figure 5. Search Content
2.1 Location with a centralized directory

In a centralized architecture, a single server directs all users.
Note that hierarchies appear, since several computers do not have
the same role as the others. Each time a user submits a query, the
central server creates a list of resources matching the query by
checking in its database the resources belonging to the users
connected to the network. One of the main advantages of this model
is the central index, used to locate resources quickly and
efficiently [7].

Figure 6. Centralized network

1: The node sends its query to the server.
2: If the server knows the node that can respond to the query, it
sends the address; otherwise it queries the connected nodes.
3: Download over a direct connection between the two nodes.
The advantage of this technique lies in the centralized index of
all directories, files or resources shared by the nodes of the
network. In general, the updating of this database is done in real
time, as soon as a new user connects. All users are required to be
connected to the server's network: a request therefore reaches all
users, making the search more relevant. The main problem, however,
is that this type of system allows a single point of entry into
the network, and is not immune to a server failure that may block
the whole application.

2.2 Hybrid location

The hybrid model involves super-nodes. A super-node in these
networks is a node that meets several criteria. These criteria
most often relate to the available bandwidth, CPU power, and
availability on the network. With this model, we use the
advantages of both types of networks (centralized and
decentralized). Indeed, its structure reduces the number of
connections on each server, and thus avoids bandwidth problems
[6].

Figure 7. Hybrid network

The operation of this model is similar to the centralized model.
The user sends a query to a server. The search for nodes goes
through all the super-nodes containing all the users' data. This
solution requires an identification of each node. It gives a list
of nodes hosting the service in the query's response. One then
connects directly to the corresponding node and starts accessing
the resources. The ring structure of the super-nodes allows for
load balancing and dilutes the risk of local failure and
interruption of service. Indeed, if a super-node is not available,
other servers carry out its tasks, and this will be transparent to
the user (automatic reconnection to another server). In connecting
all the nodes to these rings, we get the simplicity of a
centralized system with the robustness of a decentralized system.

2.3 Location by flooding

As a first step, each node looks for other nodes on the network to
announce its presence. Once integrated into the network, nodes
question each other. These requests remain active until the entire
network has been covered. The operating principle is as follows
(Figure 8): a node "A", with specific software (which acts as
client and server at the same time), connects to a node "B" also
equipped with this software. Thus "A" announces to "B" that it is
"alive". "B" relays this information to all its neighbor nodes:
"C", "D", "E" and "F". The latter relay the information in turn to
their connected nodes, and so on to all the nodes of the network.
Once "A" is recognized as "living" by the other members of the
network, it can search for content of interest in the directories
or shared resources of the other members of the network. The
request will be sent to all members of the network, starting with
"B", then all other members. If a node has the resource, it
forwards the information to "A". The latter can then open a direct
connection to that node and use the service.

Figure 8. Decentralized network
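The flooding principle just described can be summarized as a breadth-first relay of the request with duplicate suppression. The following minimal sketch (ours; the hop limit and the neighbors/has_resource callbacks are assumptions) illustrates it:

    from collections import deque

    # Illustrative sketch of location by flooding: node A's request is
    # relayed breadth-first over neighbor links until a node holding the
    # resource is found.
    def flood_search(start, neighbors, has_resource, max_hops=8):
        visited = {start}
        queue = deque([(start, 0)])
        while queue:
            node, hops = queue.popleft()
            if has_resource(node):
                return node                    # this node answers back to "A"
            if hops < max_hops:                # the hop limit bounds the flood
                for nxt in neighbors(node):
                    if nxt not in visited:
                        visited.add(nxt)
                        queue.append((nxt, hops + 1))
        return None                            # resource not found in the network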
2.4 Location with a distributed hash table

If we exclude the possibility of using services based on
centralized directories or on message flooding, we must consider
that information about resource locations is distributed, so that
each node has to maintain only a small amount of routing
information. The search for a resource then runs incrementally,
through the hop-by-hop transfer of the corresponding request to
the nodes which are better "informed" to respond. The nodes of
such a system must continually adapt to changes in their
environment. The following five ideas describe a set of mechanisms
for location with a distributed hash table [9].
- Identification of nodes and resources: each host is assigned a
numerical identifier calculated using a hash function applied to
its IP address. Each document or shared resource is also assigned
a numerical identifier (based on a hash function applied to its
content or its name).
- Distribution of node responsibilities: when a node is present in
the system, it is assigned the responsibility for several
resources.
- Organization of routing information: for reasons of scalability,
each node must maintain only a partial view of the network
topology in which it participates. Thus, each node knows a subset
of the nodes in the system.
- Lookup resolution: a host must be able to request access to a
document or a shared resource in the system. To do this, it must
know the value of the key corresponding to this resource. This
query is called a lookup. The result of a lookup in the system,
for a key k, is a reference to the node responsible for key k. To
resolve a lookup, a host starts by searching, among the hosts it
knows, for the one most likely to be related to the key
corresponding to the searched resource. It forwards the request to
this node, which performs the same operation. The search spreads
from node to node and ends when it reaches the host that is
actually responsible for the key. The node issuing the request is
then informed of the identity of the node responsible for the key
(a sketch of this resolution follows the list).
- Management of node arrivals and departures: during the arrivals
and departures of nodes, the system adapts and reassigns the
responsibilities of the nodes to remain in a coherent state. To
enter the system, a node only needs to have access to a node
already present.
is the number of content hosted in the same area and Th
3. Proposed protocol: HCLFTP means the threshold below which a uniform distribution of
the load is ensured. Indeed, nh*nch represents the cost of
In this section, we present HCLFTP designed specifically for dissemination of nh contents to nch nodes, this distribution is
ad hoc networks. Before describing the basic concepts and based on a simple method of flooding. The decomposition of
architecture of HCLFTP, we present the assumptions made the network into zones is applied whenever nh*nch exceeds
about the ad hoc environment considered in the design of Th. This recursive decomposition stops once nh* nch
this protocol. The target environment of HCLFTP is a becomes below the threshold Th. The advantage of using a
mobile ad hoc network, dense and scale. Indeed, the nodes recursive contents’ dissemination is the uniform distribution
can connect, disconnect, or move in the network at any time. of load in dense and non-uniform networks. Indeed,
All nodes know their own location information using GPS choosing an area of a network, which is potentially broken
or by using relative coordinate. The objective of HCLFTP is down into different areas, requires knowledge of local
to provide an effective mechanism for localization of content information concerning the density and the number of
for dense large and scale ad hoc networks. To achieve this contents to host. Our goal is to use a protocol for content
objective, several components have been implemented: a localization in which the decision is recursively delegated to
hash function to connect the content identifier to the appropriate nodes. In the following, we present the
corresponding area, a recursive function for the split and mechanism of the network decomposition in the simplest
fusion of the network, and a function for dissemination and case where the topology of the network is uniform and the
localization of content based on geographical properties. hash result is evenly distributed between 0 and n-1. In a first
step, if the inequality nc1 * n1> Th is checked (n1and nc1
mean respectively the number of contents and the number of
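As a hedged illustration of this mapping and of the load test of equation (1), and not the authors' implementation, the following minimal Python sketch assumes a SHA-1 digest reduced modulo n; all names are hypothetical:

import hashlib

def zone_id(content_id: str, level: int, n: int) -> int:
    """Map a content identifier (plus decomposition level h) to a zone in 0..n-1."""
    digest = hashlib.sha1(f"{content_id}|{level}".encode()).hexdigest()
    return int(digest, 16) % n

def needs_split(n_h: int, nc_h: int, threshold: int) -> bool:
    """Load test of equation (1): split zone h once nh * nch reaches the threshold Th."""
    return n_h * nc_h >= threshold

# Example: content "song.mp3" is published in zone z_k at level h = 0.
k = zone_id("song.mp3", 0, n=9)
print(k, needs_split(n_h=40, nc_h=30, threshold=1000))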

In a first step, if the inequality nc1 * n1 > Th holds (where nc1 and n1 are respectively the number of contents and the number of nodes in the network), the network is split into n equal areas. Each zone then contains n1/n nodes that should host nc1/n contents. If, in a second step, (nc1/n) * (n1/n) > Th, i.e. nc1 * n1 / n^2 > Th, each area is again divided into n equal areas. Another decomposition is applied in the case where nc1 * n1 / n^4 > Th, and so on. The number of decompositions necessary to distribute the network load can easily be found by calculating the minimum number i that satisfies the following relationship:

nc1 * n1 / n^(2i) < Th                               (2)

This means that i satisfies:

i = int[ log( nc1 * n1 / Th ) / ( 2 * log n ) ] + 1  (3)
In HCLFTP, the decomposition of the network is delegated to the central region of each zone. We call the central region the set of nodes that are located within a short distance (d) from the center of gravity of the area. Each node of the central region maintains an estimate of the number of nodes and contents within its zone. Thus, it can assess whether the decomposition threshold of the network is reached or not. To ensure a recursive decomposition of the network, the hash function above is extended to take two parameters: the content c and the decomposition level h. If the network is divided into equal areas, the content c is hashed with the parameters (c, h = 0), resulting in an ID number k between 0 and n-1. Thus, the content will be deployed in the area zk. In order to evenly distribute the load in the area zk, we divide it into n equivalent areas zk,0 … zk,n-1. The content c will then be hashed with the parameters (c, h = 1) to a number j between 0 and n-1, and must therefore be deployed in the area zk,j. This process continues until the decision to deploy the content in the area zk,j is taken. The reason for the introduction of a hierarchy level h in the hash function is to efficiently distribute content in the case of a decomposition of the network. In addition, hashing the content alone does not allow the distribution of content within the area at a decomposition level h ≥ 1. In general, at a decomposition level h, the contents c1 … ci combined with h will be hashed to the same identifier zx. At decomposition level h+1, the contents c1 … ci combined with h+1 will not necessarily be hashed to the same identifier zx,y. We can thus achieve a good distribution of content within the zone zx.

3.3 Location and dissemination of content

To locate content in a network that is based on HCLFTP, the user U1 sends its first request in one of the four geographical directions (north, south, east or west) based on geographical routing. In a dense network, the request will be routed to a node which knows the central region of the network hosting the content. The first node in the central region which receives the request has the responsibility to decide whether the query will be resolved directly in the current area or forwarded to the next level of the hierarchy of the area. Several approaches use hash functions with parameters such as the content and the current level of the hierarchy to determine to which sub-area the query will be forwarded. Suppose that the hash value is equal to k; the query will then be routed to the area zk, where the same methodology will be used. In other words, the request will either be resolved or be directly forwarded to the next level, and so on, until the content is located. To announce content, the provider sends a notification message to the network in the same manner as for the localization of content. When the announcement message finally reaches the target area, the mechanisms of content replication can be applied within this area to improve the availability of the content. The mechanisms of localization and deployment of content are completely decentralized. Moreover, only a limited number of nodes are involved in the routing and resolution of announcement messages and localization requests. Since not all nodes are required to maintain routing information, and a comprehensive understanding of the whole network is not required, HCLFTP can be regarded as scalable for large ad hoc networks.

3.4 Function merge of the network

The problem that arises during the decomposition of the network towards a uniform distribution of the load is: when and how to deploy a merge? We note that the main motivation for the decomposition of the network is to maintain a reasonable cost for content distribution within an unstructured area. However, this is possible only with a hierarchical structure in each zone (recursive division of the areas), which generates an additional cost for the delivery of messages to the next sub-area. For this reason, we propose to deploy a merging protocol if the query can be resolved directly in the zone itself. Since each node in the central region knows the number of nodes n and the number of contents nc within the zone, it can trigger the decomposition of the area only if nc*n > Th. We also propose that the merging process is triggered only if nc*n < Th - H (H > 0), in order to maintain the stability of the splitting/merging of the network. In addition, each node in the central region maintains information regarding the cost of disseminating content within the zone. Thus, merging consists of delivering the content of the previously split sub-areas and disseminating it in the current area. This intrinsic feature allows the passage from splitting to merging, locally, and vice versa, without any additional cost.

3.5 Designation of an area

Each announcement/location message must finally be redirected towards the central region of the current area and must be resolved by one of the nodes located in this region. For this reason, we need a mechanism that allows the designation of the central region of each zone at a fairly reasonable cost.

- Election of "corner nodes" for a rectangular network: if prior knowledge of the positions of all nodes were possible, the delimitation of the area could be made simply by identifying all the nodes that are on the perimeter of the network. But having complete knowledge of the positions of all nodes in a mobile ad hoc environment is costly and even impossible. Therefore, a distributed algorithm for the delimitation of the area based on local geographical positions is developed in HCLFTP. The main idea is to use specific nodes called "corner nodes" to delimit the areas. We assume that all nodes are distributed in a rectangular region. For a rectangular network, if a node does not receive messages from directions making an angle within the interval 90°±α, then the node proclaims itself a "corner node". The advantage of this algorithm for electing the "corner nodes" is that it is based on the positions of the node's direct neighbors. Since geographic routing is based on this information, the positions of the direct neighbor nodes are already available, and thus the cost of the messages generated for the election of the "corner nodes" is very low. Following its election, a "corner node" informs all the other "corner nodes" by sending a "Corner Announcement" message to the "corner nodes" of the same area. This message is sent in two geographic directions (east/south, west/south, east/north or west/north) according to its location. If a "corner node" receives a "Corner Announcement" message, it checks whether the message contains a new "corner node". If a new "corner node" is found, it updates its local list of "corner nodes" and sends to its neighboring "corner nodes" a new "Corner Announcement" message which contains the list of all the "corner nodes" it knows, as well as their positions. After a stabilization period, all the "corner nodes" will be identified, and their respective coordinates allow estimating the position of the center of gravity of the area. The election of the "corner nodes" is periodic due to the mobility of the nodes. Similarly, "Corner Announcement" messages will be periodically sent to the neighboring "corner nodes".
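The 90°±α rule is only described informally above; one possible reading is a sector test on the bearings of a node's known neighbors. A hedged sketch under that reading, with hypothetical names:

import math

def is_corner_node(node, neighbors, alpha_deg):
    """One reading of the 90°±alpha test: a node proclaims itself a
    "corner node" when all its neighbors fall inside an angular sector
    no wider than 90° + alpha (messages arrive from a single quadrant)."""
    if len(neighbors) < 2:
        return True  # an isolated or edge node trivially passes the sector test
    bearings = sorted(math.degrees(math.atan2(y - node[1], x - node[0])) % 360
                      for (x, y) in neighbors)
    # Widest empty gap between consecutive bearings (wrapping around 360°).
    gaps = [(bearings[(i + 1) % len(bearings)] - b) % 360
            for i, b in enumerate(bearings)]
    return 360 - max(gaps) <= 90 + alpha_deg

# A node at the south-west corner of the area sees neighbors only to its NE:
print(is_corner_node((0, 0), [(1, 0), (1, 1), (0, 1)], alpha_deg=20))  # True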
In HCLFTP, the central region of a zone is defined by its center of gravity. Each "corner node" has a local list of all the other "corner nodes". Thus, the position of the center of gravity PCG can be calculated without any further exchange of messages. This position is then propagated to all nodes in the area. The propagation of the position of the center of gravity is performed as follows. Each "corner node" sends a "Gravity Announcement" message containing the coordinates of the center of gravity along the perimeter and along the diagonals. At each hop, the neighbor node is the only node that is supposed to receive this message, but all nodes that hear the same message will also be informed of the coordinates of the center of gravity. However, the neighbor node is solely responsible for the delivery of the announcement message. Nodes located within a distance d ≤ dh from the center of gravity are considered as belonging to the central region of the area; consequently, they are responsible for routing requests and advertisements. We recall that the advertisement/location messages will be sent by the provider/client in one of the four geographic directions and will be intercepted by one of the nodes that know the coordinates of the center of gravity. The messages will be forwarded towards the center of gravity and will be resolved by the first node on the routing path belonging to the central region of the zone. The latter will decide whether to forward the message to another level in the area. In addition, the nodes in the central region are also responsible for splitting the area and updating the current level of decomposition h. To achieve this, each node in the central region periodically sends a message that contains the number of content advertisements received. Although the number of contents can easily be found within a zone, estimating the number of nodes within the zone is not as simple. We can include in the deployment of the network decomposition a cost estimate for the localization of contents, which itself depends on the number of nodes.

4. Simulation of HCLFTP

We conducted a series of simulations in the Network Simulator "NS" by considering two metrics: the number of perimeter nodes elected and the number of confusions. We designate by the number of confusions the number of perimeter nodes elected which are not actually located on the perimeter. We have assumed in the simulations that each content size is 512 bytes. These metrics were measured for different values of the angle α. Each result presented is an average of the results obtained on three different topologies. Nodes are deployed randomly in an area of 1000m*1000m. Each simulation lasts 10 s. Table 1 shows the results for different network sizes. We found that the number of nodes elected as perimeter nodes exceeds the number of nodes actually on the perimeter. The confusion increases with the angle α. Table 2 shows the change in the ratio between the actual perimeter nodes (real PN) and the elected perimeter nodes (elected PN) as a function of the angle α.

Table 1. Number of perimeter nodes according to α

Nodes in the network   α (degree)   Elected PN   α (degree)   Elected PN
100                    40           20           60           25
200                    40           25           60           30
300                    40           30           60           70
400                    40           60           60           90
500                    40           70           60           110
600                    40           100          60           125
700                    40           110          60           140
800                    40           130          60           160
900                    40           155          60           180
100                    80           25           100          30
200                    80           35           100          60
300                    80           85           100          120
400                    80           95           100          150
500                    80           120          100          180
600                    80           140          100          220
700                    80           156          100          245
800                    80           180          100          300
900                    80           200          100          380
1000                   80           260          100          425

Table 2. The ratio real PN/elected PN according to α

Nodes in the network   α (degree)   real PN/elected PN   α (degree)   real PN/elected PN
100                    40           0.90                 60           0.80
200                    40           0.85                 60           0.60
300                    40           0.65                 60           0.55
100                    80           0.71                 100          0.60
200                    80           0.65                 100          0.50
300                    80           0.45                 100          0.36

Furthermore, we found that the answer to a query for extracting the same content is not always provided by the same node but by one of the nodes in the central region. In addition, the mobility or failure of one of the nodes in the central region does not cause inaccessibility of the content, because there are multiple nodes in the central region able to fulfill the request. However, the response time is not always the same for the same request; it depends on the current traffic in the network and on the location of the node responding to the request.

5. Conclusion and future work

The sources of information are now spread across networks. However, access to these sources poses challenges for the users and applications that need it. Even though several solutions have been proposed for access to sources, they still lack the support for dynamism that is essential in mobile environments such as mobile ad hoc networks. The protocols for service discovery mechanisms should provide autonomous management of mobility, quality of service and fault tolerance. In this paper, we studied the features of some localization protocols in dynamic environments. The comparison of these protocols aimed to deduce the ideal mechanism for the discovery of data. Building on these different approaches, we presented a new solution for the discovery of services deployed on mobile sources. The HCLFTP protocol is intended for locating services in a central region rather than at a single node in mobile networks. It is based on a dynamic hash table. The prospects of this work are the evaluation of this protocol on a real platform and the exploration of the various proposals currently available for the description of data sources.

References

[1] A. Oram, "Gnutella", chapter 8 in Peer-to-Peer: Harnessing the Power of Disruptive Technologies, pages 94-122, O'Reilly, May 2001.
[2] I. Clarke, O. Sandberg, B. Wilez, T.W. Hong, "Freenet: a distributed anonymous information storage and retrieval system", Designing Privacy Enhancing Technologies, International Workshop on Design Issues in Anonymity and Unobservability, Lecture Notes in Computer Science, pp 46-66, Berkeley, USA, Springer, July 2000.
[3] G. Zussman, A. Segall, "Energy efficient routing in ad hoc disaster recovery networks", in Proceedings of IEEE INFOCOM, San Francisco, USA, 2003.
[4] C. Bettstetter, C. Renner, "A comparison of service discovery protocols and implementation of the Service Location Protocol", in Proceedings of EUNICE 2000, Twente, Netherlands, September 2000.
[5] S. Cheshire, "DNS-based Service Discovery", internet-draft, December 2002.
[6] J. Govea, M. Barbeau, "Results of comparing bandwidth usage and latency: Service Location Protocol and Jini", Workshop on Ad Hoc Communications, Bonn, Germany, September 2001.
[7] A. Rao, C. Papadimitriou, S. Shenker, I. Stoica, "Geographic routing without location information", in Proceedings of the 9th Annual International Conference on Mobile Computing and Networking, ACM Press, pp 96-108, 2003.
[8] E. Cohen, S. Shenker, "Replication strategies in unstructured peer-to-peer networks", in ACM SIGCOMM Conference, August 2002.
[9] T. Hara, Y. Loh, S. Nishio, "Data replication methods based on the stability of radio links in ad hoc networks", in 14th International Workshop on Database and Expert Systems Applications (DEXA'03), September 2003.
[10] T. Hara, "Effective replica allocation in ad hoc networks for improving data accessibility", in Proceedings of IEEE INFOCOM 2001, pp 1568-1576, April 2001.
[11] A. Datta, M. Hauswirth, K. Aberer, "Updates in highly unreliable, replicated peer-to-peer systems", in 23rd International Conference on Distributed Computing Systems (ICDCS), May 2003.

Author Profile

Maher Ben Jemaa received the Engineering degree from the National School of Computer Science in Tunisia in 1989, the DEA degree in Computer Science from the University of Nice, France, in 1989 and the PhD in Computer Science from INSA of Rennes, France, in 1993. He is currently an Associate Professor in Computer Science at the National School of Engineers of Sfax. He carries out his research in the ReCAD research unit (www.redcad.org). His research topics concern mobile communication, routing and fault tolerance in wireless networks.

Wavelet based Watermarking Technique using Simple Preprocessing Methods

S. MaruthuPerumal 1, B. Vijaya Kumar 2, L. Sumalatha 3 and Dr. V. Vijaya Kumar 4

1 Research Scholar, Dr MGR University, Chennai, T.N., India; Associate Professor & Head, Department of IT, Godavari Institute of Engineering & Technology, Rajahmundry, A.P., India. maruthumail@gmail.com
2 Professor & Head, Department of CSE, Lords Institute of Engineering & Technology, Hyderabad, A.P., India. vijaysree.b@gmail.com
3 Associate Professor & Head, Department of CSE, University College of Engineering, JNTU Kakinada, A.P., India. sumapriyatham@yahoo.com
4 Dean & Professor, Department of CSE&IT, Godavari Institute of Engineering & Technology, Rajahmundry, A.P., India. vakulabharanam@hotmail.com

Abstract: The sudden increase in internet applications has led people into the digital world. Digital watermarking facilitates efficient distribution, reproduction and manipulation over networked information systems for images, audio clips, and videos. To address this, the present paper proposes a digital image watermarking technique based on various preprocessing methods. The watermark is inserted on selected pixels based on preprocessing methods applied on an L-level wavelet transformed image. The level L is chosen based on the size of the watermark and the window. To test the robustness of the proposed method, various peak signal noise ratios are applied. The experimental results indicate the imperceptibility, security, unambiguity and robustness of the present method.

Keywords: Wavelet Transformation, Preprocessing, Peak Signal Noise Ratio.

1. Introduction

The great advancement that has taken place in the field of the Internet has facilitated the transmission, wide distribution, and access of multimedia data in an effortless manner. The use of digitally formatted image and video information is rapidly increasing along with the development of multimedia broadcasting, network databases and electronic publishing [3, 4, 5, 6, 19]. All these developments come with a serious drawback: if the media data is copyrighted, unlimited copying may cause considerable financial loss, and the protection of intellectual property rights has become an important issue in the network-centric world. One effective solution to the unauthorized distribution problem is the embedding of digital watermarks into multimedia data [10]. New progress in digital technologies, such as compression techniques, has brought new challenges to watermarking. Various watermarking schemes that employ different techniques have been proposed over the last few years [1, 7, 9, 10, 13-19]. To be effective, a watermark must be imperceptible within its host, easily extracted by the owner, and robust to intentional and unintentional distortions [2]. In particular, the DWT has wide applications in the area of image authentication, because it has many properties which can make the watermarking process robust. In recent times wavelet based digital watermarking has become a very active research area. Watermarking approaches are classified into two categories: spatial domain and transform domain methods. Transform domain watermarking techniques are more robust in comparison to spatial domain methods. Among the transform domain watermarking techniques, Discrete Wavelet Transform (DWT) based techniques are gaining popularity because of their superior modeling of the Human Visual System [2]. To achieve copyright protection, a watermarking scheme for digital images must have the following properties: (1) Imperceptibility or a low degree of obtrusiveness: it should be extremely difficult to distinguish between the host image and the watermarked image, and the quality of the image should not be compromised. (2) Security: a watermark should be statistically undetectable. The watermarking algorithm must be public, with security depending only on keeping the key secret [11, 12, 15]. Only the owner of the host image should be able to extract or remove the embedded watermark. (3) Fast embedding/retrieval: the speed of a watermark embedding algorithm is important for applications where documents are marked 'on the fly'. (4) No reference to the original document: for some applications, it is necessary to recover the watermark without requiring the original, unmarked document (which would otherwise be stored in a secure archive). (5) Multiple watermarks: it may also be desirable to embed multiple watermarks in a document; for example, an image might be marked with a unique watermark each time it is downloaded [8]. (6) Robustness: when the quality of the host image is degraded by attacks such as blurring, sharpening, scaling, cropping, noising, or JPEG compression, it should still be possible to retrieve and identify the embedded watermark. The watermark must be retrievable if common image processing or geometric distortions are performed. (7) Unambiguity: the retrieved watermark should clearly identify the copyright owner of the image. In addition, ideal watermarking schemes should also

be able to solve the problem of multiple claims of ownership.

The rest of this paper is organized as follows: in Section 2, the wavelet transformation of images is discussed in detail. The proposed method, along with the various preprocessing methods, is explained in Section 3. In Section 4, the performance of the proposed method is analyzed. Finally, Section 5 deals with the conclusions.

2. Wavelet Transformation of Images

The wavelet transformation is a mathematical tool for decomposition. The wavelet transform is identical to a hierarchical sub band system, where the sub bands are logarithmically spaced in frequency. The basic idea of the DWT for a two-dimensional image is described as follows. An image is first decomposed into four parts based on frequency sub bands, by critically sub sampling the horizontal and vertical channels using sub band filters; these are named the Low-Low (LL), Low-High (LH), High-Low (HL), and High-High (HH) sub bands, as shown in figure 1. To obtain the next coarser scale of wavelet coefficients, the sub band LL is further decomposed and critically sub sampled. This process is repeated several times, as determined by the application at hand. The block diagram of this process is shown in figure 1. Each level has various band information, namely the low-low, low-high, high-low, and high-high frequency bands. Furthermore, from these DWT coefficients, the original image can be reconstructed. This reconstruction process is called the inverse DWT (IDWT). If C[m,n] represents an image, the DWT and IDWT for C[m,n] can be defined by implementing the DWT and IDWT on each dimension separately.

Figure 1. Representation of L-Levels of DW Transformation

3. Methodology

To carry out the proposed method, the compressed image is divided into non-overlapping blocks of size 0…m-1 x 0…m-1, where m-1 is an integer. A preprocessing method is applied on the selected window. Based on the preprocessing method, the hit pixel is decided. The hit pixel is the pixel where the watermark will be inserted. This process is applied on the L-level wavelet transformed image. The level L is chosen by the principle that the wavelet image should contain at least double the number of pixels of the required watermark text. The entire process is explained with the help of the flow chart given in figure 2: the cover image C of N*M*b bits is wavelet-compressed up to four times until it can hold the W = 2*X*b watermark bits (X characters), divided into non-overlapping equal blocks, and the hit pixel selected by a preprocessing method receives the watermark. Based on the flowchart, a block diagram for the lena image is given in figure 3. The block diagram of figure 3 clearly indicates the process of inserting the watermark text in the lena image after three levels of wavelet transform on the LL sub image. The watermark can be inserted in any of the LL, LH, HL or HH sub bands. The same process can be applied with any wavelet transform.

Figure 2. Flowchart for the proposed scheme
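The paper gives no code for this step; as a hedged illustration only, the following minimal sketch assumes PyWavelets (pywt) for the 3-level DWT and the mean rule of equation (1) for selecting the hit pixel. The 4x4 window size and all names are hypothetical:

import numpy as np
import pywt

def hit_pixel(block: np.ndarray):
    """Mean preprocessing rule: pick the pixel whose grey level is closest
    to the integer block mean (the 'hit pixel' that will carry a watermark bit)."""
    mean = int(block.sum() / block.size)
    idx = np.abs(block - mean).argmin()
    return np.unravel_index(idx, block.shape)

image = np.random.randint(0, 256, (64, 64)).astype(float)
coeffs = pywt.wavedec2(image, "haar", level=3)   # 3-level DWT
ll3 = coeffs[0]                                   # LL sub-band carrying the watermark
for block in (ll3[r:r+4, c:c+4]                   # non-overlapping 4x4 windows
              for r in range(0, ll3.shape[0], 4)
              for c in range(0, ll3.shape[1], 4)):
    print(hit_pixel(block))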

Figure 3. Block diagram of watermark insertion on wavelet

3.1 Preprocessing Methods

For the selection of the hit pixel, various preprocessing methods are applied. The applied preprocessing methods are useful in smoothing, reducing noise, contrast and intensity, etc. The preprocessing methods depend on the image characteristics in a predefined region about each pixel in the image. The preprocessing methods used in the present paper are the mean, median, mode, variance and standard deviation (SD), as shown in equations (1) to (5) respectively:

Mean = int[ ( Σ_{i=0..z-1} Σ_{j=0..z-1} P(i,j) ) / z ]                       (1)

Median = middle value of ASC( P(i,j), for all i,j = 0..z-1 )                 (2)

Mode = most frequent value of ( P(i,j), for all i,j = 0..z-1 )               (3)

Variance = int[ ( Σ_{i,j} P(i,j)^2 ) / z - ( ( Σ_{i,j} P(i,j) ) / z )^2 ]    (4)

SD = [ ( Σ_{i,j} P(i,j)^2 ) / z - ( ( Σ_{i,j} P(i,j) ) / z )^2 ]^(1/2)       (5)

where P(i,j) represents the grey level value at location (i,j) of the window and z is the number of pixels in the block.

Figure 4 shows the grey level values of an image of size 6 x 6, whereas figure 5 shows the hit pixels of figure 4, marked (with circles in the original) based on the mean preprocessing method.

79 86 74 96 81 76
74 75 82 86 84 82
76 75 79 84 82 79
76 79 81 83 80 76
78 77 74 72 70 74
82 80 76 79 78 80

Figure 4. Grey level values of an image of 6 x 6

79 86 74 96 81 76
74 75 82 86 84 82
76 75 79 84 82 79
76 79 81 83 80 76
78 77 74 72 70 74
82 80 76 79 78 80

Figure 5. Hit pixels of the original image of figure 4 (hit pixels circled in the original)

4. Experimental Result and Analysis

For the experimental analysis, different images of size 64x64 are selected and the proposed method is applied. The cover images considered in the present paper are the brain image, lena image, barbara image, cameraman image, and baboon image, shown in figures 6(a) to 6(e) respectively. Figures 7(a) to 7(e) show the 3-level wavelet compressed images. Figures 8(a) to 8(e) show the wavelet decomposed images with the watermark text "MGRU" embedded. Figures 9(a) to 9(e) show the reconstructed watermarked images.

To measure the quality of the watermarked images, the peak signal-to-noise ratio (PSNR) is used, which is given in equation (6):

PSNR(C, W') = 10 log10[ 255^2 * M * N / Σ_{i=1..M} Σ_{j=1..N} ( f(x_i, y_j) - f'(x_i, y_j) )^2 ]    (6)

where C is the cover image and W' is the watermarked image, with dimensions M x N. The PSNR is applied to all the cover images of figures 6(a) to 6(e) and the watermarked images of figures 9(a) to 9(e), and the results are tabulated in table 1. Table 1 indicates the PSNR values for all the proposed preprocessing methods. From table 1 it is clearly evident that all the proposed preprocessing methods yield above 50 dB, which indicates high robustness.
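For reference, a small sketch of equation (6), assuming 8-bit grey-scale images held as NumPy arrays:

import numpy as np

def psnr(cover: np.ndarray, marked: np.ndarray) -> float:
    """Equation (6): PSNR in dB between the cover image C and watermarked image W'."""
    mse = np.mean((cover.astype(float) - marked.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# Values above ~50 dB, as in Table 1, indicate a visually imperceptible mark.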

Table 1: Five different reconstructed images expressed in PSNR (dB) for different methods

Preprocessing Method   Brain Image   Lena Image   Barbara Image   Cameraman Image   Baboon Image
Mean                   53.88         56.19        54.15           53.4              52.76
Median                 53.40         55.05        53.18           53.88             52.97
Mode                   53.40         54.15        53.88           53.88             55.05
Variance               54.43         53.88        55.40           53.40             55.40
Standard Deviation     54.43         53.88        53.63           53.88             54.73

Figure 6. The cover images a) Brain Image b) Lena Image c) Barbara Image d) Cameraman Image e) Baboon Image

Figure 7. Compressed cover image a) Brain Image b) Lena Image c) Barbara Image d) Cameraman Image e) Baboon Image

Figure 8. Compressed watermarked image a) Brain Image b) Lena Image c) Barbara Image d) Cameraman Image e) Baboon Image

Figure 9. Reconstructed watermarked image a) Brain Image b) Lena Image c) Barbara Image d) Cameraman Image e) Baboon Image

The reconstructed watermarked images of figures 9(a) to 9(e) clearly indicate the clarity, imperceptibility and robustness of the image when compared to figures 6(a) to 6(e).

5. Conclusion

The PSNR values clearly indicate the high robustness of the proposed method. The proposed preprocessing techniques can be extended to any window size, and the watermark content may also be increased from a minimum of two characters to a maximum of any length, depending on the size of the image. The advantage of the preprocessing methods for selecting the hit pixel over other methods on the wavelet image is that the important characteristics of the image are maintained without any loss of image content or information in the selected region.

Appendix A: Processing Methods

Acknowledgement

The authors would like to express their gratitude to Sri K.V.V. Satyanarayana Raju, Chairman, and Sri K. Sasi Kiran Varma, Managing Director, Chaitanya group of Institutions, for providing the necessary infrastructure. The authors would like to thank Dr MGR University Chennai for the suggestions and guidelines given, and the anonymous reviewers for their valuable comments.

References

[1] Aboofazeli M., G. Thomas and Z. Moussavi, "A wavelet transform based digital image watermarking scheme," in Proc. IEEE CCECE, vol. 2, pp. 823-826, May 2004.
[2] Adhipathi Reddy A., B.N. Chatterji, "A new wavelet based logo-watermarking scheme", Pattern Recognition Letters 26 (2005) 1019-1027.
[3] Andreja Samcovic, Jan Turan, "Attacks on Digital Wavelet Image Watermarks", Journal of

Electrical Engineering, Vol 59, No. 3, 2008, pp. 131-138.
[4] Bojkovic Z. et al., "Multimedia Contents Security: Watermarking Diversity and Secure Protocols", Serbia and Montenegro, Oct 1-3, 2003.
[5] Christine I. Podilchuk, Edward J. Delp, "Digital Watermarking Algorithms and Applications", IEEE Signal Processing Magazine, 2001.
[6] Gwo-Chin Tai and Long-Wen Chang, "A Novel Public Digital Watermarking for Still Images based on Encryption Algorithm", IEEE, 2003.
[7] Guzman V. H., M. N. Miyatake, and H. M. P. Meana, "Analysis of a wavelet-based watermarking algorithm," in Proc. IEEE CONIELECOMP, pp. 283-287, 2004.
[8] Jonathan K. Su, Frank Hartung and Bernd Girod, "Digital Watermarking of Text, Image and Video Documents", Computers & Graphics, Vol. 22, No. 6, pp. 687-695, 1998.
[9] Kundur D. and Hatzinakos D., "Digital watermarking using multiresolution wavelet decomposition," in Proc. IEEE ICASSP, vol. 5, pp. 2969-2972, May 1998.
[10] Liang-Hua Chen and Jyh-Jiun Lin, "Mean quantization based image watermarking", Image and Vision Computing 21 (2003) 717-727.
[11] Juan R. Hernandez Martin, Lysis SA et al., "Information Retrieval in Digital Watermarking", IEEE Communications Magazine, 2001.
[12] Liu Tong, Qiu Zheng-ding, "The Survey of Digital Watermarking based Image Authentication Techniques", ICSP 2002 Proceedings.
[13] Meerwald P. and A. Uhl, "A survey of wavelet-domain watermarking algorithms," in Proc. SPIE, vol. 4314, pp. 505-516, 2001.
[14] Mong-Shu L., "Image compression and watermarking by wavelet localization," Intern. J. Computer Math., vol. 80(4), pp. 401-412, 2003.
[15] Wang S-H. and Lin Y-P, "Wavelet tree quantization for copyright protection watermarking," IEEE Transactions on Image Processing, vol. 13, pp. 154-165, Feb. 2004.
[16] Wang Y., J. F. Doherty, and R. E. Van Dyck, "A wavelet-based watermarking algorithm for ownership verification of digital images," IEEE Trans. Image Processing, vol. 11, pp. 77-88, 2002.
[17] Xia-mu Niu and Sheng-he Sun, "Adaptive Gray Level Digital Watermark", Proceedings of ICSP 2000.
[18] Yuk Ying Chung and Man To Wong, "Implementation of Digital Watermarking System", IEEE, 2003.
[19] Zhe-Ming Lu, Chun-He Liu, et al., "Image Retrieval and Content Integrity Verification Based on Multipurpose Image Watermarking Scheme", International Journal of Innovative Computing, Information and Control, Vol. 3, Number 3, June 2007.

Authors Profile

S. MaruthuPerumal received his M.E. in Computer Science and Engineering from Sathyabama University, Chennai, in 2005. He has eleven years of teaching experience. At present he is working as Associate Professor and Head, Department of IT, Godavari Institute of Engineering and Technology, Rajahmundry. He is pursuing his Ph.D at Dr MGR University Chennai under the guidance of Dr V. Vijaya Kumar. His research interests include image processing, digital watermarking, steganography and security. He is a life member of ISCA and IAENG.

B. Vijaya Kumar completed his M.S in CSE from DPI, Donetsk, USSR, in 1993. He worked as a Software Engineer in Serveen Software Systems Pvt. Ltd., Secunderabad, India, for four years (1993-1997). After that he worked as Sr. Assistant Professor in JBIET, Hyderabad, for three years and later joined the Royal Institute of Technology & Science, Hyderabad, as Associate Professor, where he worked for four years. Presently he is working as Professor & Head of the CSE Department in Lords Institute of Engineering & Technology, Hyderabad, India. He is pursuing his Ph.D. in Computer Science under the guidance of Dr Vakulabharanam Vijaya Kumar. He is a life member of CSI, ISTE, NESA and ISCA. He has published more than 10 research publications in various national and international conferences, proceedings and journals.

L. Sumalatha completed her B.Tech from Acharya Nagarjuna University and her M.Tech in CSE from JNT University Hyderabad. She is working as Head, Department of CSE, College of Engineering, JNT University Kakinada. She has nine years of teaching experience. She is pursuing her Ph.D from JNT University Kakinada. Her research areas include network security, digital imaging and digital watermarking.

Vakulabharanam Vijaya Kumar received the integrated M.S. Engg. degree from Tashkent Polytechnic Institute (USSR) in 1989. He received his Ph.D. degree in Computer Science from Jawaharlal Nehru Technological University (JNTU) in 1998. He served JNT University for 13 years as Assistant Professor and Associate Professor and taught courses for M.Tech students. He has been Dean of the Dept of CSE and IT at Godavari Institute of Engineering and Technology since April 2007. His research interests include image processing, pattern recognition, network security, steganography, digital watermarking, and image retrieval. He is a life member of CSI, ISTE, IE, IRS, ACS and CS. He has published more than 120 research publications in various national and international conferences, proceedings and journals.

An Assimilated Approach for Statistical Genome Streak Assay between Matriclinous Datasets

Hassan Mathkour 1, Muneer Ahmad 2 and Hassan Mehmood Khan 3

King Saud University, Department of Computer Science,
College of Computer and Information Sciences,
P.O. Box 51178, Riyadh 11543, Saudi Arabia
1 binmathkour@yahoo.com
2 muneerahmadmalik@yahoo.com
3 hasmkh@gmail.com

Abstract: Genome sequence analysis of genetic datasets aided by ORF (Open Reading Frame) techniques has recently become an interesting area of research for bioinformatics investigators. There is a strong research focus on comparative analysis between genetic behaviors across a variety of different species. In contrast to complete genome sequence analysis, scientists are now trying to apply layered analysis to get a better picture of the relatedness among genetic datasets, which helps to better understand species. We present an ORF statistical analysis for the genetic datasets of the species Chimaera Monstrosa and Poly Odontidae. To carry out this analysis, we use a hybrid approach that combines a generic scheme for statistical analysis with a specific approach designed for performance. At the first stage, the genetic datasets are refined for better usage at the next level. These sets are then passed through layers of filters that perform DNA-to-protein translation. Statistical correlation is performed during this translation. This layered architecture helps in better understanding the degree of similarity and the differences in genomic sequences.

Keywords: Open Reading Frame, codon count, amino acid, preprocessing filter, Nucleotide

1. Introduction

Due to the existing and continuously growing bulk of biological data coming from genome projects and experiments nowadays, protein structure prediction and its systematic translation need an efficient and effective way to sequence, analyze and compare coded biological DNA sequence information. Genome sequence analysis is directly related to sequence correlation and alignment. Sequence similarity is a way to predict the functional similarity among genes and has been used as a tool for functional prediction. The analysis and correlation of DNA sequences and genes are useful for finding out how these genes are organized and what the similarities and differences are [1]. These fundamental problems are NP hard [14, 17] and need optimal solutions that can be achieved by improving algorithms and computing architecture [2]. Little work has been done on hybrid statistical analysis of genomic data against an exponentially increasing problem size. Computer-aided techniques alone are not the solution; there is a need to work on computational molecular biological experiments by means of DNA sequence analysis. Finding a unique sequence in an entire target genome is one of the most important problems in molecular biology [3].

The overall goal of this paper is to present an assimilated approach that performs a comparative analysis between species of the same class, revealing that the peptide translation in both carries a degree of difference. This task is accomplished by using ORFs with statistical analysis. The method used for this purpose is a composite technique that consists of a series of filters, from the preprocessing level to the final analysis.

The human genome project has built rich databases which attracted research interest from biologists and computer scientists to explore and mine these precious datasets. Computer-aided applications can now reveal the hidden information in the complex helix DNA structure. They have also made it possible to perform fast and accurate analysis. This has been made effective by the availability of cost-effective and handy analysis tools. Scientists have developed novel ideas, and have implemented and resolved complex situations in computational biology whose direct feasible solution was not possible, yielding optimal solutions in some cases for sequence analysis, an NP hard problem [5, 9, 14, 17].

This paper is organized as follows. Section 2 highlights some related work. Section 3 describes the proposed technique (elaborated in subsections). Section 4 contains the concluding remarks for this comparative analysis. Section 5 presents an acknowledgement and section 6 contains the references.

2. Literature review

Rajita Kumar [17] gives an approach for a distributed bioinformatics computing system. It was designed for disease detection, criminal forensics and protein analysis. It is a combination of several distributed algorithms that are used to search and identify a triplet repeat pattern in a DNA sequence. It consists of a search algorithm that computes the number of occurrences of a given pattern in a genetic sequence. The distributed sub-sequence identification algorithm was to detect repeating patterns with sequential and distributed implementations of algorithms relevant to

particular triplet repeat search patterns and genetic sequences. The results of this system show that as the complexity of the algorithm increases, the response time also increases. There is room to improve this work for more DNA sequences of various lengths.

Ken-ichi Kurata [9] presents a technique to find unique genome sequences from distributed environment databases. Kurata implemented the method upon the European Data Grid and showed its results. The author worked on the unique sequences of E. coli O157 (12 genomes). The genome is divided into smaller pieces that are processed individually. In an example quoted by the author, the total file size is 256 MB when it is hashed to 7. It is possible to divide the genomic files into at most 4^7 = 16384 pieces of 15 KB each. This method results in memory consumption and increases the file size. This data grid method is not useful for parallelizing biologically important data.

Ao Li [16] proposes a genome sequence learning method by simplifying a Bayesian network. The nodes in the Bayesian networks are selected as features. A feature selection algorithm, based on a genetic algorithm, is used for structure learning. The researcher used a dataset of 570 vertebrate sequences, including 2079 true donor sites. This approach is limited to donor site prediction and also confirms that the nucleotides closer to the donor site are the key elements in gene expression. There is a need to improve the structure learning method, the valuable features, the analysis, etc.

DNA chips [7] play a main role in disease diagnosis, drug discovery and gene identification. Elaine Garbarine [7] used an approach to detect unique gene regions of particular species. This technique, named the information theoretic method, exploits genome vocabularies to distinguish between pathogens. This approach is useful only for finding the gene sequences and the most distinguishing similarities between two organisms. Oligo probes were used to distinguish between two genes. Experiments were conducted on data from the Sanger Institute. Currently 32 out of 92 bacterial pathogen sequencing projects are completed. The author selected a pair of genomes to test the algorithm. Results were shown for a 12-mer and 25-mer oligo pathogen probe set and confirmed that the Garbarine method is less likely to cross-hybridize.

José Lousadop [12] developed a software application for the large-scale analysis of codon-triplet associations to shed new light on this problem. This algorithm describes codon-triplet context biases, codon-triplet analysis and the identification of alterations to the standard genetic code. The method provides an evolutionary understanding of codons within open reading frames (ORFs).

Gene-Split [8] is an application that shows codon triplet patterns in genomes and complete sets of ORFs. Generally this application gives the opportunity to study the characteristics of codon and amino acid triplets in any genome for the extraction of hidden patterns.

Hua Zheng et al. [13] present a technique that combines a low pass filter with a wavelet de-noising method. Conventional techniques use the low pass filter with cheap hardware, resulting in degraded de-noising quality. By properly choosing the cut-off frequency and the wavelet de-noising frequency, some enhancement can be made in the signal-to-noise ratio; the processed signals can meet the requirement of single base pair resolution in DNA sequencing, and the vector of the targeting signal can be decomposed into an orthogonal matrix of wavelet functions. This is an iterative method with n levels, and the signal can be conventionally reconstructed by the inverse DWT.

Binwei Weng et al. [14] apply the wavelet transform to extract features from the original measurements. They partition the data into subsequent partitions by a hierarchical clustering method. The terahertz spectroscopy of different DNA samples shows that the wavelet domain analysis aids the clustering process; the authors have clustered six DNA samples into two groups. The data was cleansed before processing, and the wavelet function utilized the Haar wavelet. The signal trend is separated from the original records. The size of the clusters may be calculated by the maximum distance between two points within a cluster. Another preprocessing step is balancing the data, which can achieve normalization of the data.

Bilu et al. [15] propose an alignment algorithm for the NP hard alignment problem of sequences. The authors outperform a standard alignment procedure by producing an optimal alignment of predefined sequence segments; they operate on complete sequences rather than letters and reduce the running time by restricting the search space of the dynamic programming algorithm. The authors take aid from the observation that the encoding sequences used in NP hard problems are not necessarily depictions of protein and DNA sequences. The speed-up is obtained by taking advantage of the biological nature of the sequences, in contrast to traditional approaches that offer good computation leading to optimal alignment; more stress is given to the structure of the input sequences.

Tuqan and Rushdi [6] propose an approach for finding the complete periodicity in DNA sequences. The approach is split into three channels: firstly, they explain the underlying mechanism for period-3 components; secondly, they directly relate the identification of these components to finding the nucleotide bias in the codon spectrum; thirdly, they completely characterize the DNA spectrum by a set of numerical sequences. The authors relate the signal processing problem to the genomic one through their proposed multirate DSP model; the model identifies the essential components involved in the codon bias while maintaining the dual nature of the problem. This can further help in understanding the biological significance of codon bias. The period-3 component detection works for one kind of gene and may not be suitable for all genetic datasets.

Ma Chan et al. [4] have shown the functionality of popular clustering algorithms for the analysis of microarray data and concluded that the performance of these algorithms can be further increased. The authors also propose an evolutionary algorithm for microarray data analysis in which there is no need to specify the number of clusters in advance. The algorithm was tested with simulated and real datasets. Noise and missing values are a big issue in this regard. The idea is realized by encoding the entire cluster

grouping in a chromosome, so that each gene encodes one cluster and each cluster contains the labels of the data assigned to it. Crossover and mutations are performed suitably. The proposed algorithm has been observed to be slow compared to other prevailing algorithms.

3. THE PROPOSED TECHNIQUE

The interest mainly lies in finding the genome regions that are responsible for protein translation.

Figure 1. Layered architecture

We have developed the layered architecture shown in Figure 1 for this analysis, which starts from the preprocessing of the raw data and ends at the final translation analysis. For this purpose we have used the genetic datasets of Chimaera Monstrosa (rabbit fish, NC_003136) and Poly Odontidae (paddle fish, NC_004419) [18]. At the preprocessing stage, the raw data sets are passed through a filter that outputs a more refined form of the data, which can then be used for the actual comparative analysis between the species.

Figure 2. Dataset before filter application

It is evident from Figure 2 that the dataset contains characters other than pure nucleotide bases. These illegal characters are removed by the application of a cleansing filter. At the outset it is worth noting that the analysis should be made with the original data values; any garbage may lead to a deterioration of the results.

Figure 3. Pre-processed Dataset

Figure 3 depicts that the preprocessed data contains only pure nucleotide base pairs without any anomalies. This refined data is later fed into the next layer for the actual analysis. First we locate the ORFs in a nucleotide sequence and find the start and stop codons. By using the sequence indices for start and stop, we can extract the sub-sequences and determine the codon distribution effectively. The most informative aspect is that the complete process is broken into steps, and each step fully performs the comparative analysis relevant to DNA-to-protein translation.

A. SIZE OF DATASETS

1. Chimaera Monstrosa contains 18580 nucleotides of Adenine, Guanine, Thymine and Cytosine. The cumulative size of the data becomes 37160 bytes, arranged in the form of a uni-vector.
2. Poly Odontidae contains 16512 nucleotides of Adenine, Guanine, Thymine and Cytosine. The cumulative size of the data becomes 33024 bytes, arranged in the form of a uni-vector.

B. ORF IN NUCLEOTIDE SEQUENCES

It is worth noting that the comparative analysis between the two species is done at the translation level, so this level is vital to the analysis. We split this layer into three further layers to get a better benefit from this layered analysis. In each phase, our interest lies in determining the accurate start and stop positions of the codons that drive the relative analysis.

C. ORF PRIMARY FRAMES

At the ORF primary frame level:

Figure 4. ORF of Chimaera Monstrosa in Frame 1

Figure 4 shows that the start position for the first frame is at 7156 and the second at 8761. These start positions represent the major translation regions in the entire frames. These regions are pure depictions of tri-nucleotide molecules. This process leads towards the extraction of sub-chains that will later be shifted to peptide regions.

Figure 5. ORF of Poly Odontidae in Frame 1
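The paper reports frame-wise start/stop indices (e.g., 7156 and 8761 in frame 1) without showing code; the following is a minimal sketch of forward-frame ORF detection, assuming the standard start codon ATG and stop codons TAA/TAG/TGA:

STOPS = {"TAA", "TAG", "TGA"}

def orfs(seq: str, frame: int):
    """Yield (start, stop) indices of open reading frames in one forward frame."""
    start = None
    for i in range(frame, len(seq) - 2, 3):
        codon = seq[i:i + 3]
        if start is None and codon == "ATG":
            start = i
        elif start is not None and codon in STOPS:
            yield start, i + 3
            start = None

dna = "CCATGAAATGAGGGTTTTAGCC"  # toy sequence, not one of the paper's datasets
for frame in range(3):
    print(frame, list(orfs(dna, frame)))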

Likewise we get the ORFs in the second data set of Poly Odontidae, shown in Figure 5. By entering the start positions we can get the stop codons. The start positions of the second dataset in Frame 1 run from 10798 to 11395 and from 14641 to 15559. It is clear that there is an evident difference in the codon regions for both frames of these species. The corresponding translated regions are so entirely different that we cannot even form an idea of sub-channel similarity.

D. ORF SECONDARY FRAMES

At the second level, we intend to find the codon positions for Frame 2 of both species.

Figure 6. Frame 2 (Chimaera Monstrosa)

Figure 6 shows that the major ORFs start from 2753, 5426 and 10325; this indicates that there is a series of other regions, occupied between the first and second frames, that do not contribute to the peptide translation regions.

Figure 7. Frame 2 (Poly Odontidae)

Similarly, frame 2 of Poly Odontidae, shown in Figure 7, has its codon positions from 11120 to 11465 and from 12464 to 12887. This shows a massive difference in the datasets at this level. As we move to increasing nucleotide sub-sequences we may get larger differences, but this does not seem to be true for all genetic datasets. This is the reason this phenomenon was given importance in selecting these particular sets.

E. ORF TERTIARY FRAMES

Discussing the last frame set in this sequence, we first find the codon composition for these frames; for instance, consider frame 3 of Chimaera Monstrosa.

Figure 8. Frame 3 (Chimaera Monstrosa)

Figure 8 shows that the major ORFs start from 4019, 11948 and 14328. This massive difference in codon compositions also provides evidence that the first translated region lies at around four thousand, while the second and third regions have jump gaps. This is the variation in the translated regions of the species.

Figure 9. Frame 3 (Poly Odontidae)

In Figure 9, the third frame for Poly Odontidae goes from 2796 to 3242, from 6315 to 6722 and from 12753 to 13217. Figure 8 shows that the first two codon positions are relatively similar, while the third position again describes a jump gap. Performing the comparative analysis at this level reveals that both genetic datasets show a kind of extremity in behavior which makes them similar at certain codon compositions and different at others.

F. CODON COUNT

The codon count describes the tri-nucleotide behavior of the sequences. We need to find the degree of relatedness in terms of the strengths of the nucleotide bases. For instance, we have selected frame 1 from the codon composition of both species and compare the strengths.

Figure 10. Codon count (Chimaera Monstrosa in Frame 1)

Figure 10 presents the codon count for Chimaera Monstrosa. Our aim focuses on the comparative analysis of codon strength at this stage. For this purpose, we need to calculate the codon count for Poly Odontidae. Figure 11 shows the codon count of the first ORF of Poly Odontidae.

Figure 11. Codon count (Poly Odontidae in Frame 1)

G. STRENGTH OF AMINO ACID IN THE PROTEIN SEQUENCE

In the last phase of this comparative analysis, we need to find the relevant strength of the peptide pairs in the protein sequences (resulting from the translation from DNA to protein).

Figure 14. Strength of amino acid (Chimaera Monstrosa)
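As a hedged illustration of the codon-count and amino-acid-strength steps, a minimal sketch assuming Biopython's standard-table translation; the short sequence is a toy stand-in for the extracted ORF sub-sequences:

from collections import Counter
from Bio.Seq import Seq

orf = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"  # toy ORF sub-sequence
codon_count = Counter(orf[i:i + 3] for i in range(0, len(orf) - 2, 3))
protein = Seq(orf).translate()          # DNA -> peptide, standard genetic code
amino_strength = Counter(str(protein))  # per-residue frequency ("strength")

print(codon_count.most_common(3))
print(protein, amino_strength)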


G. STRENGTH OF AMINO ACID IN THE PROTEIN STREAK

In the last phase of this metaphorical assay, we need to find the relative strengths of the peptide pairs in the protein streaks (resulting from the translation from DNA to protein).

Figure 14. Strength of amino acid (Chimaera Monstrosa)

Figure 14 shows the strength of the amino acids in Chimaera Monstrosa. We now determine the atomic decomposition and molecular weight of the protein:
C: 1220, H: 1886, N: 298, O: 341, S: 12
The molecular weight is 2.6569e+004.
The strength of the amino acids in the protein streak of Poly Odontidae is depicted in Figure 15 below.

Figure 15. Strength of amino acid (Poly Odontidae)

Similarly, the atomic decomposition and molecular weight of this protein are
C: 940, H: 1488, N: 276, O: 266, S: 14
and the molecular weight is 2.1360e+004.
Comparing the amino acid streaks of both species obtained from the primary codon translation, we see in Table 1:

Table 1: Amino acid streak correlation
Atom    Chimaera Monstrosa    Poly Odontidae
C       1220                  940
H       1886                  1488
N       298                   276
O       341                   266
S       12                    14
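As a quick cross-check, the molecular weights reported above follow directly from the atomic decompositions; a minimal sketch using standard average atomic masses (the mass constants are textbook values, not taken from the paper):

```python
# Standard average atomic masses in g/mol (textbook values).
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}

def molecular_weight(composition: dict) -> float:
    """Sum atom counts multiplied by their atomic masses."""
    return sum(n * ATOMIC_MASS[atom] for atom, n in composition.items())

chimaera = {"C": 1220, "H": 1886, "N": 298, "O": 341, "S": 12}
poly_odontidae = {"C": 940, "H": 1488, "N": 276, "O": 266, "S": 14}

print(f"{molecular_weight(chimaera):.4e}")        # ~2.6569e+04, as reported
print(f"{molecular_weight(poly_odontidae):.4e}")  # ~2.1361e+04 (2.1360e+04 in the text)
```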
The corresponding molecular weights are compared in Table 2.

Table 2: Molecular weight correlation
Chimaera Monstrosa    Poly Odontidae
2.6569e+004           2.1360e+004

These results clearly describe the marvel that, despite both species being from the same class, they differ greatly in their patterns of ORF.

4. Conclusion

An Open Reading Frame (ORF) contains a start codon region. The subsequent region contains nucleotides in lengths that are multiples of 3, and ends with a stop codon. This paper describes the phase-wise metaphorical assay of two matriclinous datasets of the species Chimaera Monstrosa and Poly Odontidae. It re-adduces an assimilated approach, composed of step-by-step processes, to elaborate the results effectively. The process puts the most stress on the peptide translation, using the Open Reading Frame concept and a data refining methodology. At the end, we look for all the outcomes that make this effort optimal by performing a sensitive assay of the DNA to protein conversion. Variations at each step were observed even though the data classes remained the same.
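The ORF definition used in this conclusion (a start codon, in-frame codons in length multiples of 3, and a stop codon) translates directly into a simple scan; a minimal sketch under the usual ATG/stop-codon conventions (function and variable names are illustrative):

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(seq: str, frame: int = 0):
    """Yield (start, end) index pairs of ORFs in one reading frame:
    an ATG followed by in-frame codons up to the first stop codon."""
    seq = seq.upper()
    i = frame
    while i + 3 <= len(seq):
        if seq[i:i + 3] == "ATG":
            for j in range(i + 3, len(seq) - 2, 3):
                if seq[j:j + 3] in STOP_CODONS:
                    yield i, j + 3      # end index includes the stop codon
                    i = j               # resume the scan after this ORF
                    break
        i += 3

# Illustrative fragment, not one of the paper's datasets.
print(list(find_orfs("CCATGAAATGGTTTTAACC", frame=2)))  # [(2, 17)]
```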
Acknowledgements
This work was partially supported by the Research Center, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia.

Author Profile
Hassan Mathkour is a professor in the Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia, where he serves as the Vice Dean for Quality Assurance and Development. He completed his PhD at the University of Iowa, USA in 1986. His research interests include Databases, Artificial Intelligence, Bioinformatics, NLP and Computational Sciences.

An Effective Localized Route Repair Algorithm for Use with Unicast Routing Protocols for Mobile Ad hoc Networks
Natarajan Meghanathan
Jackson State University, Department of Computer Science,
P. O. Box 18839, Jackson, MS 39217
natarajan.meghanathan@jsums.edu

Abstract: We propose an efficient and effective Localized Route Repair (LRR) algorithm that minimizes the number of flooding-based route discoveries for on-demand unicast routing protocols in Mobile Ad hoc Networks (MANETs). The principle behind the LRR algorithm is that the downstream node of a broken link would not have moved far away and is highly likely to be in the 2-hop neighborhood of the upstream node of the broken link. Accordingly, upon the failure of a link on the path from a source node to a destination node, the upstream node of the broken link stitches the broken route by attempting to determine a 2-hop path to the downstream node of the broken link. If the underlying network is connected, the proposed LRR algorithm will help to stitch a broken route without going through network-wide flooding of the Route-Request (RREQ) messages or a time-consuming expanding ring route search process. LRR can be incorporated into the route management module of any MANET unicast routing protocol. In this paper, we implement LRR for the Dynamic Source Routing (DSR) protocol, referred to as DSR-LRR. Simulation results reveal that the number of broadcast node transmissions per session for the original DSR is 50%-70% more than that of DSR-LRR. The relative increase in the hop count for DSR-LRR routes is, however, within 25%.

Keywords: Mobile ad hoc networks, Localized route repair, Hop count, Flooding, Route discoveries

1. Introduction

A mobile ad hoc network (MANET) is a distributed dynamic system of autonomously moving wireless devices (nodes). The wireless nodes self-organize for a limited period of time depending on the application and the environment. As the nodes are battery-powered and recharging is next to impossible, the transmission range (i.e., the communication range) of the nodes is often limited. As a result, it may not always be possible to have point-to-point direct communication between any two nodes. A wireless link is said to exist between two nodes only if the two nodes are within the transmission range of each other. Communication sessions in MANETs are often multi-hop in nature, involving intermediate peer nodes that co-operatively forward data packets from the source towards the destination. As the topology changes dynamically, routes between the source and destination nodes of a communication session have to be frequently reconfigured in order to continue the session.
MANET routing protocols are of two types: Proactive and Reactive. Proactive routing protocols tend to maintain routes between any pair of nodes at all times, while reactive routing protocols discover routes from the source to the destination only on-demand (i.e., only when required). In a highly dynamic environment (like that of battlefields), reactive routing has been preferred over proactive routing, as the latter involves considerable route maintenance overhead [1][2]. In this paper, we restrict ourselves to exploring the reactive routing strategy.
On-demand route discovery in reactive routing protocols is often accomplished through a global flooding process in which each node is involved in forwarding (transmitting and receiving) the route discovery message from the source towards the destination. Frequent flooding-based route discoveries can quickly exhaust the battery charge at the nodes and also consume the network capacity (bandwidth). Several on-demand routing protocols have been published in the literature [1][2][3], each with a particular route selection metric. The most commonly used route selection metric is the hop count. Routes with the minimum hop count are preferred because the data would go through the minimum number of intermediate forwarding nodes, resulting in lower end-to-end delay and reduced energy consumption per data packet transferred. But it has been identified that minimum hop routes are not very stable (i.e., the routes do not exist for a long time) [4] and routes have to be frequently determined through the flooding-based route discovery procedure.
In this paper, we propose a Localized Route-Repair (LRR) algorithm that minimizes the number of flooding-based route discoveries for on-demand MANET routing protocols. With the incorporation of the proposed LRR algorithm in its route management module, a unicast routing protocol would have to opt for a flooding-based route discovery only when the MANET is temporarily disconnected and a new route has to be determined between the source and destination. Otherwise, if the MANET is connected, the proposed LRR algorithm will help to fix a broken route without going through network-wide flooding of the Route Request (RREQ) messages and without requiring the intermediate nodes to determine the route all the way to the destination.
The rest of the paper is organized as follows: In Section 2, we present a motivating example to illustrate the need for

LRR. Section 3 describes the proposed LRR algorithm. Section 4 describes the simulation environment and demonstrates the effectiveness of LRR through simulation results. Section 5 discusses related work and Section 6 outlines some of the benefits of the proposed LRR algorithm. Section 7 concludes the paper.

2. Motivation

Let s–a–b–c–e–d be the route discovered by a routing protocol from source node s to destination node d through a regular flooding process. Once the route is discovered, data packets are sent continuously from s to d. After a certain time, assume the intermediate nodes b and c move out of the transmission range of each other, leading to the failure of link b–c on the discovered route from s to d. As of now, MANET routing protocols handle route failures in one of the following two principal ways:
(i) The upstream node b of the broken link b–c attempts to find a path all the way to the destination by initiating an expanding ring route search [5]. The expanding ring route search technique attempts to locate the destination node d in an iterative fashion by restricting the scope of the broadcast route-request messages to 1 hop, 2 hops and so on, up to a pre-determined hop count value configured for the routing protocol. If a route to the destination is determined within any of the route search attempts, the intermediate node continues to forward the data packets on the newly discovered route to the destination. The source node is completely unaware of the change in the route to the destination.
(ii) The upstream node b of the broken link b–c immediately notifies the source node s about the failure in the route and refrains from initiating any expanding ring route search. The source node launches a network-wide flooding of the RREQ messages.

The first strategy, expanding ring route search, may be efficient if the destination can be located within the vicinity of the upstream node of the broken link. Otherwise, the route search has to be slowly expanded up to a certain hop count value, and this would incur a considerable amount of route-management overhead (in terms of the number of localized Route Request messages broadcast) as well as a larger route acquisition latency. The second strategy, immediately notifying the source node about the route failure, can trigger frequent network-wide flooding, which would also generate significant control overhead (number of RREQ messages broadcast).
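The expanding ring route search of strategy (i) can be pictured as a sequence of scoped floods with a growing hop limit; a schematic Python sketch (the adjacency-list topology and TTL values are illustrative assumptions, not part of any routing protocol implementation):

```python
from collections import deque

def expanding_ring_search(neighbors, src, dest, max_ttl=3):
    """Scoped BFS floods with TTL = 1, 2, ..., max_ttl, mimicking an
    expanding ring search; returns hops to dest, or None if unreachable."""
    for ttl in range(1, max_ttl + 1):
        visited, frontier = {src}, deque([(src, 0)])
        while frontier:                      # one flood limited to `ttl` hops
            node, hops = frontier.popleft()
            if node == dest:
                return hops
            if hops < ttl:
                for nxt in neighbors.get(node, ()):
                    if nxt not in visited:
                        visited.add(nxt)
                        frontier.append((nxt, hops + 1))
    return None   # fall back to a network-wide flood

# The s-a-b-c-e-d chain of the motivating example (link b-c still intact here).
topo = {"s": ["a"], "a": ["s", "b"], "b": ["a", "c"],
        "c": ["b", "e"], "e": ["c", "d"], "d": ["e"]}
print(expanding_ring_search(topo, "b", "d"))   # 3 hops: b-c-e-d
```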

3. Localized Route Repair (LRR) Algorithm

In this paper, we propose the Localized Route Repair (LRR) algorithm, which basically works as follows: Upon the failure of a link on the path from a source to a destination, the upstream node of the broken link stitches the broken route by attempting to determine a 2-hop path to the downstream node of the broken link. In this pursuit, the upstream node of the broken link initiates a Local-RREQ message broadcast process that is restricted to propagation only within its 2-hop neighborhood. The main idea behind the LRR algorithm is that the downstream node of the broken link would not have moved far away and is highly likely to be in the 2-hop neighborhood of the upstream node of the broken link. With LRR, the broken route can be stitched together rapidly without having to go through an expanding ring route search process to the destination node.
The Local-RREQ message includes the IDs of the original source node, the original destination node, the originating intermediate node (i.e., the upstream node of the broken link), the targeted intermediate node (i.e., the downstream node of the broken link) and the most recent path used from the originating intermediate node to the destination node. The most recent path information would be useful for an intermediate node receiving the Local-RREQ message in order to decide how to further process the message.
In the one-hop neighborhood, if an intermediate node receives the Local-RREQ message for the first time and it is neither the destination node nor a downstream node on the path towards the destination, the intermediate node simply records its ID in the Local-RREQ message and broadcasts it to its neighbors. All duplicate Local-RREQ messages are dropped. If the underlying network is connected, one or more of the following would be the outcome(s) of the 2-hop Local-RREQ broadcast search process:
(i) If the Local-RREQ message is received by the original destination node of the route (i.e., the destination has moved within the 2-hop neighborhood of the originating intermediate node of the Local-RREQ message), then the destination node sends back a Local Route-Reply (Local-RREP) message to the originating intermediate node of the Local-RREQ message, either through a direct path or through a 2-hop path, whichever is appropriate. Referring back to our example in Section 2, if the most recent path used is s–a–b–c–e–d and b–c is the broken link, the new path would be either s–a–b–d or s–a–b–g–d (refer to Figure 1), depending on whether the destination node d is 1 hop or 2 hops away from the originating intermediate node, node b.
(ii) If an intermediate node located further downstream on the recently used source-destination (s-d) path receives the Local-RREQ message, then that node sends a Local-RREP message back to the originating node of the Local-RREQ message. In our example, if the intermediate node e located further downstream on the recently used path s–a–b–c–e–d receives the Local-RREQ message directly from node b, then node e sends a Local-RREP message to node b. A new path s–a–b–e–d with a reduced hop count has thus been learnt from the source s to destination d (refer to Figure 2).
(iii) If the Local-RREQ message is received by the targeted downstream node for which the 2-hop broadcast route search was primarily initiated, the targeted node responds back with a Local-RREP message through an intermediate node from which the Local-RREQ message was received. In our example, node c would respond through an intermediate node, say node f, and the new path learnt from the source to the destination would be s–a–b–f–c–e–d (refer to Figure 3). Note that this new path has a hop count that is one more than that of the previously used path s–a–b–c–e–d.

Figure 1. Possible Outcome of the LRR Algorithm [Destination Node d sends LRR-RREP]

Figure 2. Possible Outcome of the LRR Algorithm [Intermediate Node e sends LRR-RREP]

Figure 3. Possible Outcome of the LRR Algorithm [Targeted Downstream Node c sends LRR-RREP]

It could be possible that all of the above three scenarios co-exist and an intermediate node generating the Local-RREQ message receives Local-RREP messages through each of the three means. In such a case, the Local-RREP message received from the original destination node of the path is the most preferred. Otherwise (i.e., if no Local-RREP message is received from the destination node), if one or more intermediate nodes downstream on the path towards the destination send Local-RREP messages, the Local-RREP message received from an intermediate node that lies on the shortest path from the source (of course, through the originating intermediate node of the Local-RREQ message) to the destination is preferred.
The LRR algorithm can be executed for every link failure on a route from the source node to the destination node. LRR can be applied even if more than one link fails on a path from the source to the destination (i.e., once for each link failure). If the underlying network is disconnected, the originating intermediate node of the Local-RREQ message fails to stitch the broken route (i.e., it cannot find a path either to the destination or to any of the downstream nodes). In such a scenario, the intermediate node sends a Local-RERR (Local Route Error) message to the source node, indicating the failure to stitch the broken route. The source node then initiates a network-wide flooding of the RREQ messages to discover a route from the source to the destination.
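A schematic sketch of the decision a node could make on receiving a Local-RREQ, following the three outcomes listed above (the message fields, helper names and hop bookkeeping are illustrative assumptions, not taken from a protocol specification):

```python
def handle_local_rreq(node, msg, recent_path):
    """React to a 2-hop-scoped Local-RREQ per the three LRR outcomes."""
    if node == msg["dest"]:                        # outcome (i)
        return ("LOCAL-RREP", "original destination reached")
    if node == msg["target_downstream"]:           # outcome (iii)
        return ("LOCAL-RREP", "targeted downstream node reached")
    if node in recent_path and \
            recent_path.index(node) > recent_path.index(msg["originator"]):
        return ("LOCAL-RREP", "further-downstream node reached")  # outcome (ii)
    if msg["hops"] < 2:                            # still within the 2-hop scope
        return ("REBROADCAST", "record own ID and forward to neighbors")
    return ("DROP", "2-hop boundary reached")

msg = {"src": "s", "dest": "d", "originator": "b",
       "target_downstream": "c", "hops": 1}
print(handle_local_rreq("e", msg, recent_path=["s", "a", "b", "c", "e", "d"]))
```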

4. Simulation Environment and Results


In this paper, we incorporate LRR into the route management module of the Dynamic Source Routing (DSR) protocol, one of the widely used classic MANET routing protocols. The optimized version of DSR using the LRR algorithm is referred to as DSR-LRR.
The performance and the potential benefits obtained using the proposed LRR algorithm have been evaluated through simulations conducted in the ns-2 simulator (version 2.28) [6]. We implemented the LRR module in ns-2. We used the implementation of DSR that comes with ns-2 and incorporated the developed LRR module into the DSR route management module to obtain the optimized DSR-LRR protocol. In order to explore the maximum possible performance gain obtainable with the LRR mechanism, we disabled promiscuous listening and the default route cache and maintenance mechanism in DSR-LRR. Simulations have been conducted in a square network of dimensions 1000m x 1000m. The transmission range of each node is 250m. The density of the network is varied by conducting the simulations with 50 nodes (representing a low network density, on average 10 neighbors per node) and 100 nodes (representing a high network density, on average 20 neighbors per node). We did not face many limitations in our implementation, other than that the ns-2 simulator was not scalable enough to conduct simulations for a larger number of nodes, beyond 100.
The node mobility model used is the Random Waypoint model [7], commonly used in MANET simulation studies. According to this model, the nodes are initially uniform-randomly distributed throughout the network. Each node moves independently of the other nodes in the network. Each node selects a random target location and moves to the selected location with a velocity uniform-randomly selected from the interval [0, ..., vmax]. After reaching the targeted location, the node again selects a new location to move to, with a new randomly chosen velocity from the interval [0, ..., vmax].

The values of vmax used in our simulations are 10 m/s, 30 m/s and 50 m/s, representing low, moderate and high node mobility scenarios respectively.
The performance results illustrated in Figures 4 through 9 are average values obtained for 15 source-destination (s-d) pairs run against 5 different node mobility profiles for every combination of node mobility (vmax) and network density values. The packet sending rate for each s-d session is 4 data packets per second, over a simulation time period of 1000 seconds.

4.1 Performance Metrics

The two main performance metrics measured are the average hop count per path and the average number of broadcast node transmissions per s-d session. The average hop count per path is the time-averaged hop count of all the s-d paths used for the entire communication session. For example, if the communication session lasted for 10 seconds, using a 3-hop path for the first 4 seconds, a 2-hop path for the next 2 seconds and a 4-hop path for the last 4 seconds, then the time-averaged hop count is [3*4 + 2*2 + 4*4] / [4+2+4] = 3.2. The average number of broadcast node transmissions per s-d session is a measure of the control message overhead incurred by the routing protocols and is computed here as the sum of all the RREQ and/or Local-RREQ messages broadcast by the nodes in the network for the entire simulation time period, averaged over all the s-d sessions. The performance results illustrate a tradeoff between these two performance metrics.
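The time-averaged hop count of the worked example reduces to a weighted mean; a minimal sketch (the (hop count, duration) pairs are the ones in the example above):

```python
def time_averaged_hop_count(segments):
    """segments: iterable of (hop_count, duration_in_seconds) pairs."""
    total_time = sum(duration for _, duration in segments)
    return sum(hops * duration for hops, duration in segments) / total_time

# 3 hops for 4 s, 2 hops for 2 s, 4 hops for 4 s -> 3.2
print(time_averaged_hop_count([(3, 4), (2, 2), (4, 4)]))
```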

Figure 4. Average Hop Count per Path (50 Nodes)

Figure 5. Average Hop Count per Path (100 Nodes)

4.2 Average Hop Count per Path

As expected, the routes used by DSR-LRR have a larger hop count than those used by the DSR protocol. However, the difference in the hop count is only within 15%-18% for low-density networks and 21%-23% for high-density networks. Both DSR and DSR-LRR incur a lower hop count in high-density networks compared to that incurred in low-density networks, especially at low and moderate node mobility. This could be attributed to the availability of an increased number of nodes to choose as the next hop node in high-density networks. At low and moderate node mobility conditions, the average hop count per path in low-density networks is 9%-15% and 7%-12% more than that obtained in high-density networks. At high node mobility conditions, there is no significant difference in the hop count values obtained for each of these routing protocols in low- and high-density networks.

4.3 Average Number of Broadcast Node Transmissions per Session

For a given node mobility, the average number of broadcast node transmissions per session for DSR is 50%-60% and 60%-70% more than that of DSR-LRR in low-density and high-density networks respectively. For a given network density, the difference between the number of broadcast node transmissions incurred for DSR and DSR-LRR increases as node mobility increases. Thus, the LRR algorithm is very effective in reducing the control message overhead in the network and helps to optimize the usage of critical resources like energy and bandwidth.

Figure 6. Number of Broadcast Node Transmissions per Session (50 Node Network)

Figure 7. Number of Broadcast Node Transmissions per Session (100 Node Network)

In high-density networks, for a given node mobility condition, both DSR and DSR-LRR incur a larger number of broadcast node transmissions per session. This could be attributed to two factors: (i) As the size of the neighborhood is doubled, more nodes receive a RREQ/Local-RREQ message and broadcast the message to their neighbors; (ii) The minimum hop routes of DSR and DSR-LRR are likely to be relatively less stable in high-density networks compared to those determined in low-density networks.

Both routing protocols tend to minimize the number of intermediate nodes between the source and destination nodes of a route and, as a result, attempt to choose routes that have a larger physical distance between the upstream and downstream nodes of the constituent links of the routes. For both DSR and DSR-LRR, the number of broadcast node transmissions incurred in high-density networks is 100% (at low node mobility) to 150% (at high node mobility) more than that incurred in low-density networks.
As we increase the maximum node velocity from 10 m/s to 30 m/s (i.e., by a factor of 3), the average number of broadcast node transmissions increases by a factor of 2.5 and 2.3 in low-density and high-density networks respectively. Similarly, as we increase the maximum node velocity from 10 m/s to 50 m/s (i.e., by a factor of 5), the average number of broadcast node transmissions increases by a factor of 3.3 and 4.2 in low-density and high-density networks respectively.

5. Related Work

A local route recovery mechanism based on the Expanding Ring Search was explored in [8] for improving TCP performance in multi-hop wireless ad hoc networks. Simulation results indicate that, with the application of the local-recovery techniques, the end-to-end delay per data packet (a measure of the hop count per path) for DSR increased by as much as 60%. The Witness-Aided Routing (WAR) protocol [9] attempts to overcome link failures by locally broadcasting the data packets within predefined hop limits. Even though there can be fast local recovery, this approach of broadcasting the data packets themselves as the recovery packets leads to significantly high control overhead.
The Associativity-Based Routing (ABR) protocol [10] attempts to locally fix a broken route if the upstream node of the broken link is located in the second half of the route (i.e., the node is closer to the destination than to the source). The upstream node of the broken link broadcasts a local route request that propagates with a hop limit equal to the remaining number of hops to the destination in the broken route. Only the destination can respond to the local route request. If the route to the destination cannot be determined, the host preceding the upstream node of the broken link is notified and the above local route discovery procedure is recursively repeated until the error message reaches a node that is in the first half of the broken route. At this time, the source node is notified about the route failure and it initiates a global flooding-based route discovery. This approach of recursive route repair will significantly consume the bandwidth and lead to longer delays.
A local route repair mechanism for the Ad hoc On-demand Distance Vector (AODV) [5] routing protocol has been proposed in [11]. Here, the upstream node of the broken link attempts to locally fix the broken route by trying to determine a route to the node which is the next hop of the downstream node of the broken link. However, this approach requires nodes to keep track of the two-hop nodes (in addition to the next hop node) for the path towards every destination node. This can significantly increase the storage complexity of the routing tables at the nodes.

6. Benefits of the LRR Algorithm

The proposed LRR algorithm is not protocol-specific and can be incorporated into the route management module of any unicast routing protocol. The LRR algorithm will help meet the energy, throughput and Quality of Service (QoS) requirements for communication in resource-constrained environments, typical of MANETs. The two-hop route repair technique of LRR can be very effective in speeding up the communication between the different wireless devices in a dynamically changing mobile environment. As a broken route is more likely to be stitched quickly using the proposed LRR algorithm, data is more likely to reach the destination nodes within a limited amount of time, thus providing users with a desired QoS. Because of the tendency to reduce the number of flooding-based route discoveries, the proposed LRR algorithm helps to conserve energy and bandwidth in the network. With a limited number of flooding-based route discoveries, the available network capacity is also enhanced: there can be more simultaneous communications in the network.

7. Conclusions and Future Work

The high-level contribution of this paper is the development of an effective and efficient Localized Route-Repair (LRR) algorithm for use with MANET unicast routing protocols. We illustrate the effectiveness of LRR by developing an optimized version of DSR (referred to as DSR-LRR) that has the LRR algorithm incorporated in its route management module. The simulation study illustrates a potential tradeoff between the control message overhead incurred (measured as the number of broadcast node transmissions) and the hop count per path. We observe that the number of broadcast node transmissions of the control messages incurred by DSR can be 50%-70% more than that incurred by DSR-LRR; on the other hand, the average hop count per path for DSR-LRR is only as much as 25% more than that of DSR. Hence, the tradeoff is not evenly balanced, and DSR-LRR can be a good replacement for DSR in resource-constrained environments where a minimum hop count is not a critical factor. We conjecture that similar results will be obtained when LRR is incorporated into the route management modules of other existing MANET unicast routing protocols.
As future work, we will apply the LRR technique to other unicast routing protocols such as the stability-based Flow-Oriented Routing Protocol (FORP) and the position-based Location-Aided Routing (LAR) protocol for MANETs. We will also study the performance of the LRR-optimized routing protocols under different mobility models for ad hoc networks [12].

References
[1] J. Broch, D. A. Maltz, D. B. Johnson, Y-C. Hu and J. Jetcheva, "A Performance Comparison of Multi-hop Wireless Ad hoc Network Routing Protocols," Proceedings of the 4th Annual ACM/IEEE International Conference on Mobile Computing and Networking,

Dallas, TX, October 25-30, pp. 85-97, New York, NY, USA, 1998.
[2] P. Johansson, T. Larsson, N. Hedman, B. Mielczarek and M. Degermark, "Scenario-based Performance Analysis of Routing Protocols for Mobile Ad hoc Networks," Proceedings of the 5th Annual International Conference on Mobile Computing and Networking, Seattle, WA, USA, August 15-20, pp. 195-206, New York, NY, USA, 1999.
[3] N. Meghanathan and A. Farago, "Survey and Taxonomy of 55 Unicast Routing Protocols for Mobile Ad hoc Networks," Technical Report UTDCS-40-04, The University of Texas at Dallas, Richardson, TX, 2004.
[4] N. Meghanathan, "Exploring the Stability-Energy Consumption-Delay-Network Lifetime Tradeoff of Mobile Ad hoc Network Routing Protocols," Academy Publisher Journal of Networks, Vol. 3, No. 2, pp. 17-28, 2008.
[5] C. Perkins, E. B. Royer and S. Das, "Ad hoc On-Demand Distance Vector (AODV) Routing," IETF RFC 3561, July 2003.
[6] K. Fall and K. Varadhan, NS-2 Notes and Documentation, The VINT Project at LBL, Xerox PARC, UCB, and USC/ISI, http://www.isi.edu/nsnam/ns, August 2001.
[7] C. Bettstetter, H. Hartenstein and X. Perez-Costa, "Stochastic Properties of the Random Waypoint Mobility Model," Wireless Networks, Vol. 10, No. 5, pp. 555-567, September 2004.
[8] Z. Li and Y.-K. Kwok, "Local Route Recovery Algorithms for Improving Multi-hop TCP Performance in Ad hoc Wireless Networks," Lecture Notes in Computer Science, Vol. 3149, pp. 925-932, December 2004.
[9] I. D. Aron and S. K. S. Gupta, "A Witness-Aided Routing Protocol for Mobile Ad hoc Networks with Unidirectional Links," Proceedings of the 1st International Conference on Mobile Data Access, pp. 24-33, Hong Kong, December 1999.
[10] C-K. Toh, "Associativity-Based Routing for Ad hoc Mobile Networks," IEEE Personal Communications, Vol. 4, No. 2, pp. 103-139, March 1997.
[11] X. Bai-Long, G. Wei, L. Jun and Z. Si-Lu, "An Improvement for Local Route Repair in Mobile Ad hoc Networks," Proceedings of the 6th International Conference on ITS Telecommunications, pp. 691-694, June 2006.
[12] T. Camp, J. Boleng and V. Davies, "A Survey of Mobility Models for Ad Hoc Network Research," Wireless Communication and Mobile Computing, Vol. 2, No. 5, pp. 483-502, September 2002.

Author Profile

Natarajan Meghanathan has been working as an Assistant Professor of Computer Science at Jackson State University, Mississippi, USA, since August 2005. Dr. Meghanathan received his MS and PhD in Computer Science from Auburn University, AL and The University of Texas at Dallas in August 2002 and May 2005 respectively. Dr. Meghanathan's main area of research is ad hoc networks. He has more than 45 peer-reviewed publications in leading international journals and conferences in this area. Dr. Meghanathan has recently received grants from the Army Research Laboratory (ARL) and the National Science Foundation (NSF). He serves as the editor of a number of international journals and on the program and organization committees of several leading international conferences in the area of networks. Besides ad hoc networks, Dr. Meghanathan is currently conducting research in Bioinformatics, Network Security, Distributed Systems and Graph Theory.

Selection of Proper Activation Functions in Back-propagation Neural Networks Algorithm for Transformer Internal Fault Locations

A. Ngaopitakkul and A. Kunakorn
Faculty of Engineering
King Mongkut's Institute of Technology Ladkrabang, Bangkok 10520, Thailand
knatthap@kmitl.ac.th

Abstract: This paper presents an analysis on the selection of an appropriate activation function used in neural networks for locating the internal fault positions of a two-winding three-phase transformer. A decision algorithm based on a combination of Discrete Wavelet Transforms and neural networks is developed. Fault conditions of the transformer are simulated using ATP/EMTP in order to obtain current signals. The training process for the neural network and the fault diagnosis decision are implemented using toolboxes on MATLAB/Simulink. Various activation functions in the hidden layers and output layers are compared in order to find and select the best activation function for locating the position of internal faults of the transformer winding, for winding to ground faults. It is found that the use of the Hyperbolic tangent function for the hidden layers and the Linear activation function for the output layer gives the most satisfactory accuracy in these particular case studies.

Keywords: Internal faults, Discrete Wavelet Transforms, Back-propagation neural network, Transformer windings

1. Introduction

During the course of recent years, the development of fault diagnosis techniques for the power transformer has progressed with the applications of the wavelet transform and artificial neural networks [1,2,3,4,5]. Many research reports have paid consideration to the effects of the magnetizing inrush current as well as the discrimination between the magnetizing inrush current and internal faults [2,3,5]. It is very useful for electrical engineers if the fault positions along transformer windings can be detected. Therefore, a decision algorithm used to locate the fault position along the winding, in order to decrease the complexity and duration of maintenance, is required. Neural networks have been employed in the development of such an algorithm, and have proved to be a powerful tool in fault detection as well as classification [2,3].
The activation function is a key factor in the artificial neural network structure. Back-propagation neural networks support a wide range of activation functions, such as the sigmoid function and the linear function. The choice of activation function can change the behavior of the back-propagation neural network considerably. There is no theoretical reason for selecting a particular activation function. Hence, the objective of this paper is to study an appropriate activation function for the algorithm used in the detection of internal fault locations along transformer windings. The activation functions in each hidden layer and the output layer are varied, and the results obtained from the decision algorithm are investigated.
The decision algorithm is a part of a transformer protective scheme proposed in this paper. The structure of the protective scheme is shown in Figure 1. The simulations, analysis and diagnosis are performed using ATP/EMTP and MATLAB on a PC Pentium IV 2.4 GHz with 512 MB. It is noted that the discrete wavelet transform is employed in extracting the high frequency components contained in the internal fault currents of a transformer. The construction of the decision algorithm is detailed and implemented with various case studies based on the Thailand electricity transmission and distribution systems.

Figure 1. The transformer protective scheme

2. Simulation

2.1 Transformer winding models
For a computer model of a two-winding three-phase transformer having primary and secondary windings in each phase, BCTRAN is a well-known subroutine in ATP/EMTP. To study internal faults of the transformer, Bastard et al. proposed a modification of the BCTRAN subroutine. Normally, BCTRAN uses a matrix of inductances with a size of 6x6 to represent a transformer, but under internal fault conditions the matrix is adjusted to a size of 7x7 for winding to ground faults and of 8x8 for interturn faults [6]. In the research work of Bastard et al. [6], the model was proved to be valid and accurate through a comparison with measurement results. However, the effects of the high frequency components which may occur during the faults are not included in such a model. Islam and Ledwich [7] described the characteristics of the high frequency responses of a transformer due to various faults. It has been shown that the fault types and fault locations have an influence on the frequency responses of the transformer [7].

In addition, it has been proved that transient-based protections using the high frequency components in fault currents are applicable to locating and classifying faults on transmission lines [8-9]. It is, therefore, useful to investigate the high frequency components superimposed on the fault current signals for the development of a transient-based protection for a transformer. As a result, in this paper the combination of the transformer model proposed by Bastard et al. [6], as shown in Figure 2, with the high frequency model including the capacitances of the transformer recommended by the IEEE working group [10], as shown in Figure 3, is used for the simulations of internal faults at the transformer windings.

Figure 2. The modification on the ATP/EMTP model for a three-phase transformer with internal faults.

Figure 3. A two-winding transformer with the effects of stray capacitances.

The capacitances shown in Figure 3 are as follows:
Chg = stray capacitance between the high voltage winding and ground
Clg = stray capacitance between the low voltage winding and ground
Chl = stray capacitance between the high voltage winding and the low voltage winding.

The process for simulating winding to ground faults, based on the BCTRAN routine of EMTP, can be summarized as follows:
1st step: Compute the matrices [R] and [L] of the power transformer from the manufacturer test data [11], without considering the winding to ground faults [6]:

$$[R] = \begin{bmatrix} R_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & R_6 \end{bmatrix} \quad (1)$$

$$[L] = \begin{bmatrix} L_1 & L_{12} & \cdots & L_{16} \\ L_{21} & L_2 & \cdots & L_{26} \\ \vdots & \vdots & \ddots & \vdots \\ L_{61} & L_{62} & \cdots & L_6 \end{bmatrix} \quad (2)$$

2nd step: Modify Equations 1 and 2 to obtain the new internal winding fault matrices [R]* and [L]*, as illustrated in Equations 3-4 [6]:

$$[R]^{*} = \begin{bmatrix} R_a & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & R_b & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & R_2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & R_3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & R_4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & R_5 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & R_6 \end{bmatrix} \quad (3)$$

$$[L]^{*} = \begin{bmatrix} L_a & M_{ab} & M_{a2} & M_{a3} & M_{a4} & M_{a5} & M_{a6} \\ M_{ba} & L_b & M_{b2} & M_{b3} & M_{b4} & M_{b5} & M_{b6} \\ M_{2a} & M_{2b} & L_2 & M_{23} & M_{24} & M_{25} & M_{26} \\ M_{3a} & M_{3b} & M_{32} & L_3 & M_{34} & M_{35} & M_{36} \\ M_{4a} & M_{4b} & M_{42} & M_{43} & L_4 & M_{45} & M_{46} \\ M_{5a} & M_{5b} & M_{52} & M_{53} & M_{54} & L_5 & M_{56} \\ M_{6a} & M_{6b} & M_{62} & M_{63} & M_{64} & M_{65} & L_6 \end{bmatrix} \quad (4)$$

3rd step: The inter-winding capacitances and earth capacitances of the HV and LV windings can be simulated by adding lumped capacitances connected to the terminals of the transformer.
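The 2nd step grows the 6x6 matrices to 7x7 by replacing the faulted winding (winding 1) with two sub-windings a and b. What follows is a schematic NumPy sketch of this bookkeeping only; the split ratio and matrix values are illustrative placeholders, since in [6] the actual sub-winding parameters are derived from the transformer geometry:

```python
import numpy as np

def expand_matrices_for_fault(R6, L6, split=0.1):
    """Structurally expand the 6x6 [R], [L] to the 7x7 [R]*, [L]* of
    Equations 3-4 by splitting winding 1 into sub-windings a and b.
    Placeholder numerics: Ra + Rb = R1; the a/b inductance entries simply
    duplicate row/column 1. The validated values in [6] are geometry-based."""
    Rd = np.diag(R6)
    R7 = np.diag(np.r_[split * Rd[0], (1 - split) * Rd[0], Rd[1:]])
    L7 = np.insert(L6, 1, L6[0, :], axis=0)   # placeholder row for winding b
    L7 = np.insert(L7, 1, L7[:, 0], axis=1)   # placeholder column for winding b
    return R7, L7

R6 = np.diag([0.2, 0.2, 0.2, 0.02, 0.02, 0.02])   # illustrative ohms
L6 = 0.05 * np.eye(6) + 0.01                      # illustrative henries
R7, L7 = expand_matrices_for_fault(R6, L6)
print(R7.shape, L7.shape)                         # (7, 7) (7, 7)
```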

2.2 Power system simulation using EMTP
A 50 MVA, 115/23 kV two-winding three-phase transformer was employed in the simulations, with all parameters and configuration provided by a manufacturer [11]. The scheme under investigation is a part of the Thailand electricity transmission and distribution system, as depicted in Figure 4. It can be seen that the transformer, as a step-down transformer, is connected between two sub-transmission sections. To implement the transformer model, simulations were performed with various changes in the system parameters as follows:
- The angles on the phase A voltage waveform at the instants of fault inception were 0°-330° (each step is 30°).
- The internal fault type at the transformer windings (both primary and secondary) that was investigated is the winding to ground fault.
- The fault positions, designated on any phase of the transformer windings (both primary and secondary), were varied at lengths of 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80% and 90% measured from the line end of the windings.
- The fault resistance was 5 Ω.

Figure 4. The system used in the simulation studies [12].

The primary and secondary current waveforms can then be simulated using ATP/EMTP, and these waveforms are interfaced to MATLAB/Simulink for the construction of the fault diagnosis process. Figure 5 illustrates an example of a phase A to ground fault at 10% in length of the high voltage winding, while a phase A to ground fault occurring at 10% in length of the low voltage winding is shown in Figure 6.

Figure 5. Primary and secondary currents for a case of a phase A to ground fault at 10% in length of the high voltage winding.

Figure 6. Primary and secondary currents for a case of a phase A to ground fault at 10% in length of the low voltage winding.

With the fault signals obtained from the simulations, the differential currents, which are the difference between the primary current and the secondary current, in all three phases as well as the zero sequence, are calculated, and the resultant current signals are extracted using the Wavelet transform. The coefficients of the signals obtained from the Wavelet transform are squared for a more explicit comparison. Figure 7 illustrates an example of an extraction using the Wavelet transform for the differential currents and the zero sequence current, from scale 1 to scale 5, for a case of a phase A to ground fault at 10% in length of the high voltage winding, while the case of a phase A to ground fault at 10% in length of the low voltage winding is shown in Figure 8.

Figure 7. Wavelet transform of differential currents (Winding to ground fault at 10% in length of the high voltage winding)

Figure 8. Wavelet transform of differential currents (Winding to ground fault at 10% in length of the low voltage winding)

3. Fault Detection Algorithm

After applying the Wavelet transform to the differential currents, the comparison of the coefficients from each scale is considered. The Wavelet transform is applied to the quarter cycle of the current waveforms after the fault inception. With several trial and error processes, the decision algorithm, on the basis of a computer programming technique, is constructed. The most appropriate algorithm for the decision, consistent with all results from the case studies of the system under investigation, can be summarized as in Figure 9,
where,
scale = the indicator of the DWT scale considered for detecting the fault

X_diff(t+5µs) = coefficient from the Wavelet transform of the differential current detected from phase X at time t+5µs
X_diff max(0→t) = maximum coefficient from the Wavelet transform of the differential current detected from phase X during the interval from t = 0 to t = t+5µs
X_diff chk = comparison indicator for a change in the coefficient from the Wavelet transform (A_diff chk, B_diff chk, C_diff chk), used for the separation between normal conditions and faults
t1 = 5 µs (depending on the sampling time used in ATP/EMTP)

Figure 9. Flowchart for detecting the phase with a fault condition

By performing many simulations, it has been found that, when applying the previously detailed algorithm for detecting internal faults at the transformer winding, the coefficient in scale 1 (50-100 kHz) from the DWT seems sufficient to indicate the internal fault inception of the transformer. As a result, it is unnecessary to use the coefficients from higher scales in this algorithm, and the coefficients in scale 1 from the DWT are used later in the training processes for the neural networks.

4. Neural Network Decision Algorithm

From the simulated signals, the DWT is applied to the quarter cycle of the differential current waveforms after the fault inception. The coefficients of scale 1 obtained using the wavelet transform are used for the training and test processes of the BPNN. In this paper, the structure of the back-propagation neural network consists of an input layer, two hidden layers and an output layer, as shown in Figure 10. Each layer is connected with weights and biases. In addition, the activation function is a key factor in the artificial neural network structure. The choice of activation function can change the behavior of the back-propagation neural network considerably. Hence, the activation functions in each hidden layer and the output layer are varied, as illustrated in Table 1, in order to select the best activation function for locating the positions along the transformer windings due to winding to ground faults of a two-winding transformer.
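The three candidate activation functions compared in Table 1 are standard; a minimal sketch of their definitions, using the MATLAB-style names tansig, logsig and purelin for orientation:

```python
import numpy as np

def tansig(n):    # Hyperbolic tangent sigmoid
    return np.tanh(n)

def logsig(n):    # Logistic sigmoid
    return 1.0 / (1.0 + np.exp(-n))

def purelin(n):   # Linear
    return n

n = np.array([-2.0, 0.0, 2.0])
print(tansig(n), logsig(n), purelin(n))
```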

Table 1: Activation functions in all hidden layers and output layers for training the neural networks

First hidden layer            Second hidden layer           Output layer
Hyperbolic tangent sigmoid    Logistic sigmoid              Linear / Logistic sigmoid / Hyperbolic tangent sigmoid
Hyperbolic tangent sigmoid    Hyperbolic tangent sigmoid    Linear / Logistic sigmoid / Hyperbolic tangent sigmoid
Logistic sigmoid              Logistic sigmoid              Linear / Logistic sigmoid / Hyperbolic tangent sigmoid
Logistic sigmoid              Hyperbolic tangent sigmoid    Linear / Logistic sigmoid / Hyperbolic tangent sigmoid

A training process was performed using the neural network toolboxes in MATLAB. It can be divided into three parts, as follows [13]:
1. The feedforward input pattern, which propagates data from the input layer to the hidden layers and finally to the output layer to calculate the responses to the input patterns, as illustrated in Equations 5 and 6:

$$a^{2} = f^{2}\left( lw^{2,1} \cdot f^{1}\left( iw^{1,1} \cdot p + b^{1} \right) + b^{2} \right) \quad (5)$$

$$o/p_{ANN} = f^{3}\left( lw^{3,2} \cdot a^{2} + b^{3} \right) \quad (6)$$
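A minimal NumPy sketch of the feedforward pass of Equations 5 and 6, with tanh in both hidden layers and a linear output layer (the layer sizes and the random weights are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative sizes: 4 inputs, two hidden layers of 10 neurons, 1 output.
iw11, b1 = rng.standard_normal((10, 4)), rng.standard_normal((10, 1))
lw21, b2 = rng.standard_normal((10, 10)), rng.standard_normal((10, 1))
lw32, b3 = rng.standard_normal((1, 10)), rng.standard_normal((1, 1))

def forward(p):
    """Equations 5-6 with f1 = f2 = tanh and f3 linear."""
    a1 = np.tanh(iw11 @ p + b1)     # first hidden layer
    a2 = np.tanh(lw21 @ a1 + b2)    # second hidden layer (Eq. 5)
    return lw32 @ a2 + b3           # linear output layer  (Eq. 6)

# Illustrative input: cD1 maxima of phases A, B, C and the zero sequence.
p = np.array([[0.30], [0.10], [0.05], [0.20]])
print(forward(p))
```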

Figure 10. Back-propagation with two hidden layers

where,
p = input vector of the BPNN
iw1,1 = weights between the input and the first hidden layer
lw2,1 = weights between the first and the second hidden layers
lw3,2 = weights between the second hidden layer and the output layer
b1, b2 = biases in the first and the second hidden layers respectively
b3 = bias in the output layer
f1, f2 = activation function (Hyperbolic tangent sigmoid function: tanh)
f3 = activation function (Linear function)

2. The back-propagation of the associated error between the outputs of the neural network and the target outputs; the error is fed to all neurons in the next lower layer, and is also used for the adjustment of the weights and biases.
3. The adjustment of the weights and biases by the Levenberg-Marquardt method (trainlm). This process aims at matching the calculated outputs to the target outputs. The mean absolute percentage error (MAPE), as an index for determining the efficiency of the back-propagation neural network, is computed in Equation 7:

$$\mathrm{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{o/p_{ANN_i} - o/p_{TARGET_i}}{o/p_{TARGET_i}} \right| \times 100\% \quad (7)$$

where,
n = number of test sets
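Equation 7 in code form; a minimal sketch (the predicted/target vectors are illustrative):

```python
import numpy as np

def mape(outputs, targets):
    """Mean absolute percentage error of Equation 7."""
    outputs, targets = np.asarray(outputs), np.asarray(targets)
    return np.mean(np.abs((outputs - targets) / targets)) * 100.0

# Illustrative predicted vs. actual fault positions (fractions of winding).
print(mape([0.11, 0.21, 0.39], [0.10, 0.20, 0.40]))  # about 5.8 (%)
```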

As a result, the structure of the back-propagation neural network consists of 4 input neurons, two hidden layers and 1 output neuron. The input patterns are the maximum detail coefficients (cD1) in scale 1 at ¼ cycle of phases A, B, C and the zero sequence of the post-fault differential currents, as mentioned in the previous section. The output variables of the neural network are designated as values ranging from 0.1 to 0.9, corresponding to the length of the winding at which the fault occurs.
In this training process, the number of neurons in both hidden layers was increased, as well as varying the activation functions in all hidden layers and the output layer, in order to select the best performance. The input data sets are normalized and divided into 216 sets for training and 108 sets for testing. During the training process, the weights and biases were adjusted, and there were 20,000 iterations in order to compute the best value of MAPE. The number of neurons in both hidden layers was increased before repeating the cycle of the training process. The training procedure was stopped when reaching the final number of neurons for the first hidden layer, or when the MAPE of the test sets was less than 0.5%. The training process is summarized in Figure 11.

Figure 11. Flowchart for the training process.

Figure 12. Comparison of the average error for fault positions with various activation functions, for each transformer winding.

Table 2: Average error of the test sets for locating the fault

Activation functions                                 High voltage winding            Low voltage winding
(first hidden / second hidden / output layer)        Max      Min      Avg           Max      Min      Avg
Hyperbolic tangent / Logistic / Linear               0.0414   0.0000   0.0099        0.1693   0.0000   0.0309
Hyperbolic tangent / Hyperbolic tangent / Linear     0.0322   0.0001   0.0089        0.0621   0.0001   0.0211
Logistic / Logistic / Linear                         0.0497   0.0000   0.0094        0.1759   0.0000   0.0307
Logistic / Hyperbolic tangent / Linear               0.0483   0.0001   0.0098        0.1709   0.0006   0.0377

After the training process, the algorithm was employed in order to locate the fault positions in the transformer winding. The case studies were varied so that the algorithm's capability could be verified; they were performed with various types of fault at each position in the transformer. The total number of case studies was 216. The results obtained with the various activation functions, for the test sets of both the high voltage and low voltage windings, are shown in Figure 12.
From Figure 12, it can be seen that there are four cases of activation functions with an average error of less than 5%, as follows:
- Hyperbolic tangent – Logistic – Linear.
- Hyperbolic tangent – Hyperbolic tangent – Linear.
- Logistic – Logistic – Linear.
- Logistic – Hyperbolic tangent – Linear.

When the training process was completed, the algorithm was implemented to locate the fault positions due to winding to ground faults along the transformer windings. The results obtained from the algorithm proposed in this paper are shown in Table 2. It can be seen that the accuracy of the Hyperbolic tangent – Hyperbolic tangent – Linear activation function case is highly satisfactory.
Figure 13 shows the comparison of the average error at various lengths of the winding among the four activation function cases. It can be seen that the average error of the fault locations for the high voltage winding is 2.5%, while the average error for the low voltage winding is 6%, at various lengths of the transformer winding.

(a) High Voltage (b) Low Voltage
Figure 13. Comparison of average errors for fault positions at various lengths of the windings, among the various activation functions.

(a) High Voltage (b) Low Voltage
Figure 14. Comparison of the average error for fault positions at various lengths of the winding, among the phases at which the fault occurs, with Hyperbolic tangent – Hyperbolic tangent – Linear as the activation functions in the respective layers.

From Figure 14, it can be seen that when the Hyperbolic tangent – Hyperbolic tangent – Linear activation functions in the respective layers are tested with various fault types on both the high voltage and low voltage windings of the three-phase transformer, the accuracy of the fault locations predicted by the decision algorithm is highly satisfactory, as also shown in Tables 3-6.

Table 3: Results of phase A to ground faults at the high voltage winding with various inception angles (fault position at 10% in length of the winding)

Fault Type                     Inception angle (Degree)   Actual position (%)   Predicted output   Error
Phase A to ground fault (HV)   90                         0.1                   0.1125             0.0125
Phase A to ground fault (HV)   150                        0.1                   0.1243             0.0243
Phase A to ground fault (HV)   240                        0.1                   0.1121             0.0121
Phase A to ground fault (HV)   300                        0.1                   0.1169             0.0169

Table 4: Results of phase A to ground faults at the high voltage winding with various lengths of the winding (inception angle of 240°)

Fault Type                     Inception angle (Degree)   Actual position (%)   Predicted output   Error
Phase A to ground fault (HV)   240                        0.2                   0.2005             0.0005
Phase A to ground fault (HV)   240                        0.4                   0.4054             0.0054
Phase A to ground fault (HV)   240                        0.6                   0.6042             0.0042
Phase A to ground fault (HV)   240                        0.8                   0.8025             0.0025

Table 5: Results of phase A to ground faults at the low voltage winding with various inception angles (fault position at 10% in length of the winding)

Fault Type                     Inception angle (Degree)   Actual position (%)   Predicted output   Error
Phase A to ground fault (LV)   60                         0.1                   0.0886             0.0114
Phase A to ground fault (LV)   120                        0.1                   0.1360             0.0360
Phase A to ground fault (LV)   210                        0.1                   0.1386             0.0386
Phase A to ground fault (LV)   330                        0.1                   0.0568             0.0432

Table 6: Results of phase A to ground faults at the low voltage winding with various lengths of the winding (inception angle of 210°)

Fault Type                     Inception angle (Degree)   Actual position (%)   Predicted output   Error
Phase A to ground fault (LV)   210                        0.2                   0.1943             0.0057
Phase A to ground fault (LV)   210                        0.4                   0.4265             0.0265
Phase A to ground fault (LV)   210                        0.6                   0.5871             0.0129
Phase A to ground fault (LV)   210                        0.8                   0.7968             0.0032

5. Conclusion

In this paper, studies of an appropriate activation function for the decision algorithm used in the detection of internal fault locations along transformer windings have been discussed. The maximum coefficients from the first scale at ¼ cycle of phases A, B, and C of the post-fault differential current signals, and of the zero sequence current, obtained by the wavelet transform, have been used as the input for the training process of a neural network in a decision algorithm with the use of back-propagation neural networks. The activation functions in each hidden layer and the output layer have been varied, and the results obtained from the decision algorithm have been investigated with variations of the fault inception angles, fault types and fault locations. The results have illustrated that the use of the Hyperbolic tangent sigmoid function in the first and second layers, with the Linear function in the output layer, is the most appropriate scheme for the internal fault detection of the transformer windings, as summarized in Table 2. This technique should be useful in checking and repairing the transformer when winding to ground faults occur. Further work will be the improvement of the algorithm so that the positions of interturn faults along the windings of the transformer can be identified.

References
[1] A.G. Phadke and J.S. Thorp, A new computer-based flux restrained current-differential relay for power transformer protection, IEEE Trans. Power Appar. Syst. PAS-102 (5) (1983) 3624-3629.
[2] T.S. Sidhu and M.S. Sachdev, On-line identification of magnetizing inrush current and internal faults in three-phase transformers, IEEE Trans. Power Delivery 7 (4) (1992) 1885-1891.
[3] Y. Zhang, X. Ding, Y. Liu and P.J. Griffin, An artificial neural network approach to transformer fault diagnosis, IEEE Trans. Power Delivery 11 (4) (1996) 1836-1841.
[4] M.G. Morante and D.W. Nicoletti, A wavelet-based differential transformer protection, IEEE Trans. Power Delivery 14 (4) (1999) 1352-1358.
[5] O.A.S. Youssef, A wavelet-based technique for discrimination between faults and magnetizing inrush currents in transformers, IEEE Trans. Power Delivery 18 (1) (2003) 170-176.
[6] P. Bastard, P. Bertrand and M. Meunier, A transformer model for winding fault studies, IEEE Trans. Power Delivery 9 (2) (1994) 690-699.
[7] S. M. Islam and G. Ledwich, Locating transformer faults through sensitivity analysis of high frequency modeling using a transfer function approach, IEEE International Symposium on Electrical Insulation, (1996) 38-41.
[8] Z. Q. Bo, M. A. Redfern and G. C. Weller, Positional Protection of Transmission Lines Using Fault Generated High Frequency Transient Signals, IEEE Trans. Power Delivery 15 (3) (2000) 888-894.
[9] P. Makming, S. Bunjongjit, A. Kunakorn, S. Jiriwibhakorn and M. Kando, Fault diagnosis in
(IJCNS) International Journal of Computer and Network Security, 55
Vol. 1, No. 2, November 2009

transmission lines using wavelet transforms, in: Anantawat Kunakorn graduated with
Proceedings of IEEE Transmission and Distribution B.Eng (Hons) in electrical engineering from
Conference, pp. 2246-2250, 2002. King Mongkut’s Institute of Technology
[10] IEEE working group 15.08.09, Modeling and analysis Ladkrabang, Bangkok, Thailand in 1992. He
received his M.Sc in electrical power
of system transients using digital programs, IEEE PES
engineering from University of Manchester
special publication Institute of Science and Technology, UK in
[11] ABB Thailand, Test report no. 56039. 1996, and Ph.D. in electrical engineering
[12] “Switching and Transmission Line Diagram”, from Heriot-Watt University, Scotland, UK
Electricity Generation Authorisation Thailand (EGAT). in 2000. He is currently an associate professor at the department of
[13] A. Ngaopitakkul and A. Kunakorn, “Internal Fault electrical engineering, King Mongkut’s Institute of Technology
Classification in Transformer Windings using Ladkrabang, Bangkok, Thailand. He is a member of IEEE and IEE.
Combination of Discrete Wavelet Transforms and His research interest is electromagnetic transients in power
Back-propagation Neural Networks,” International systems.
Journal of Control, Automation, and Systems (IJCAS),
pp. 365-371, June 2006.

Authors Profile
Atthapol Ngaopitakkul graduated with
B.Eng, M.Eng and D.Eng in electrical
engineering from King Mongkut’s Institute
of Technology Ladkrabang, Bangkok,
Thailand 2002, 2004 and 2007 respectively.
His research interests are on the
applications of wavelet transform and neural
networks in power system analysis.
A Proposed SAFER Plus Security Algorithm Using Fast Pseudo Hadamard Transform (FPHT) with Maximum Distance Separable Code for Bluetooth Technology

D. Sharmila1 and R. Neelaveni2

1 (Research Scholar), Associate Professor, Bannari Amman Institute of Technology, Sathyamangalam, Tamil Nadu-638401. sharmiramesh@rediffmail.com
2 Asst. Prof., PSG College of Technology, Coimbatore, Tamil Nadu-638401. rnv@eee.psgtech.ac.in
Abstract: In this paper, a comparison of various security algorithms for Bluetooth security systems, namely pipelined AES, triple DES, Elliptic Curve Diffie-Hellman (ECDH), the existing SAFER+ and the proposed SAFER+ algorithm, is presented. The performance of these security algorithms is evaluated based on three parameters: data throughput, frequency and security level. The existing SAFER+ algorithm is modified to achieve higher data throughput, frequency and security level. On comparison, the proposed SAFER+ algorithm proves to be better for implementation in Bluetooth devices than the existing algorithms.

Keywords: Secure And Fast Encryption Routine, Triple Data Encryption Standard, Pipelined Advanced Encryption Standard, Elliptic Curve Diffie-Hellman, Pseudo Hadamard Transform, Encryption and Decryption.

1. Introduction

Wireless communication is one of the vibrant areas in the communication field today. While it has been a topic of study since the 1960's, the past decade has seen a surge of research activity in the area. This is due to the confluence of several factors. First, there has been an explosive increase in the demand for tetherless connectivity, driven so far mainly by cellular telephony but expected to be soon eclipsed by wireless data applications. Second, the dramatic progress in VLSI technology has enabled small-area and low-power implementation of sophisticated signal processing algorithms and coding techniques. Third, the success of third-generation (3G) digital wireless standards, in particular the IS-95 Code Division Multiple Access (CDMA) standard, provides a concrete demonstration that good ideas from communication theory have a significant impact in practice.

Wireless communication technology has advanced at a very fast pace during the last years, creating new applications and opportunities. In addition, the number of computing and telecommunications devices is increasing, and special attention has to be given to connecting these devices efficiently. Any electronic device can communicate without wires. In the past, cable and infrared light connectivity methods were used. The cable solution is complicated, since it requires special connectors, cables and space; this produces a lot of malfunctions and connectivity problems. The infrared solution requires line of sight. The drawbacks of cables are:
• A tangle of cables
• Varying standards of cables and connectors
• Unreliable galvanic connections
• The need to keep cables and connectors in store
• Awkwardness in moving computerized units to different locations, as cables might not be long enough
• The need for manual switches when the number of physical ports is not sufficient for the need at hand

There are several security algorithms available to ensure security in wireless network devices. Some of the major methods are AES, DES, Triple DES, IDEA, BLOWFISH, SAFER+, ECDH, etc. The SAFER+ algorithm is based on the existing SAFER family of ciphers. Although SAFER+ is the most widely used algorithm, it seems to have some vulnerabilities. The objective is to compare the various security algorithms: pipelined AES, triple DES, Elliptic Curve Diffie-Hellman (ECDH), the existing SAFER+ and the proposed SAFER+ algorithm. The proposed SAFER+ algorithm has a rotation block between every round of the existing SAFER+, the PHT is replaced by the Fast Pseudo Hadamard Transform (FPHT), and the first round inputs are added or XORed with the third round inputs while the fifth round inputs are added or XORed with the seventh round inputs. Thus the proposed SAFER+ has higher data throughput and frequency than the existing algorithms.

In this paper, Section 2 describes the overview of the Bluetooth security architecture, Section 3 deals with the existing SAFER+ algorithm, the proposed SAFER+ algorithm is explained in Section 4, Section 5 deals with the results, and Section 6 gives the conclusion.
2. Bluetooth Architecture

Connection types define the possible ways Bluetooth devices can exchange data. Bluetooth has three connection types: ACL (Asynchronous Connection-Less), SCO (Synchronous Connection-Oriented) and eSCO (extended Synchronous Connection-Oriented). ACL links are for symmetric (maximum of 1306.9 kb/s in both directions) or asymmetric (maximum of 2178.1 kb/s for send and 177.1 kb/s for receive) data transfer. Retransmission of packets is used to ensure the integrity of data. SCO links are symmetric (maximum of 64 kb/s in both directions) and are used for transferring real-time two-way voice. Retransmission of voice packets is not used; therefore, when the channel BER is high, voice can be distorted. eSCO links are also symmetric (maximum of 864 kb/s in both directions) and are used for transferring real-time two-way voice. Retransmission of packets is used to ensure the integrity of the data (voice); because of this, eSCO links can also carry data packets, but they are mainly used for real-time two-way voice. Only Bluetooth 1.2 or 2.0+EDR devices can use eSCO links, but SCO links must also be supported to provide backward compatibility. Bluetooth devices that communicate with each other form a piconet [7][8][9]. Bluetooth defines several security modes, but only two of them actually provide confidentiality.

3. Description of the SAFER Plus Algorithm

The SAFER+ (Secure And Fast Encryption Routine) algorithm is based on the existing SAFER family of ciphers, which comprises the ciphers SAFER K-64, SAFER K-128 and SAFER SK-128. All are byte-oriented block encryption algorithms characterized by two properties. First, they use a non-orthodox linear transformation, called the Pseudo-Hadamard Transformation (PHT), for the desired diffusion; second, they use additive constant factors (bias vectors) in the key scheduling to avoid weak keys [5].

SAFER+ consists of two main units: the encryption data path and the key-scheduling unit. The key-scheduling unit allows on-the-fly computation of the round keys. To reduce the silicon area, we used eight loops of a key-scheduling single-round implementation. Round keys are applied in parallel in the encryption data path. The full SAFER+ algorithm execution requires eight loops of the single round. We chose the single-round hardware implementation solution because, with this minimum silicon area, we could achieve the required throughput. The encryption data path's first component is an input register, which combines the plaintext and the feedback data produced in the previous round. The input register feeds the SAFER+ single round.

3.1 SAFER+ Single Round

A SAFER+ single round has four subunits:
• The mixed XOR/addition subunit, which combines data with the appropriate round subkey K2r-1.
• The non-linear layer (use of the non-linear functions e and l). The e function is implemented as y = 45^x in GF(257), except that 45^128 is taken as 0. The l function is implemented as y = log45(x) in GF(257), except that log45(0) = 128.
• The mixed addition/XOR subunit, which combines data with the round subkey K2r.
• The four linear Pseudo-Hadamard Transformation layers, connected through an "Armenian Shuffle" permutation.

Figure 1. SAFER Plus single round

The non-linear layer is implemented using a data-mapping component that produces the X1 and X2 bytes. These bytes are the input of the non-linear functions e and l. During one round, we execute e and l eight times. This design significantly reduces the required silicon area; each function is implemented using 256 bytes of ROM. After the SAFER+ single round in the encryption data path is a mixed XOR/addition (or key odd addition) component.
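Since the two exceptional values quoted above fully determine the e and l boxes, the two 256-byte tables can be generated directly. A minimal Python sketch (the table names E and L are illustrative, not from the paper):

```python
# Exponent and logarithm tables for the SAFER+ non-linear layer:
# e(x) = 45^x mod 257, with 45^128 taken as 0, and l as its inverse
# with l(0) = 128.
E = [pow(45, x, 257) % 256 for x in range(256)]
# pow(45, 128, 257) == 256, which the cipher stores as 0; the "% 256"
# above folds 256 to 0, matching the stated exception.
L = [0] * 256
for x, y in enumerate(E):
    L[y] = x                    # l(e(x)) = x, hence l(0) = 128

assert E[128] == 0 and L[0] == 128
assert all(L[E[x]] == x for x in range(256))
```

Tabulating both directions mirrors the 256-bytes-of-ROM-per-function arrangement described above.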
4. Proposed SAFER Plus Algorithm

Figure 2. Proposed SAFER+ for encryption

The existing SAFER+ algorithm is modified to provide higher data throughput and frequency. The modified SAFER+ algorithm has three modifications compared to the existing one:

Figure 3. Proposed SAFER+ for decryption

(i) A rotation block is introduced between every round. Rotation is towards the left for encryption and towards the right for decryption.
(ii) The input of round 1 and the output of round 2 are XORed/added modulo 16, byte-by-byte, to form the input of round 3. Similarly, the input of round 5 and the output of round 6 are XORed/added modulo 16, byte-by-byte, to form the input of round 7.
(iii) Instead of the PHT layer, the Fast Pseudo Hadamard Transform (FPHT) is used.

The encryption and decryption block diagrams are given in Figure 2 and Figure 3; a rough sketch of the modified round sequence is given below. The proposed work is to replace the Pseudo Hadamard Transform by the Fast Pseudo Hadamard Transform.

Figure 4. Proposed SAFER+ single round
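The three modifications can be summarized as an illustrative round sequence. The sketch below is not the paper's specification: single_round stands for the SAFER+ round with the FPHT inside (modification iii), the rotation amount is assumed to be one byte (modification i), and the combining step of modification (ii) is shown as byte-wise addition, one of the two stated options:

```python
def rotate_left(block):
    # modification (i): rotate the 16-byte block between rounds
    return block[1:] + block[:1]

def proposed_encrypt(block, round_keys, single_round):
    # round_keys is assumed indexable by round number 1..8
    round1_input = block
    round5_input = None
    for r in range(1, 9):                       # eight SAFER+ rounds
        if r == 3:                              # (ii) round-1 input combined
            block = [(a + b) % 256              #      with round-2 output
                     for a, b in zip(round1_input, block)]
        if r == 5:
            round5_input = block
        if r == 7:                              # (ii) round-5 input combined
            block = [(a + b) % 256              #      with round-6 output
                     for a, b in zip(round5_input, block)]
        block = single_round(block, round_keys[r])
        block = rotate_left(block)
    return block
```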
Figure 4 shows the proposed SAFER+ single round architecture. In this design, fast algorithms for the Pseudo Hadamard Transform with MDS code are used in place of the PHT to implement the transform most efficiently. The FPHT has been analyzed with respect to its speed and security. The transform has a provably bounded branch value for any given dimension as well as a fast implementation, which requires at most O(N log N) time to complete. It is possible to join the FPHT and MDS to create a fast transform that has a higher branch than the FPHT alone.

4.1 Fast Pseudo Hadamard Transform Algorithm

The FPHT has several efficient means of implementation, which makes the design construct very flexible. The FPHT can be characterized by a recursive linear transform defined by the relationship

H_1 = [[2, 1], [1, 1]],   H_n = [[2·H_{n-1}, H_{n-1}], [H_{n-1}, H_{n-1}]]   (1)

It is provably non-singular, since the two vectors [2, 1] and [1, 1] are linearly independent.

An emerging block cipher and one-way hash function design construct is the Maximum Distance Separable (MDS) code. The goal of the MDS code is to promote a high branch through the linear components of the design, to ensure a correspondingly low differential and linear "prop-ratio". An MDS code of dimension N×N requires O(N^2) time to complete. MDS and FPHT codes can be combined to produce fast transforms with branch numbers much higher than an unmodified FPHT of comparable dimension. Any FPHT requires at most O(n log n) time to complete, which scales nicely compared to an equal-dimension MDS code, which requires O(n^2) time. More specifically, with O(n) space an FPHT requires only O(log n) time to complete. In hardware designs the actual transform is very efficient, since only the H_1 transform must be implemented directly, as shown in Figure 5; a trivial multiplication by p(x) = x is all that is required.
Figure 5. H3 as a three layer network
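A minimal recursive sketch of the transform in (1) is given below; the byte-wise modulus and the top/bottom output ordering are assumptions, since the paper does not fix them here:

```python
# y = H_n · x, computed recursively in O(n log n) additions:
# with u = H_{n-1} x_top and v = H_{n-1} x_bottom, the result is
# (2u + v, u + v), mirroring the block structure of (1).
def fpht(x, mod=256):
    n = len(x)                      # n must be a power of two
    if n == 1:
        return list(x)
    half = n // 2
    u = fpht(x[:half], mod)         # H_{n-1} on the top half
    v = fpht(x[half:], mod)         # H_{n-1} on the bottom half
    top = [(2 * a + b) % mod for a, b in zip(u, v)]
    bot = [(a + b) % mod for a, b in zip(u, v)]
    return top + bot

print(fpht([1, 0]))   # [2, 1], the first column of H_1
```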

5. Results

Various existing algorithms are analyzed and compared with the proposed algorithm based on parameters such
as encryption frequency, data throughput and security level, and the results are shown in the bar charts.

[Figure 6: bar chart of encryption time in milliseconds: Triple DES (99), Pipelined AES (88.33), ECDH (78.08), Safer Plus (65.44), Safer Plus with FPHT and MDS (58.33)]

Figure 6. Encryption Time Vs Various Algorithms

Based on the analysis, the modified Safer Plus algorithm required the minimum encryption time and the maximum encryption frequency when compared to all the existing algorithms, due to the inclusion of the FPHT instead of the PHT layer, as shown in Figure 6 and Figure 7.

[Figure 7: bar chart of encryption frequency in kHz: Triple DES (10.1), Pipelined AES (12.56), ECDH (14.82), Safer Plus (16.63), Safer Plus with FPHT and MDS (19.14)]

Figure 7. Encryption Frequency Vs Various Algorithms

The number of hits required to attack the various algorithms is also compared and shown in Figure 8. The security level is much enhanced for the proposed algorithm, since its number of hits is found to be the maximum (1125852), due to the introduction of the rotation block between every round.

[Figure 8: bar chart of hit counts for attack for the five algorithms; plotted values are 1125852, 943999, 873736, 722519 and 688787]

Figure 8. No. of Hit Counts Vs Various Algorithms

[Figure 9: bar chart of data throughput in bytes for the five algorithms]

Figure 9. Data Throughput Vs Various Algorithms

Figure 9 shows that the modified Safer Plus algorithm has comparatively higher data throughput because the input of round 1 and the output of round 2 are combined by the XOR/Add operation to form the input of round 3.

6. Conclusion

In this paper, a modified SAFER Plus algorithm is proposed by replacing the PH transform with the FPH transform and MDS code and by introducing a rotation block for every round. The existing security algorithms are compared with the proposed SAFER Plus algorithm. The entire design is captured in J2ME. The efficiency of the algorithm is evaluated by the analysis of parameters like encryption time, encryption frequency, data throughput and security level. On comparison, the modified Safer Plus algorithm proved to be better for implementation in Bluetooth devices than the existing algorithms.

References
[1] P. Kitsos, N. Sklavos, K. Papadomanolakis and O. Koufopavlou (University of Patras, Greece), "Hardware Implementation of Bluetooth Security," IEEE CS and IEEE Communications Society, January-March 2003, pp. 21-29.
[2] K. Scarfone and J. Padgette, "Guide to Bluetooth Security," National Institute of Standards and Technology, Special Publication 800-121, U.S. Department of Commerce, 43 pages.
[3] J. T. Vainio, "Bluetooth Security," Helsinki University of Technology, 25 May 2000.
[4] G. Lee, "Bluetooth Security Implementation based on Software Oriented Hardware-Software Partition," IEEE journal, 2005, pp. 2070-2074.
[5] J. Kardach, "Bluetooth Architecture Overview," Intel Technology Journal, 2000.
[6] J. Oraskari, "Bluetooth versus WLAN IEEE 802.11x," Helsinki University of Technology, October 2001.
[7] A. Laurie and B. Laurie, "Serious flaws in Bluetooth security lead to disclosure of personal data," http://bluestumbler.com.
[8] B. A. Miller and C. Bisdikian, "Bluetooth Revealed," low price edition.
[9] Wikipedia.org, "Bluetooth," 5 March 2005, http://en.wikipedia.org/wiki/Bluetooth (21 February 2005).
[10] Vrije Universiteit Brussel, "Bluetooth Security," PhD thesis, December 2004.
[11] J. L. Massey, "On the Optimality of SAFER+ Diffusion," Second Advanced Encryption Standard Candidate Conference (AES2), Rome, Italy, March 22-23, available online at http://csrc.nist.gov/encryption/aes/round1/conf2/aes2conf.htm.
Mining Fuzzy Multidimensional Association Rules Using Fuzzy Decision Tree Induction Approach*

Rolly Intan1, Oviliani Yenty Yuliana2, Andreas Handojo3

Department of Informatics Engineering, Petra Christian University,
Siwalankerto 121-131, Surabaya 60236, Indonesia
1 rintan@petra.ac.id, 2 ovi@petra.ac.id, 3 handojo@petra.ac.id
Abstract: Mining fuzzy multidimensional association rules is one of the important processes in data mining applications. This paper extends the concept of Decision Tree Induction (DTI) to deal with fuzzy values in order to express human knowledge for mining fuzzy multidimensional association rules. DTI, one of the data mining classification methods, is used in this research for predictive problem solving in analyzing patient medical track records. Meaningful fuzzy labels (using fuzzy sets) can be defined for each domain of data. For example, the fuzzy labels poor disease, moderate disease and severe disease are defined to describe a condition/type of disease. We extend and propose a concept of fuzzy information gain to employ the highest information gain for splitting a node. In the process of generating fuzzy multidimensional association rules, we propose some fuzzy measures to calculate their support, confidence and correlation. The designed application makes a significant contribution in assisting decision makers in analyzing and anticipating disease epidemics in a certain area.

Keywords: Data Mining, Classification, Decision Tree Induction, Fuzzy Set, Fuzzy Association Rules.

1. Introduction

Decision Tree Induction (DTI) has been used in machine learning and in data mining as a model for predicting a target value based on a given relational database. There are some commercial decision tree applications, such as an application for analyzing the repayment of a loan for owning or renting a house [16] and an application for software quality classification based on the risk of program modules [17]. Both applications inspired this research to develop an application for analyzing patient medical track records. The application is able to present relations among (single/group) values of patient attributes in a decision tree diagram. In the developed application, some domains of data need to be utilized by meaningful fuzzy labels. For example, the fuzzy labels poor disease, moderate disease and severe disease describe a condition/type of disease, while young, middle-aged and old are used as the fuzzy labels of ages. Here, a fuzzy set is defined to express a meaningful fuzzy label. In order to utilize the meaningful fuzzy labels, we need to extend the concept of (crisp) DTI using a fuzzy approach. Simply, the extended concept is called a Fuzzy Decision Tree (FDT). To generate an FDT from a normalized database that consists of several tables, there are several sequential processes, as shown in Figure 1. First is the process of joining tables, known as denormalization of the database, as discussed in [4]. The process of denormalization can be performed based on the relations of tables as presented in the Entity Relationship Diagram (ERD) of a relational database. The result of this process is a general (denormalized) table. Second is the process of constructing the FDT generated from the denormalized table.

Figure 1. Process of mining association rules

In the process of constructing the FDT, we propose a method for calculating fuzzy information gain by extending the existing concept of (crisp) information gain, to employ the highest information gain for splitting a node. The last is the process of mining fuzzy association rules. In this process, fuzzy association rules are mined from the FDT. In the process of mining fuzzy association rules, we propose some fuzzy measures to calculate their support, confidence and correlation. Minimum support, confidence and correlation can be given to reduce the number of mined fuzzy association rules. The designed application makes a significant contribution in assisting decision makers in analyzing and anticipating disease epidemics in a certain area.

The structure of the paper is the following. Section 2 discusses the denormalization process of data. Section 3 gives a basic concept of association rules; the definitions and formulations of some measures, such as the support, correlation and confidence of a rule, used for determining the interestingness of the association rules, are briefly recalled. Section 4, as the main contribution of this paper, is devoted to proposing the concept and algorithm for generating the FDT. Section 5 proposes some equations of fuzzy measures that play an important role in the process of mining fuzzy multidimensional association rules. Section 6 demonstrates the algorithm and some simple illustrative results. Finally, a conclusion is given in Section 7.
2. Denormalization of Data

In general, the process of mining data for discovering association rules has to be started from a single table (relation) as a source of data representing relations among item data. Formally, a relational data table [13] R consists of a set of tuples, where ti represents the i-th tuple and, if there are n domain attributes D, then ti = (di1, di2, …, din). Here, dij is an atomic value of tuple ti with the restriction to the domain Dj, where dij ∈ Dj. Formally, a relational data table R is defined as a subset of the cross product D1 × D2 × … × Dn, where D = {D1, D2, …, Dn}. A tuple t (with respect to R) is an element of R. In general, R can be shown in Table 1.

Table 1: A Schema of a Relational Data Table

Tuples   D1    D2    …    Dn
t1       d11   d12   …    d1n
t2       d21   d22   …    d2n
…        …     …     …    …
tr       dr1   dr2   …    drn

A normalized database is assumed to be the result of a process of normalizing data in a certain contextual data. The database may consist of several relational data tables that have relations to one another. Their relations may be represented by an Entity Relationship Diagram (ERD). Hence, if we need to process some domains (columns) of data that are parts of different relational data tables, all of the involved tables have to be combined (joined) together, providing a general data table. Since the process of joining tables is the opposite of the process of normalizing data, by which the resulting general data table is not a normalized table, the process is simply called denormalization, and the general table is then called a denormalized table. In the process of denormalization, it is not necessary that all domains (fields) of all the combined tables be included in the targeted table. Instead, the targeted denormalized table consists only of the interesting domain data that are needed in the process of mining rules. The process of denormalization can be performed based on two kinds of data relations, as follows.

2.1. Metadata of the Normalized Database

Information about relational tables can be stored in metadata. Simply, metadata can be stored and represented by a table. Metadata can be constructed using the information of relational data as given in an Entity Relationship Diagram (ERD). For instance, a symbolic ERD physical design is arbitrarily shown in Figure 2.

Figure 2. Example of ERD Physical Design

From the example, it is clearly seen that there are four tables: A, B, C and D. Here, all tables are assumed to be independent, for they have their own primary keys. The cardinality of the relationship between Tables A and C is supposed to be a one-to-many relationship. It is similar for the relationship between Tables A and B as well as Tables B and D. Table A consists of four domains/fields, D1, D2, D3 and D4; Table B also consists of four domains/fields, D1, D5, D6 and D7; Table C consists of three domains/fields, D1, D8 and D9; Table D consists of four domains/fields, D10, D11, D12 and D5. Therefore, there are in total 12 domains of data, as given by D = {D1, D2, D3, …, D11, D12}. The relationship between A and B is conducted by domain D1. Tables A and C are also connected by domain D1. On the other hand, the relationship between B and D is conducted by D5. The relations among A, B, C and D can also be represented by a graph, as shown in Figure 3.

C --{D1}-- A --{D1}-- B --{D5}-- D

Figure 3. Graph Relation of Entities

Metadata expressing the relations among the four tables as given in the example can simply be seen in Table 2.

Table 2: Example of Metadata

Table-1    Table-2    Relations
Table A    Table B    {D1}
Table A    Table C    {D1}
Table B    Table D    {D5}

Through the metadata as given in the example, we may construct six possibilities of denormalized tables, as shown in Table 3.

Table 3: Possibilities of Denormalized Tables

No.   Denormalized Table
1     CA(D1,D2,D3,D4,D8,D9); CA(D1,D2,D8,D9); CA(D1,D3,D4,D9), etc.
2     CAB(D1,D2,D3,D4,D8,D9,D5,D6,D7); CAB(D1,D2,D4,D9,D5,D7), etc.
3     CABD(D1,D2,D3,D4,D5,D6,D7,D8,D9,D10,D11,D12), etc.
4     AB(D1,D2,D3,D4,D5,D6,D7), etc.
5     ABD(D1,D2,D3,D4,D5,D6,D7,D10,D11,D12), etc.
6     BD(D5,D6,D7,D10,D11,D12), etc.
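As an illustration of the joining step, the sketch below builds one of the denormalized tables of Table 3 from the metadata of Table 2. The use of pandas and the helper name denormalize are implementation choices, not part of the paper:

```python
import pandas as pd

# metadata as in Table 2: (Table-1, Table-2, relation key)
metadata = [("A", "B", "D1"), ("A", "C", "D1"), ("B", "D", "D5")]

def denormalize(tables, joins, wanted_columns):
    """tables: dict of table name -> DataFrame; joins: a chain of
    (left, right, key) entries taken from the metadata table."""
    result = None
    for left, right, key in joins:
        if result is None:
            result = tables[left]
        result = result.merge(tables[right], on=key, how="inner")
    return result[wanted_columns]

# e.g. the denormalized table CAB(D1, D2, D4, D9, D5, D7):
# denormalize(tables, [("C", "A", "D1"), ("A", "B", "D1")],
#             ["D1", "D2", "D4", "D9", "D5", "D7"])
```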
CA(D1,D2,D3,D4,D8,D9) means that Tables A and C are joined together and all their domains participate in the result of the joining process. It is not necessary to take all domains from all joined tables to be included in the result, e.g. CA(D1,D2,D8,D9), CAB(D1,D2,D4,D9,D5,D7) and so on. In this case, which domains are included in the result of the process depends on which domains are needed in the process of mining rules. Since D1, D8 and D5 are the primary keys of Tables A, C and B, they are mandatorily included in the result, Table CAB.

2.2. Table and Function Relations

It is possible for the user to define a mathematical function (or table) relation connecting two or more domains from two different tables in order to establish a relationship between their entities. Generally, the data relationship function performs a mapping process from one or more domains of an entity to one or more domains of its partner entity. Hence, considering the number of domains involved in the process of mapping, it can be verified that there are four possible relations of mapping. Let A(A1, A2, …, An) and B(B1, B2, …, Bm) be two different entities (tables). The four possibilities of a function f performing a mapping process are given by:
o One-to-one relationship: f : Ai → Bk
o One-to-many relationship: f : Ai → Bp1 × Bp2 × … × Bpk
o Many-to-one relationship: f : Ar1 × Ar2 × … × Ark → Bk
o Many-to-many relationship: f : Ar1 × Ar2 × … × Ark → Bp1 × Bp2 × … × Bpk
Obviously, there is no requirement concerning the type and size of data between domains in A and domains in B. All connections, types and sizes of data are absolutely dependent on the function f. The construction of the denormalized data is then performed based on the defined function.

3. Fuzzy Multidimensional Association Rules

Association rules find interesting associations or correlation relationships among a large data set of items [1,10]. The discovery of interesting association rules can help in the decision-making process. Association rule mining that implies a single predicate is referred to as a single dimensional or intradimension association rule, since it contains a single distinct predicate with multiple occurrences (the predicate occurs more than once within the rule). The terminology of single dimensional or intradimension association rules is used in multidimensional databases by assuming each distinct predicate in the rule to be a dimension [1].

Here, the method of market basket analysis can be extended and used for analyzing any context of a database. For instance, a database of the medical track records of patients may be analyzed to find associations (correlations) among diseases, taken from the data of several complicated diseases suffered by patients over a certain time. For example, a Boolean association rule "Bronchitis ⇒ Lung Cancer" might be discovered, representing a relation between "Bronchitis" and "Lung Cancer", which can also be written as a single dimensional association rule as follows:

Rule-1: Dis(X, "Bronchitis") ⇒ Dis(X, "Lung Cancer"),

where Dis is a given predicate and X is a variable representing a patient who has a kind of disease (i.e. "Bronchitis" and "Lung Cancer"). In general, "Lung Cancer" and "Bronchitis" are two different data values taken from a certain data attribute, called an item. Apriori [1,10] is an influential algorithm used for mining frequent itemsets for Boolean (single dimensional) association rules.

Additional related information regarding the identity of patients, such as age, occupation, sex, address, blood type, etc., may also have a correlation with the illness of patients. Considering each data attribute as a predicate, it can therefore be interesting to mine association rules containing multiple predicates, such as:

Rule-2: Age(X, "60") ∧ Smk(X, "yes") ⇒ Dis(X, "Lung Cancer"),

where there are three predicates, namely Age, Smk (smoking) and Dis (disease). Association rules that involve two or more dimensions or predicates can be referred to as multidimensional association rules. Multidimensional association rules with no repeated predicates, as given by Rule-2, are called interdimension association rules [1]. It may be interesting to mine multidimensional association rules with repeated predicates. These rules are called hybrid-dimension association rules, e.g.:

Rule-3: Age(X, "60") ∧ Smk(X, "yes") ∧ Dis(X, "Bronchitis") ⇒ Dis(X, "Lung Cancer").

To provide more meaningful association rules, it is necessary to utilize fuzzy sets over a given database attribute, giving what is called a fuzzy association rule, as discussed in [4,5]. Formally, given a crisp domain D, any arbitrary fuzzy set (say, fuzzy set A) is defined by a membership function of the form [2,8]:

A : D → [0,1].  (1)

A fuzzy set may be represented by a meaningful fuzzy label. For example, "young", "middle-aged" and "old" are fuzzy sets over age that are defined on the interval [0, 100], as arbitrarily given by [2]:
young(x) = 1 for x ≤ 20; (35 − x)/15 for 20 < x < 35; 0 for x ≥ 35.

middle_aged(x) = 0 for x ≤ 20 or x ≥ 60; (x − 20)/15 for 20 < x < 35; 1 for 35 ≤ x ≤ 45; (60 − x)/15 for 45 < x < 60.

old(x) = 0 for x ≤ 45; (x − 45)/15 for 45 < x < 60; 1 for x ≥ 60.
Using the previous definition of fuzzy sets on age, an example of a multidimensional fuzzy association rule relating the predicates Age, Smk and Dis may then be represented by:

Rule-4: Age(X, "young") ∧ Smk(X, "yes") ⇒ Dis(X, "Bronchitis").

3.1. Support, Confidence and Correlation

Association rules are kinds of patterns representing correlations of attribute-values (items) in a given set of data provided by a process of a data mining system. Generally, an association rule is a conditional statement (a kind of if-then rule). More formally [1], association rules are of the form A ⇒ B, that is, a1 ∧ … ∧ am ⇒ b1 ∧ … ∧ bn, where ai (for i ∈ {1,…,m}) and bj (for j ∈ {1,…,n}) are items (attribute-values). The association rule A ⇒ B is interpreted as "database tuples that satisfy the conditions in A are also likely to satisfy the conditions in B". A = {a1, …, am} and B = {b1, …, bn} are two distinct itemsets. The performance or interestingness of an association rule is generally determined by three factors, namely the confidence, support and correlation factors. Confidence is a measure of certainty to assess the validity of the rule. Given a set of relevant data tuples (or transactions in a relational database), the confidence of "A ⇒ B" is defined by:

confidence(A ⇒ B) = #tuples(A and B) / #tuples(A),  (2)

where #tuples(A and B) means the number of tuples containing A and B. For example, a confidence of 80% for an association rule (for example, Rule-1) means that 80% of all patients infected with bronchitis are likely to also be infected with lung cancer. The support of an association rule refers to the percentage of relevant data tuples (or transactions) for which the pattern of the rule is true. For the association rule "A ⇒ B", where A and B are sets of items, the support of the rule can be defined by

support(A ⇒ B) = support(A ∪ B) = #tuples(A and B) / #tuples(all_data),  (3)

where #tuples(all_data) is the number of all tuples in the relevant data tuples (or transactions). For example, a support of 30% for an association rule (e.g., Rule-1) means that 30% of all patients in all the medical record data are infected with both bronchitis and lung cancer. From (3), it follows that support(A ⇒ B) = support(B ⇒ A). Also, (2) can be calculated by

confidence(A ⇒ B) = support(A ∪ B) / support(A).  (4)

The correlation factor is another kind of measure to evaluate the correlation between A and B. Simply, the correlation factor can be calculated by:

correlation(A ⇒ B) = correlation(B ⇒ A) = support(A ∪ B) / (support(A) × support(B)).  (5)

Itemsets A and B are dependent (positively correlated) iff correlation(A ⇒ B) > 1. If the correlation is equal to 1, then A and B are independent (no correlation). Otherwise, A and B are negatively correlated if the resulting value of the correlation is less than 1.
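The measures (2)-(5) can be transcribed compactly over a list of transactions; the container choices and the tiny example data are illustrative:

```python
def support(itemset, transactions):
    # (3): fraction of transactions containing every item of the itemset
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(A, B, transactions):
    # (4): support(A ∪ B) / support(A)
    return support(A | B, transactions) / support(A, transactions)

def correlation(A, B, transactions):
    # (5): support(A ∪ B) / (support(A) * support(B))
    return support(A | B, transactions) / (
        support(A, transactions) * support(B, transactions))

tx = [{"bronchitis", "lung cancer"}, {"bronchitis"}, {"flu"}]
print(confidence({"bronchitis"}, {"lung cancer"}, tx))   # 0.5
```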
A data mining system has the potential to generate a huge number of rules, not all of which are interesting. Here, there are several objective measures of rule interestingness; three of them are the measure of rule support, the measure of rule confidence and the measure of correlation. In general, each interestingness measure is associated with a threshold, which may be controlled by the user. For example, rules that do not satisfy a confidence threshold (minimum confidence) of, say, 50% can be considered uninteresting. Rules below the thresholds (minimum support as well as minimum confidence) likely reflect noise, exceptions or minority cases and are probably of less value. We may only consider rules that have a positive correlation between their itemsets.

As previously explained, association rules that involve two or more dimensions or predicates can be referred to as multidimensional association rules. Multidimensional rules with no repeated predicates are called interdimension association rules (e.g., Rule-2) [1]. On the other hand, multidimensional association rules with repeated predicates, which contain multiple occurrences of some predicates, are called hybrid-dimension association rules. These rules may also be considered as a combination (hybridization) of intradimension association rules and interdimension association rules. An example of such a rule is shown in Rule-3, where the predicate Dis is repeated. Here, we may firstly be interested in mining multidimensional association rules with no repeated predicates, or interdimension association rules. The interdimension association rules may be generated from a relational database or data warehouse with multiple
attributes, by which each attribute is associated with a predicate. To generate the multidimensional association rules, we introduce an alternative method for mining the rules by searching for the predicate sets. Conceptually, a multidimensional association rule A ⇒ B consists of A and B as two datasets, called the premise and conclusion, respectively.

Formally, A is a dataset consisting of several distinct data values, where each data value in A is taken from a distinct domain attribute in D, as given by

A = {aj | aj ∈ Dj, for some j ∈ Nn},

where D_A ⊆ D is the set of domain attributes from which all data values of A come. Similarly,

B = {bj | bj ∈ Dj, for some j ∈ Nn},

where D_B ⊆ D is the set of domain attributes from which all data values of B come. For example, from Rule-2 it can be found that A = {60, yes}, B = {Lung Cancer}, D_A = {Age, Smk} and D_B = {Dis}. Considering that A ⇒ B is an interdimension association rule, it can be proved that |D_A| = |A|, |D_B| = |B| and D_A ∩ D_B = ∅.

The support of A is then defined by:

support(A) = |{ti | dij = aj, ∀aj ∈ A}| / r,  (6)

where r is the number of records or tuples (see Table 1). Alternatively, r in (6) may be changed to |QD(D_A)| by assuming that the records or tuples involved in the process of mining association rules are records in which the data values of a certain set of domain attributes, D_A, are not null. Hence, (6) can also be defined by:

support(A) = |{ti | dij = aj, ∀aj ∈ A}| / |QD(D_A)|,  (7)

where QD(D_A), simply called the qualified data of D_A, is defined as the set of record numbers (ti) in which all data values of the domain attributes in D_A are not null. Formally, QD(D_A) is defined as follows:

QD(D_A) = {ti | ti(Dj) ≠ null, ∀Dj ∈ D_A}.  (8)

Similarly,

support(B) = |{ti | dij = bj, ∀bj ∈ B}| / |QD(D_B)|.  (9)

As defined in (3), support(A ⇒ B) is given by

support(A ⇒ B) = support(A ∪ B) = |{ti | dij = cj, ∀cj ∈ A ∪ B}| / |QD(D_A ∪ D_B)|.  (10)

confidence(A ⇒ B), as a measure of certainty to assess the validity of A ⇒ B, is calculated by

confidence(A ⇒ B) = |{ti | dij = cj, ∀cj ∈ A ∪ B}| / |{ti | dij = aj, ∀aj ∈ A}|.  (11)

If support(A) is calculated by (6) and the denominator of (10) is changed to r, clearly (10) can be proved to have the relation given by (4).
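A sketch of the qualified data QD(D_A) of (8) and the supports (7) and (10), over rows represented as dictionaries with None standing for null values (that representation is an assumption):

```python
def QD(rows, attrs):
    # (8): indices of records that are non-null on every attribute in attrs
    return [i for i, row in enumerate(rows)
            if all(row.get(a) is not None for a in attrs)]

def support(rows, dataset):
    # (7): dataset is a dict attribute -> required value, i.e. A = {a_j}
    hits = sum(1 for row in rows
               if all(row.get(a) == v for a, v in dataset.items()))
    return hits / len(QD(rows, dataset.keys()))

def rule_support(rows, A, B):
    # (10): support of A ∪ B with denominator |QD(D_A ∪ D_B)|
    merged = {**A, **B}
    return support(rows, merged)
```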
A and B in the previous discussion are datasets in which each element is an atomic crisp value. To provide generalized multidimensional association rules, instead of an atomic crisp value we may consider each element of the datasets to be a set of data values of a certain domain attribute. Hence, A and B are sets of sets of data values. For example, the rule may be represented by

Rule-5: Age(X, "20…60") ∧ Smk(X, "yes") ⇒ Dis(X, "bronchitis, lung cancer"),

where A = {{20…29}, {yes}} and B = {{bronchitis, lung cancer}}. Simply, let A be a generalized dataset. Formally, A is given by

A = {Aj | Aj ⊆ Dj, for some j ∈ Nn}.

Corresponding to (7), the support of A is then defined by:

support(A) = |{ti | dij ∈ Aj, ∀Aj ∈ A}| / |QD(D_A)|.  (12)

Similar to (10),

support(A ⇒ B) = support(A ∪ B) = |{ti | dij ∈ Cj, ∀Cj ∈ A ∪ B}| / |QD(D_A ∪ D_B)|.  (13)

Finally, confidence(A ⇒ B) is defined by

confidence(A ⇒ B) = |{ti | dij ∈ Cj, ∀Cj ∈ A ∪ B}| / |{ti | dij ∈ Aj, ∀Aj ∈ A}|.  (14)

To provide still more generalized multidimensional association rules, we may consider A and B as sets of fuzzy labels. Simply, A and B are called fuzzy datasets. Rule-4 is an example of such a rule, where A = {young, yes} and B = {bronchitis}. A fuzzy dataset is a set of fuzzy data consisting of several distinct fuzzy labels, where each fuzzy label is represented by a fuzzy set on a certain domain attribute. Let A be a fuzzy dataset. Formally, A is given by
A = {Aj | Aj ∈ F(Dj), for some j ∈ Nn},

where F(Dj) is the fuzzy power set of Dj; in other words, Aj is a fuzzy set on Dj. Corresponding to (7), the support of A is then defined by:

support(A) = (Σ_{i=1}^{r} inf_{Aj∈A} {Aj(dij)}) / |QD(D_A)|.  (15)

Similar to (10),

support(A ⇒ B) = support(A ∪ B) = (Σ_{i=1}^{r} inf_{Cj∈A∪B} {Cj(dij)}) / |QD(D_A ∪ D_B)|.  (16)

confidence(A ⇒ B) is defined by

confidence(A ⇒ B) = (Σ_{i=1}^{r} inf_{Cj∈A∪B} {Cj(dij)}) / (Σ_{i=1}^{r} inf_{Aj∈A} {Aj(dij)}).  (17)

Finally, correlation(A ⇒ B) is defined by

correlation(A ⇒ B) = (Σ_{i=1}^{r} inf_{Cj∈A∪B} {Cj(dij)}) / (Σ_{i=1}^{r} inf_{Aj∈A} {Aj(dij)} × inf_{Bk∈B} {Bk(dik)}).  (18)

Similarly, if the denominators of (15) and (16) are changed to r (the number of tuples), (17) can also be proved to have the relation given by (4). Here, we may consider and prove that (16) and (17) are generalizations of (13) and (14), respectively. On the other hand, (13) and (14) are generalizations of (10) and (11).
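The fuzzy measures (15)-(17) can be sketched in the same style; here each fuzzy dataset maps an attribute name to a membership function, and rows with null values are excluded as in QD (the null handling is an assumption):

```python
def fuzzy_support(rows, A):
    # (15): sum of infimum memberships over qualified rows, / |QD(D_A)|
    num = sum(min(mu(r[a]) for a, mu in A.items())
              for r in rows if all(r.get(a) is not None for a in A))
    qd = sum(1 for r in rows if all(r.get(a) is not None for a in A))
    return num / qd

def fuzzy_confidence(rows, A, B):
    # (17): ratio of the inf-sums for A ∪ B and for A
    union = {**A, **B}
    num = sum(min(mu(r[a]) for a, mu in union.items())
              for r in rows if all(r.get(a) is not None for a in union))
    den = sum(min(mu(r[a]) for a, mu in A.items())
              for r in rows if all(r.get(a) is not None for a in A))
    return num / den

# e.g. with the age labels defined earlier:
# fuzzy_support(rows, {"age": young, "smoke": lambda v: 1.0 if v == "yes" else 0.0})
```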
4. Fuzzy Decision Tree Induction (FDT)

Based on the type of data, we may classify DTI into two types, namely crisp and fuzzy DTI. The two are compared based on generalization capability in [15]. The result shows that the Fuzzy Decision Tree (FDT) is better than the Crisp Decision Tree (CDT) in providing numeric attribute classification. A fuzzy decision tree formed by FID3, combined with fuzzy clustering (to form membership functions) and validated clusters (to decide granularity), is also better than a pruned decision tree, where the pruned decision tree is considered as a crisp enhancement [14]. Therefore, in our research work, the development of a disease track record analyzer application, we propose a kind of FDT using a fuzzy approach.

An information gain measure [1] is used in this research to select the test attribute at each node in the tree. Such a measure is referred to as an attribute selection measure or a measure of the goodness of split. The attribute with the highest information gain (or greatest entropy reduction) is chosen as the test attribute for the current node. This attribute minimizes the information needed to classify the samples in the resulting partitions and reflects the least randomness or impurity in these partitions. In order to process crisp data, the concept of the information gain measure is defined in [1] by the following definitions.

Let S be a set consisting of s data samples. Suppose the class label attribute has m distinct values defining m distinct classes Ci (for i = 1,…,m), and let si be the number of samples of S in class Ci. The expected information needed to classify a given sample is given by

I(s1, s2, …, sm) = −Σ_{i=1}^{m} pi log2(pi),  (19)

where pi is the probability that an arbitrary sample belongs to class Ci and is estimated by si/s.

Let attribute A have v distinct values {a1, a2, …, av}. Attribute A can be used to partition S into v subsets {S1, S2, …, Sv}, where Sj contains those samples in S that have value aj of A. If A were selected as the test attribute, then these subsets would correspond to the branches grown from the node containing the set S. Let sij be the number of samples of class Ci in a subset Sj. The entropy, or expected information based on the partitioning into subsets by A, is given by

E(A) = Σ_{j=1}^{v} ((s1j + … + smj) / s) · I(s1j, …, smj).  (20)

The term (s1j + … + smj)/s acts as the weight of the j-th subset and is the number of samples in the subset divided by the total number of samples in S. The smaller the entropy value, the greater the purity of the subset partitions. The encoding information that would be gained by branching on A is

Gain(A) = I(s1, s2, …, sm) − E(A).  (21)

In other words, Gain(A) is the expected reduction in entropy caused by knowing the value of attribute A.
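A direct implementation of (19)-(21) over a list of (attribute value, class label) samples:

```python
from collections import Counter
from math import log2

def info(labels):
    # (19): expected information of a class distribution
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain(samples):
    # samples: list of (attribute value, class label) pairs
    labels = [c for _, c in samples]
    by_value = {}
    for v, c in samples:
        by_value.setdefault(v, []).append(c)
    # (20): entropy of the partition induced by the attribute
    e = sum(len(part) / len(samples) * info(part)
            for part in by_value.values())
    return info(labels) - e          # (21)

print(gain([("sunny", "no"), ("sunny", "no"), ("rain", "yes")]))  # ~0.918
```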
When using fuzzy values, the concept of information gain as defined in (19) to (21) is extended to the following concept. Let S be a set consisting of s data samples. Suppose the class label attribute has m distinct values vi (for i = 1,…,m), defining m distinct classes Ci (for i = 1,…,m). Also suppose there are n meaningful fuzzy labels Fj (for j = 1,…,n) defined on the m distinct values vi, where Fj(vi) denotes the membership degree of vi in the fuzzy set Fj. Here, Fj (for j = 1,…,n) is defined to satisfy the following property:

Σ_{j=1}^{n} Fj(vi) = 1, ∀i ∈ {1,…,m}.

Let βj be a weighted sample corresponding to Fj, as given by βj = Σ_{i=1}^{m} det(Ci) × Fj(vi), where det(Ci) is the number of
elements in Ci. The expected information needed to classify a given weighted sample is given by

I(β1, β2, …, βn) = −Σ_{j=1}^{n} pj log2(pj),  (22)

where pj is estimated by βj/s.

Let attribute A have u distinct values {a1, a2, …, au}, defining u distinct classes Bh (for h = 1,…,u). Suppose there are r meaningful fuzzy labels Tk (for k = 1,…,r) defined on A. Similarly, Tk also satisfies the following property:

Σ_{k=1}^{r} Tk(ah) = 1, ∀h ∈ {1,…,u}.

If A were selected as the test attribute, then these fuzzy subsets would correspond to the branches grown from the node containing the set S. The entropy, or expected information based on the partitioning into subsets by A, is given by

E(A) = Σ_{k=1}^{r} ((α1k + … + αnk) / s) · I(α1k, …, αnk),  (23)

where αjk is the intersection between Fj and Tk defined on the data sample S, as follows:

αjk = Σ_{h=1}^{u} Σ_{i=1}^{m} min(Fj(vi), Tk(ah)) × det(Ci ∩ Bh).  (24)

Similar to (22), I(α1k, …, αnk) is defined as follows:

I(α1k, …, αnk) = −Σ_{j=1}^{n} pjk log2(pjk),  (25)

where pjk is estimated by αjk/s. Finally, the encoding information that would be gained by branching on A is

Gain(A) = I(β1, β2, …, βn) − E(A).  (26)

Since fuzzy sets are considered a generalization of crisp sets, it can be proved that equations (22) to (26) are also generalizations of equations (19) to (21).
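A sketch of the fuzzy gain (22)-(26). Here counts[(i, h)] plays the role of det(Ci ∩ Bh), F and T are lists of membership functions; following the crisp definition (20), the probabilities inside (25) are normalized within each branch (that normalization is an interpretation of the text):

```python
from math import log2

def fuzzy_gain(F, T, values, attrs, counts):
    s = sum(counts.values())
    # (22): weighted samples beta_j = sum_i det(C_i) * F_j(v_i)
    beta = [sum(counts[(i, h)] * Fj(values[i]) for (i, h) in counts)
            for Fj in F]
    I_beta = -sum((b / s) * log2(b / s) for b in beta if b > 0)
    E = 0.0
    for Tk in T:
        # (24): alpha_jk via the min (fuzzy intersection) of F_j and T_k
        alpha = [sum(min(Fj(values[i]), Tk(attrs[h])) * counts[(i, h)]
                     for (i, h) in counts) for Fj in F]
        w = sum(alpha)
        if w == 0:
            continue
        # (25), normalized within the branch as in (20)
        I_alpha = -sum((a / w) * log2(a / w) for a in alpha if a > 0)
        E += (w / s) * I_alpha                 # (23)
    return I_beta - E                          # (26)
```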
5. Mining Fuzzy Association Rules from FDT

Association rules are kinds of patterns representing correlations of attribute-values (items) in a given set of data provided by a process of a data mining system. Generally, an association rule is a conditional statement (a kind of if-then rule). The performance or interestingness of an association rule is generally determined by three factors, namely the confidence, support and correlation factors. Confidence is a measure of certainty to assess the validity of the rule. The support of an association rule refers to the percentage of relevant data tuples (or transactions) for which the pattern of the rule is true. The correlation factor is another kind of measure to evaluate the correlation between two entities.

Related to the proposed concept of FDT as discussed in Section 4, the fuzzy association rule Tk ⇒ Fj can be generated from the FDT. The confidence, support and correlation of Tk ⇒ Fj are given by

confidence(Tk ⇒ Fj) = (Σ_{h}Σ_{i} min(Fj(vi), Tk(ah)) × det(Ci ∩ Bh)) / (Σ_{h} Tk(ah) × det(Bh)),  (27)

support(Tk ⇒ Fj) = (Σ_{h}Σ_{i} min(Fj(vi), Tk(ah)) × det(Ci ∩ Bh)) / s,  (28)

correlation(Tk ⇒ Fj) = (Σ_{h}Σ_{i} min(Fj(vi), Tk(ah)) × det(Ci ∩ Bh)) / (Σ_{h}Σ_{i} Fj(vi) × Tk(ah) × det(Ci ∩ Bh)).  (29)

To provide more generalized fuzzy multidimensional association rules as proposed in [6], the process starts from a single table (relation) as a source of data representing relations among item data. In general, R can be shown in Table 1 (see Section 2). Now, we consider χ and ψ as subsets of fuzzy labels. Simply, χ and ψ are called fuzzy datasets. A fuzzy dataset is a set of fuzzy data consisting of several distinct fuzzy labels, where each fuzzy label is represented by a fuzzy set on a certain domain attribute. Formally, χ and ψ are given by

χ = {Fj | Fj ∈ Ω(Dj), ∃j ∈ Nn} and ψ = {Fj | Fj ∈ Ω(Dj), ∃j ∈ Nn},

where there are n domains of data and Ω(Dj) is a fuzzy power set of Dj; in other words, Fj is a fuzzy set on Dj. The confidence, support and correlation of χ ⇒ ψ are given by

support(χ ⇒ ψ) = (Σ_{i=1}^{s} inf_{Fj∈χ∪ψ} {Fj(dij)}) / s,  (30)

confidence(χ ⇒ ψ) = (Σ_{i=1}^{s} inf_{Fj∈χ∪ψ} {Fj(dij)}) / (Σ_{i=1}^{s} inf_{Fj∈χ} {Fj(dij)}),  (31)

correlation(χ ⇒ ψ) = (Σ_{i=1}^{s} inf_{Fj∈χ∪ψ} {Fj(dij)}) / (Σ_{i=1}^{s} inf_{Aj∈χ} {Aj(dij)} × inf_{Bk∈ψ} {Bk(dik)}).  (32)

Here, (30), (31) and (32) are correlated to (16), (17) and (18), respectively.
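The rule measures (27)-(29) reuse the same counts as the gain sketch above; a minimal sketch with det_B[h] standing for det(Bh):

```python
def rule_measures(Fj, Tk, values, attrs, counts):
    s = sum(counts.values())
    # shared numerator of (27)-(29)
    num = sum(min(Fj(values[i]), Tk(attrs[h])) * c
              for (i, h), c in counts.items())
    det_B = {}
    for (i, h), c in counts.items():
        det_B[h] = det_B.get(h, 0) + c
    conf_den = sum(Tk(attrs[h]) * d for h, d in det_B.items())   # (27)
    corr_den = sum(Fj(values[i]) * Tk(attrs[h]) * c
                   for (i, h), c in counts.items())              # (29)
    # returns (confidence, support, correlation)
    return num / conf_den, num / s, num / corr_den
```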
6. FDT Algorithms and Results

The research was conducted based on the Software Development Life Cycle method. The conceptual framework of the application design is shown in Figure 1. The input for the developed application is a single table that is produced by the denormalization process from a relational database. The main algorithm for the mining association rule process, i.e.
Decision Tree Induction, is shown in Figure 4.

For i = 0 to the total level
    Check whether the level had already split
    If the level has not yet split Then
        Check whether the level can still be split
        If the level can still be split Then
            Call the procedure to calculate information gain
            Select a field with the highest information gain
            Get a distinct value of the selected field
            Check the total distinct value
            If the distinct value is equal to one Then
                Create a node with a label from the value name
            Else
                Check the total fields that are potential to become a current test attribute
                If no field can be a current test attribute Then
                    Create a node with a label from the majority value name
                Else
                    Create a node with a label from the selected value name
                End If
            End If
        End If
    End If
End For
Save the input create tree activity into the database

Figure 4. The generating decision tree algorithm
Furthermore, the procedure for calculating information gain, implementing equations (22), (23), (24), (25) and (26), is shown in Figure 5. Based on the highest information gain, the application can develop the decision tree, which the user can display or print. The rules can then be generated from the generated decision tree. Equations (27), (28) and (29) are used to calculate the interestingness or performance of every rule. The number of rules can be reduced based on their degrees of support, confidence and correlation compared to the minimum values of support, confidence and correlation determined by the user.

Calculate gain for a field as a root
Count the number of distinct value field
For i = 0 to the number of distinct value field
    Count the number of distinct value root field
    For j = 0 to the number of distinct value root field
        Calculate the gain field using equation (4) and (8)
    End For
    Calculate entropy field using equation (5)
End For
Calculate information gain field

Figure 5. The procedure to calculate information gain

Figure 6. The generated decision tree

In this research, we implement two data types as fuzzy sets, namely alphanumeric and numeric. An example of an alphanumeric data type is disease: we can define some meaningful fuzzy labels of disease, such as poor disease, moderate disease and severe disease, and every fuzzy label is represented by a given fuzzy set. The age of patients is an example of a numeric data type; age may have some meaningful fuzzy labels such as young and old. Figure 6 shows an example result of FDT applied to three domains (attributes) of data, namely Death, Age and Disease.

7. Conclusion

The paper discussed and proposed a method to extend the concept of Decision Tree Induction using fuzzy values. Some generalized formulas to calculate information gain were introduced. In the process of mining fuzzy association rules, some equations were proposed to calculate the support, confidence and correlation of a given association rule. Finally, an algorithm was briefly given to show the process of how to generate the FDT.

Acknowledgment

This research was supported by the research grants Hibah Kompetensi (25/SP2H/PP/DP2M/V/2009) and Penelitian Hibah Bersaing (110/SP2H/PP/DP2M/IV/2009) from the Indonesian Higher Education Directorate.

References
[1] J. Han, M. Kamber, Data Mining: Concepts and Techniques, The Morgan Kaufmann Series, 2001.
[2] G. J. Klir, B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications, New Jersey: Prentice Hall, 1995.
[3] R. Intan, "An Algorithm for Generating Single Dimensional Association Rules," Jurnal Informatika, Vol. 7, No. 1, May 2006.
[4] R. Intan, "A Proposal of Fuzzy Multidimensional Association Rules," Jurnal Informatika, Vol. 7, No. 2, November 2006.
[5] R. Intan, "A Proposal of an Algorithm for Generating Fuzzy Association Rule Mining in Market Basket Analysis," Proceedings of CIRAS (IEEE), Singapore, 2005.
[6] R. Intan, "Generating Multi Dimensional Association Rules Implying Fuzzy Values," The International Multi-Conference of Engineers and Computer Scientists, Hong Kong, 2006.
[7] R. Intan, O. Y. Yuliana, "Fuzzy Decision Tree Approach for Mining Fuzzy Association Rules," 16th International Conference on Neural Information Processing, to appear, 2009.
[8] O. P. Gunawan, Perancangan dan Pembuatan Aplikasi Data Mining dengan Konsep Fuzzy c-Covering untuk Membantu Analisis Market Basket pada Swalayan X (in Indonesian), Final Project, 2004.
[9] L. A. Zadeh, "Fuzzy Sets and Systems," International Journal of General Systems, Vol. 17, pp. 129-138, 1990.
[10] R. Agrawal, T. Imielinski, A. N. Swami, "Mining Association Rules between Sets of Items in Large Databases," Proceedings of the ACM SIGMOD International Conference on Management of Data, ACM Press, pp. 207-216, 1993.
[11] R. Agrawal, R. Srikant, "Fast Algorithms for Mining Association Rules in Large Databases," Proceedings of the 20th International Conference on Very Large Databases, Morgan Kaufmann, pp. 487-499, 1994.
[12] H. V. Pesiwarissa, Perancangan dan Pembuatan Aplikasi Data Mining dalam Menganalisa Track Records Penyakit Pasien di DR. Haulussy Ambon Menggunakan Fuzzy Association Rule Mining (in Indonesian), Final Project, 2005.
[13] E. F. Codd, "A Relational Model of Data for Large Shared Data Banks," Communications of the ACM, 13(6), pp. 377-387, 1970.
[14] H. Benbrahim, B. Amine, "A Comparative Study of Pruned Decision Trees and Fuzzy Decision Trees," Proceedings of the 19th International Conference of the North American Fuzzy Information Processing Society, Atlanta, pp. 227-231, 2000.
[15] Y. D. So, J. Sun, X. Z. Wang, "An Initial Comparison of Generalization-Capability between Crisp and Fuzzy Decision Trees," Proceedings of the First International Conference on Machine Learning and Cybernetics, pp. 1846-1851, 2002.
[16] ALICE d'ISoft v.6.0 demonstration [Online]. Available at: http://www.alice-soft.com/demo/al6demo.htm [Accessed: 31 October 2007].
[17] T. M. Khoshgoftaar, Y. Liu, N. Seliya, "Genetic Programming-Based Decision Trees for Software Quality Classification," Proceedings of the 15th IEEE International Conference on Tools with Artificial Intelligence, California, pp. 374-383, 2003.

Authors Profile

Rolly Intan obtained his B.Eng. degree in computer engineering from Sepuluh Nopember Institute of Technology, Surabaya, Indonesia in 1991. He received his M.A. in information science from International Christian University, Tokyo, Japan in 2000, and his Doctor of Engineering in Computer Science from Meiji University, Tokyo, Japan in 2003. He is now a professor in the Department of Informatics Engineering at Petra Christian University, Surabaya, Indonesia. His primary research interests are in data mining, intelligent information systems, fuzzy sets, rough sets and fuzzy measure theory.

Oviliani Yenty Yuliana is an associate professor at the Department of Informatics Engineering, Faculty of Industrial Technology, Petra Christian University, Surabaya, Indonesia. She received her B.Eng. in Computer Engineering from Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia, and her Master of Science in Computer Information Systems from Assumption University, Bangkok, Thailand. Her research interests are database systems and data mining.

Andreas Handojo obtained his B.Eng. degree in electronic engineering from Petra Christian University, Surabaya, Indonesia in 1999. He received his master's degree in Information Technology Management from Sepuluh Nopember Institute of Technology, Surabaya, Indonesia, in 2007. He is now a lecturer in the Department of Informatics Engineering at Petra Christian University. His primary research interests are in data mining, business intelligence, strategic information system planning, and computer networks.
A Collaborative Framework for Human-Agent Systems

Moamin Ahmed1, Mohd Sharifuddin Ahmad2 and Mohd Zaliman Mohd Yusoff3

1 College of Information Technology, Universiti Tenaga Nasional, Km 7 Jalan Kajang-Puchong, 43009 Kajang, Selangor, Malaysia, momen42@yahoo.com
2 College of Information Technology, Universiti Tenaga Nasional, Km 7 Jalan Kajang-Puchong, 43009 Kajang, Selangor, Malaysia, sharif@uniten.edu.my
3 College of Information Technology, Universiti Tenaga Nasional, Km 7 Jalan Kajang-Puchong, 43009 Kajang, Selangor, Malaysia, zaliman@uniten.edu.my
Abstract: In this paper, we demonstrate the use of software agents in assisting humans to comply with the deadlines of a collaborative work process. Software agents take over the communication between agents and the tasks of reminding and alerting humans to comply with scheduled tasks. We use the FIPA agent communication protocol to implement communication between agents. An interface for each agent provides the means for humans to communicate with their agents and to delegate mundane tasks to them.

Keywords: intelligent software agents, multiagent systems, workflow, collaboration.

1. Introduction

In a human-centric collaboration, adhering to deadlines presents a major problem. The diversity of tasks imposed on humans and the procedures attached to them pose a major challenge in keeping the time to implement scheduled tasks. One way of overcoming this problem is to use a scheduler or a time management system which keeps track of deadlines and provides reminders for time-critical tasks. Other researchers have developed agent-based solutions to resolve similar problems in workflow systems [18], [19], [20], [21]. However, such systems do not always provide the needed assistance to perform mundane follow-up tasks and resolve delays caused by humans. In this paper, we demonstrate the development and application of software agents to implement the collaborative work of the Examination Paper Preparation and Moderation Process (EPMP) in our academic faculty. We use the FIPA agent communication language (ACL) to implement communication between agents [3], [4]. An interface for each agent provides a convenient means for humans to delegate mundane tasks to software agents. The use of such an interface and the subsequent communication performed by and between agents contribute to the achievement of a shared goal, i.e. the completion of the examination paper preparation and moderation process within the stipulated time.

We use the FIPA ACL to demonstrate the usefulness of the agents in taking over the timing and execution of communication from humans. However, the important tasks, i.e. the preparation and moderation tasks, are still performed by humans. The agents continuously urge human actors to complete the tasks by the deadline and execute communicative acts to other agents when the tasks are completed.

This paper reports an extension to our previous work in the same project [1]. Section 2 of this paper briefly dwells on the issues and problems relating to the EPMP. Section 3 reviews the related work on this project. In Section 4, we develop and present our framework to resolve the problems of the EPMP. Section 5 discusses the development and testing of the system, and Section 6 concludes the paper.

2. Issues and Problems in EPMP

The EPMP is the standard process of our faculty for examination paper preparation and moderation. The process starts when the Examination Committee (EC) sends out an instruction to start preparing examination papers. A Lecturer then prepares the examination paper, together with the solutions and the marking scheme (Set A). Upon completion, he submits the set to be checked by an appointed Moderator.

The Moderator checks the set and returns it to the Lecturer with a moderation report (Set B). If there are no corrections, the Lecturer submits the set to the Examination Committee for further actions. Otherwise, the Lecturer needs to correct the paper and resubmit the corrected paper to the Moderator for inspection. If corrections have been made, the Moderator returns the set to the Lecturer. Finally, the Lecturer submits the set to the Committee for further processing. Figure 1 (below) shows the process flow for the EPMP.

The Lecturer and Moderator are given deadlines to complete the process as shown in Table 1. The process continues over a period of four weeks in two preparation-moderation-correction cycles.
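The flow just described can also be summarized as a small state-transition table. The sketch below is our own illustrative rendering of Figure 1, written in Python purely for exposition; the state names are invented for the example and do not come from the paper.

    # EPMP process flow as a simple transition map (illustrative names only).
    EPMP_FLOW = {
        'EC_INSTRUCTS':            'LECTURER_PREPARES_SET_A',
        'LECTURER_PREPARES_SET_A': 'MODERATOR_CHECKS',
        'MODERATOR_CHECKS':        'LECTURER_REVIEWS_REPORT',  # Set B returned with a report
    }

    def next_state(state, corrections_needed=False):
        # Corrected papers go back to the Moderator; clean papers go to the EC.
        if state == 'LECTURER_REVIEWS_REPORT':
            return 'MODERATOR_CHECKS' if corrections_needed else 'EC_FURTHER_ACTION'
        return EPMP_FLOW[state]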
Figure 1. The EPMP Process Flow

Table 1: Typical Schedule for Examination Paper Preparation and Moderation

    Tasks                                                   Deadlines
    Set A should be submitted to the respective moderators  Week 10
    1st moderation cycle                                    Week 10 & 11
    2nd moderation cycle (Set B)                            Week 12 & 13
    Set B should be submitted to EC                         Week 14

Lack of enforcement and the diverse tasks of lecturers and moderators caused the EPMP to suffer from delays in action by the academicians. Lecturers wait until the last few days of the second cycle to submit their examination papers, which leaves insufficient time for the moderators to scrutinize the papers qualitatively. Due to the manual nature of the process, there are no mechanisms which record the adherence to deadlines and track the activities of defaulters.

To resolve some of these problems, we resort to the use of software agents to take over the communication tasks between agents and the reminding and alerting tasks directed at humans. We describe these functions in greater detail in Section 4.3.

3. Related Work

3.1 Agents and Agent Communication Language

The development of our system is based on the work of many researchers in agent-based systems. For example, agent communication and its semantics have been established by research in speech act theory [14], [20], KQML [3], [14] and FIPA ACL [4], [5], [9]. We based our design of agent communication on the standard agent communication protocol of FIPA [4], [5] and its semantics [6]. FIPA ACL is consistent with the mentalistic notion of agents in that a message is intended to communicate attitudes about information such as beliefs, goals, etc. Belief, Desire, and Intention (BDI) is a mature and commonly adopted architecture for intelligent agents [12]; FIPA ACL messages use BDI to define their semantics [6]. Cohen and Perrault [18] view a conversation as a sequence of actions performed by the participants, intentionally affecting each other's model of the world, primarily their beliefs and goals.

While KQML and FIPA ACL epitomize agent communication, many researchers have developed other techniques of agent communication. Payne et al. [17] propose a shallow parsing mechanism that provides message templates for use in message construction. This approach alleviates the constraint of a common ACL between agents and supports communication between open multiagent systems. Chen and Su [2] develop Agent Gateway, which translates agent communication messages from one multiagent system into an XML-based intermediate message; this message is then translated into messages for other multiagent systems.

Pasquier and Chaib-draa [16] offer the cognitive coherence theory for agent communication pragmatics. The theory is proposed as a new layer above classical cognitive agent architecture and supplies theoretical and practical elements for automating agent communication.

3.2 Workflow Systems

Software agents have also been applied in workflow systems to resolve some specific issues. Many business processes use workflow systems to exploit their known benefits such as automation, co-ordination and collaboration between entities. Savarimuthu et al. [19] and Fleurke et al. [7] describe the advantages of their agent-based framework JBees, such as distribution, flexibility and the ability to dynamically incorporate a new process model. Research has also been done on the monitoring and controlling of workflow [19]. Wang and Wang [21], for example, propose agent-based monitoring in their workflow system.

Our framework extends the capabilities of these systems by employing a mechanism that enforces and motivates humans in the process loop to comply with the deadlines of scheduled tasks. We implement this mechanism by establishing a merit and demerit point system which rates a human's compliance with deadlines.

3.3 Ontology

The term ontology was first used to describe the philosophical study of the nature and organization of reality [11], [12]. In AI it is simply defined as "an explicit specification of a conceptualization" [10]. This definition provokes many controversies within the AI community, especially with regard to the meaning of conceptualization. An ontology associates vocabulary terms with entities identified in the conceptualization and provides definitions to constrain the interpretations of these terms.

Most researchers concede that an ontology must include a vocabulary and corresponding definitions, but there is no consensus on a more detailed characterization [13]. Typically, the vocabulary includes terms for classes and relations, while the definitions of these terms may be informal text, or may be specified using a formal language like predicate logic as implemented in [8].
FIPA ontology uses a specification of a representational vocabulary for a shared domain of discourse, involving definitions of classes, relations, functions, and other objects [5].

4. The Collaborative Framework

We develop our framework based on the four-phased cycle shown in Figure 2. The development process includes domain selection, domain analysis, tasks and message exchanges, and application.

Figure 2. The Four-Phased Development Cycle

4.1 Domain Selection

Our framework involves a working relationship between an agent and its human counterpart. Considering the nature of the tasks and the complexity of the work process, the EPMP seems to be a suitable platform on which to develop a multiagent framework. The mundane tasks of document submission, deadline reminding and work progress tracking could be delegated to software agents. Consequently, we chose the EPMP as the platform for our framework that contains both humans and agents. The goal of this collaborative process is to complete the preparation and moderation of examination papers.

4.2 Domain Analysis

Domain analysis consists of analyzing the process flow, identifying the entities and modeling the process. We have described and analyzed the process in Section 2 and will not repeat it here. For the purpose of our model, we create three agents that represent the Examination Committee (C), Moderator (M) and Lecturer (L). Figure 3 shows the architecture of our model.

Figure 3. The Model's Architecture

Humans communicate with their agents via an interface, and the corresponding agents monitor and update their environment to communicate with other agents, perform tasks that enable the progression of the workflow, and remind and alert their human counterparts to meet the deadlines. With this model, important human activities are recorded and tracked by the agents in their environment.

4.3 Tasks and Message Exchanges

An agent sends a message autonomously when some states of the environment are true. It performs the following actions to complete the message-sending task (see Figure 4):

Figure 4. Agent Actions

4.3.1 Check the State of the Environment

The agent always checks its environment, which consists of four parts:
• Status of uploaded files: The agent checks whether its user has uploaded Set A or Set B to a specified folder. If he has done so, the agent checks the next step.
• Status of deadlines: The agent checks the system's date every day and compares it with the deadline.
• Status of subprograms: When an agent performs a task, it records the actions in a subprogram, e.g. when the Committee agent sends the Prepare message, it records this event to use later for sending a remind message.
• Message signal: The agent opens a port and makes a connection when it senses a message coming from a remote agent.

4.3.2 Send Messages

The agent decides to send a message when some states of the environment are true; otherwise, it informs all agents of any delays by its user and penalizes its user with demerit points. When sending a message, it performs the following actions (a sketch of these actions is given after Section 4.4 below):
• Open Port (Connect): When the agent decides to send a message, it opens a port and makes a connection.
• Send Message (online or offline): The message will be received when the remote agent is online. Two problems may occur: (i) the remote agent is offline; (ii) the IP address of the remote agent is inadvertently changed. We resolve these problems by exploiting the Acknowledge performative. If the sending agent does not receive an Acknowledge message from the remote agent, it resends the message in offline mode. The same process is executed if the IP address is changed. We focus on these issues to ensure that the agents achieve the goal in any circumstances, because it relates to completing the students' examination papers.
• Register action, date and merit/demerit point: When the agent has sent the message, it registers the action and the date in a text file. It also evaluates the user by giving merit or demerit points based on the user's adherence to any deadlines. The Head of Department could access these points to evaluate the staff's commitment to the EPMP and take the necessary corrective action.
• Record in Subprograms: The agent records some actions as subprograms when it needs to execute those actions later.
• Close Port (Disconnect): The agent disconnects and closes the port when it has successfully sent the message.

4.4 Autonomous Collaborative Agents Application

We then apply the tasks and message exchanges to the EPMP domain. To facilitate readability, we represent the tasks and message exchanges for each agent as T#X and E#X respectively, where # is the task or message exchange number and X refers to the agents C, M, or L. A message from an agent is represented by µ#SR, where # is the message number, S is the sender of the message µ, and R is the receiver; S and R refer to the agents C, M, or L. For system tasks, CN#X refers to the task an agent performs to enable connection to a port, and DCN#X indicates a disconnection task.

We extend the state of the environment to include system parameters that enable agents to closely monitor the actions of their human counterparts. The side effect of this ability is improved autonomy for agents to make correct decisions, as well as an improved ability to implement one-to-many and many-to-many message exchanges, e.g. the inform_all message.
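As a concrete illustration of the Section 4.3.2 actions, the following minimal sketch strings together the connect, send, acknowledge-or-offline, register and merit/demerit steps. It is written in Python rather than the Win-Prolog used by the system, and the agent attributes (remote_ip, remote_port, log, points) and the helper send_offline are our own illustrative assumptions, not names from the implementation.

    import datetime
    import socket

    def send_message(agent, message, deadline):
        # Open Port (Connect)
        sock = socket.create_connection((agent.remote_ip, agent.remote_port))
        try:
            sock.sendall(message.encode())
            sock.settimeout(5.0)
            try:
                ack = sock.recv(1024)        # wait for the Acknowledge performative
            except socket.timeout:
                ack = b''
            if not ack:                      # remote agent offline, or its IP changed
                agent.send_offline(message)  # assumed helper: resend in offline mode
        finally:
            sock.close()                     # Close Port (Disconnect)
        # Register action, date and merit/demerit point
        today = datetime.date.today()
        agent.log.append((message, today))
        agent.points += 1 if today <= deadline else -1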
Based on the analysis of Section 4.2, we create the interaction sequence between the agents. However, due to space limitations and the complexity of the ensuing interactions, we only show sample interactions between the Committee (C) and the Lecturer (L) agents:

1. Agent C
CN1C : Agent C opens a port and enables a connection when the start date is satisfied.
E1C : C sends a message µ1CL to L – PREPARE examination paper.
    - Agent L sends an ACK message, µ1LC.
    - Agent C reads the ACK, checks the ontology and understands its meaning.
    - If the ACK message is not received, it sends an offline message.
T1C : Agent C registers the action and the date.
T2C : Agent C calculates the merit or demerit point and saves it for the Head of Department's evaluation.
DCN1C : Agent C disables the connection and closes the port.

When Agent C decides to send a remind message, it performs the following:
CN2C : Agent C connects to Agent L.
    - Agent C makes this decision by checking its environment (the date, status of the uploaded file and notices in subprograms).
E2C : C sends a REMIND message to Agent L.
    - Agent L receives the message and displays it on its screen to alert its human counterpart.
DCN2C : Agent C disconnects and closes the port when it completes the task.

2. Agent L
CN1L : Agent L opens a port and enables a connection when it receives the message from Agent C.
    - Agent L makes this decision by checking its environment (message signal).
    - Agent L reads the performative PREPARE, checks the ontology and understands its meaning.
E1L : Agent L replies with a message µ1LC to C – ACK.
T1L : Agent L displays the message µ1CL on the screen to alert its human counterpart.
T2L : Agent L opens and displays a new Word document on the screen.
    - Agent L opens a new document to signal its human counterpart to start writing the examination paper.
T3L : Agent L opens and displays the Word document of the Lecturer form on the screen.
    - Agent L opens the form which contains the policy to follow.
DCN1L : Agent L disconnects and closes the port.

When the human Lecturer uploads a completed examination paper via the interface, Agent L checks its environment (status of the uploaded file and the deadline). Agent L then decides to send a message:
E2L : Agent L sends a message µ2LM to M – REVIEW examination paper.
    - Agent M sends an ACK message, µ1ML.
    - Agent L checks the states of the environment.
T4L : Agent L registers the action and the date.
T5L : Agent L calculates and saves the merit or demerit points.

5. Systems Simulation and Testing

We simulate the EPMP using Win-Prolog and its extended module Chimera, which has the ability to handle multiagent systems [22]. We use Prolog for two reasons. Firstly, Prolog is well suited to expressing complex ideas because it focuses on the computation's logic rather than its mechanics, where the drudgery of memory allocation, stack pointers, etc. is left to the computational engine; reduced drudgery and compact expression mean that one can concentrate on what should be represented and how. Secondly, since Prolog incorporates a logical inferencing mechanism, this powerful property can be exploited to develop inference engines specific to a particular domain.

Chimera provides the module to implement peer-to-peer communication via TCP/IP. Each agent is identified by a port number and an IP address. Agents send and receive messages through such configurations.

We develop the collaborative process as a multiagent system of the EPMP based on the above framework and test the simulation in a laboratory environment on a Local Area Network. Each of the agents C, M and L runs on a PC connected to the network. The simulation executes communication based on the tasks outlined in Section 4.4. For message development, we use the parameters specified by the FIPA ACL Message Structure Specification [4]. We include the performatives, the mandatory parameter, in all our ACL messages. We also define and use our own performatives in the message structure, which are Prepare, Check, Remind, Review, Complete, Modify, ACK, Advertise, and Inform_all. To complete the structure, we include the message, content and conversational control parameters as stipulated by the FIPA Specification.

The communication between agents is based on the BDI semantics as defined by FIPA [6]. The BDI semantics gives the agents the ability to know how to arrange the steps to achieve the goal:
• Belief: When an agent wants to send a message, it checks its belief of which agent can perform the required action.
• Desire: Achieving the goal completely is the desire of all agents. The agents will never stop until they have achieved the goal. The agents' goal is to complete the examination paper preparation and moderation, and they will know this from the Committee agent's final message.
• Intention: Intentions are courses of action an agent has committed to carry out. The agent's intention results from its belief and a goal to achieve. Consequently, the agents take actions such as sending the Prepare message, the Remind message, etc.

We show four samples of performatives used in the framework (Prepare, ACK, Review, Remind). The communicative act definitions for each of these performatives are as follows:

• Prepare: The sender advises the receiver to start preparing the examination paper by performing some actions to enable its human counterpart to do so. The content of the message is a description of the action to be performed. The receiver understands the message and is capable of performing the action. The Prepare performative is time-dependent.

    prepare(
        ':sender', committee,
        ':receiver', lecturer,
        ':reply-with', task_completed,
        ':content', start_prepare_examination_paper,
        ':ontology', word_documents,
        ':language', prolog )

• ACK: The receiver acknowledges to the sender that it has received the message. We use acknowledge for the message state: if the sender receives an acknowledge, it means that the receiver is online and has received the message; otherwise the receiver is offline, and the sender will resend the message in offline mode. The content of the message is a description of the action to be performed, which the receiver understands and is capable of performing. The ACK performative depends on the receiving message signal.

    ack(
        ':sender', committee,
        ':receiver', lecturer,
        ':in-reply-to', task_completed,
        ':content', acknowledge_message,
        ':ontology', message,
        ':language', prolog )

• Review: The sender advises the receiver to review the examination paper by performing some actions to enable its human counterpart to do so. The content of the message is a description of the action to be performed, which the receiver understands and is capable of performing. The Review performative depends on the deadline and the status of the uploaded file.

    review(
        ':sender', lecturer,
        ':receiver', moderator,
        ':reply-with', task_completed,
        ':content', review_examination_paper,
        ':ontology', word_documents,
        ':language', prolog )

• Remind: The sender advises the receiver to perform a very important task, e.g. to submit Set A or Set B. The Remind performative depends on the deadline, the status of the uploaded file and notices in subprograms.

    remind(
        ':sender', committee,
        ':receiver', lecturer,
        ':reply-with', task_completed,
        ':content', remind_message,
        ':ontology', message,
        ':language', prolog )
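The same message shapes can also be written language-neutrally. The sketch below mirrors the prepare(...) term above as a plain Python dictionary and derives the corresponding ACK; this is purely illustrative and not part of the Win-Prolog system.

    # A FIPA-ACL-style message as a dictionary, mirroring the prepare(...) term above.
    prepare_msg = {
        'performative': 'prepare',
        'sender':       'committee',
        'receiver':     'lecturer',
        'reply-with':   'task_completed',
        'content':      'start_prepare_examination_paper',
        'ontology':     'word_documents',
        'language':     'prolog',
    }

    def make_ack(msg):
        # Build the ACK a receiver returns so the sender knows it is online.
        return {
            'performative': 'ack',
            'sender':       msg['receiver'],
            'receiver':     msg['sender'],
            'in-reply-to':  msg['reply-with'],
            'content':      'acknowledge_message',
            'ontology':     'message',
            'language':     'prolog',
        }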
We reproduce below the sample code that implements a communicative act informing the Committee agent that the EPMP process has been completed:

    complete_lecturer_dialog_handler :-
        agent_link( Agent, Link ),
        Complete =
            complete(
                ':sender', lecturer,
                ':receiver', committee,
                ':reply-with', task_completed,
                ':content', complete_prepare_examination_paper,
                ':ontology', word_documents,
                ':language', prolog ), nl,
        agent_post( Agent, Link, Complete ),
        nl, check_points,
        open('c:\4', append),
        tell('c:\4').

For ontology development, we implicitly encode our ontologies in the actual software implementation of the agents themselves; thus they are not formally published as an ontology service [5]. The sample code below shows the ontology implementation after the Committee agent (C) receives the Complete message from the Lecturer agent (L):

    committee_handler( Name, Link, complete(|Args) ) :-
        committee_dialog_handler( (committee,1006), msg_button, _, _ ),
        repeat, wait( 0 ),
        ( complete_prepare_examination_paper ),
        fipa_member( ':sender', From, Args ),
        fipa_member( ':reply-with', ReplyWith, Args ),
        committee_reply( Name, From, ReplyWith, done, Reply ),
        agent_post( Name, Link, Reply ),
        timer_create( clock3, clock_hook3 ),
        timer_set( clock3, 1000 ).

    % ontology call
    complete_prepare_examination_paper :-
        repeat, wait( 0 ),
        committee_acknowledge_remote_agent,
        examination_paper,
        committee_form,
        committee_message.

Due to space limitations, we only show the ontology for the examination paper:

    examination_paper :-
        absolute_file_name( system(ole), File ),
        ensure_loaded( File ),
        ole_initialize,
        ole_create( word, 'word.application' ),
        ole_get_property( word, documents, [], WordDocuments ),
        assert( my_object(word_documents, WordDocuments) ),
        ole_put_property( word, visible, -1 ),
        my_object( word_documents, WordDocuments ),
        absolute_file_name( 'C:\database\Examination Paper.docx', FileName ),
        ole_function( WordDocuments, open, [FileName], SecondDocument ),
        assert( my_object(second_document, SecondDocument) ).

To test the collaborative system, we deploy human actors to perform the roles of Committee, Lecturer and Moderator. These people communicate with their corresponding agents to advance the workflow. An interface for each agent provides the communication between the human actors and the agents (see Figure 5).

Figure 5. A Lecturer Agent Interface

The test produces the following results. On the set date, the Committee agent sends the Prepare message to the Lecturer agent. The Lecturer agent acknowledges the receipt of the message and then shows the message on the screen for its human counterpart. It then opens a new Word document for the examination paper and another document displaying the guidelines for preparing the examination paper. While the human lecturer simulates the preparation of the examination paper, the Committee agent sends a reminder to the Lecturer agent, which displays the reminder on the screen for the human lecturer. When the human lecturer uploads the completed examination paper with the user interface (see Figure 5), the Lecturer agent checks the date, calculates the merit/demerit points and sends the Review message to the Moderator agent.

The Moderator agent acknowledges the receipt of the message, displays the message on the screen for its human counterpart, and opens the examination paper and the moderation form. While the human moderator simulates the moderation of the examination paper, the Committee agent sends a reminder to the Moderator agent, which displays the reminder on the screen for the human moderator.
When the human moderator uploads the completed moderation form and the moderated examination paper with the user interface, the Moderator agent checks the date, calculates the merit/demerit points and sends the Check message to the Lecturer agent.

The Lecturer agent acknowledges the message, displays the message on the screen for its human lecturer and opens the moderated examination paper and the completed moderation form. The human lecturer checks the moderation form to know if there are corrections to be made. In this test, we do not simulate any corrections. The human lecturer then uploads the moderation form and the moderated examination paper. The Lecturer agent then checks the date, calculates the merit/demerit points and sends a Complete message to the Committee agent.

The Committee agent acknowledges the message, displays the message on the screen for its human counterpart and opens the Committee form and the moderated examination paper. The human committee then uploads the moderated examination paper to the EC Print File. The Committee agent then sends an inform-all message to all agents that the EPMP process is completed.

This simulation shows that, with the features and autonomous actions performed by the agents, the collaboration between the human Committee, Lecturer and Moderator improves significantly. The agents register dated actions, remind humans about the deadlines, advertise to all agents if there is no submission when the deadline has expired, and award or penalize merit/demerit points to humans.

The human's cognitive load is reduced because the deadlines of important tasks and the documents' destinations no longer need to be remembered; the consistent alerting services provided by the agents ensure constant reminders of the deadlines.

All these actions and events are recorded in the agent environment to keep track of the process flow, which enables the agents to resolve any impending problems. The ease of uploading the files and the subsequent communicative acts performed by and between agents contribute to the achievement of the shared goal, i.e. the completion of the examination paper preparation and moderation process.

As such, we believe that the use of the agent-based system provides some evidence that the problems of lack of enforcement, lack of reminders of time-critical tasks, and delays in response suffered by the manual system have been addressed. Table 2 compares the features of the manual and the agent-based systems and highlights the improvements.

Table 2: Comparison between Manual and Automated (Agent-based) Systems

    Features               Manual            Automated
    Human cognitive load   High              Reduced
    Process tracking       No                Yes
    Merit/demerit system   No                Yes
    Reminder/alerting      No                Yes
    Offline messaging      Not applicable    Yes
    Housekeeping           Inconsistent      Consistent
    Document submission    Human-dependent   Immediate
    Feedback               Human-dependent   Immediate

6. Conclusions and Further Work

In this research, we developed and simulated a collaborative framework based on the communication between agents using the FIPA agent communication protocol. We demonstrated the usefulness of the system in taking over the timing and execution of scheduled tasks from humans to achieve a shared goal. The important tasks, i.e. the preparation and moderation tasks, are still performed by humans. The agents perform communicative acts to other agents when the tasks are completed. Such acts help reduce the cognitive load of humans in performing scheduled tasks and improve the collaborative process.

Our agents are collaborative and autonomous, but they are not learning agents. In our future work, we will explore and incorporate machine learning capabilities into our agents. The agents will learn from previous experiences and enhance the EPMP process.

References

[1] Ahmed M., Ahmad M. S., Mohd Yusoff M. Z., "A Review and Development of Agent Communication Language," Electronic Journal of Computer Science and Information Technology (eJCSIT), ISSN 1985-7721, Vol. 1, No. 1, pp. 7-12, May 2009.
[2] Chen J. J-Y., Su S-W., "AgentGateway: A Communication Tool for Multiagent Systems," Information Sciences, Vol. 150, Issues 3-4, pp. 153-154, 2003.
[3] Finin T., Fritzson R., McKay D., McEntire R., "KQML as an Agent Communication Language," Proceedings of the Third International Conference on Information and Knowledge Management (CIKM '94), 1994.
[4] FIPA ACL Message Structure Specification: SC00061G, Dec. 2002.
[5] FIPA Ontology Service Specification: XC00086D, Aug. 2001.
[6] FIPA Communicative Act Library Specification: SC00037J, Dec. 2002.
[7] Fleurke M., Ehrler L., Purvis M., "JBees – An Adaptive and Distributed Framework for Workflow Systems," Proc. IEEE/WIC International Conference on Intelligent Agent Technology, Halifax, Canada, 2003.
[8] Fox M. S., Gruninger M., "On Ontologies and Enterprise Modelling," Enterprise Integration Laboratory, Dept. of Mechanical & Industrial Engineering, University of Toronto.
[9] Genesereth M. R., Ketchpel S. P., "Software Agents," Communications of the ACM, Vol. 37, No. 7, July 1994.
[10] Gruber T. R., "A Translation Approach to Portable Ontologies," Knowledge Acquisition, 5(2):199-220, 1993.
[11] Guarino N., Giaretta P., "Ontologies and Knowledge Bases: Towards a Terminological Clarification," in N. Mars (Ed.), Towards Very Large Knowledge Bases: Knowledge Building and Knowledge Sharing, pp. 25-32, IOS Press, Amsterdam, 1995.
[12] Guerra-Hernandez A., El Fallah-Seghrouchni A., Soldano H., "Learning in BDI Multi-agent Systems," Universite Paris, Institut Galilee.
[13] Heflin J. D., "Towards the Semantic Web: Knowledge Representation in a Dynamic Distributed Environment," PhD Dissertation, 2001.
[14] Labrou Y., Finin T., "Semantics for an Agent Communication Language," PhD Dissertation, University of Maryland, 1996.
[15] Muehlen M. Z., Rosemann M., "Workflow-based Process Monitoring and Controlling – Technical and Organizational Issues," Proc. 33rd Hawaii International Conference on System Sciences, Wailea, IEEE Press, 2000.
[16] Pasquier P., Chaib-draa B., "Agent Communication Pragmatics: The Cognitive Coherence Approach," Cognitive Systems Research, Vol. 6, Issue 4, pp. 364-395, 2005.
[17] Payne T. R., Paolucci M., Singh R., Sycara K., "Communicating Agents in Open Multiagent Systems," First GSFC/JPL Workshop on Radical Agent Concepts (WRAC), 2002.
[18] Perrault C. R., Cohen P. R., "Overview of Planning Speech Acts," Dept. of Computer Science, University of Toronto.
[19] Savarimuthu B. T. R., Purvis M., Fleurke M., "Monitoring and Controlling of a Multiagent Based Workflow System," Proc. Australasian Workshop on Data Mining and Web Intelligence (DMWI2004), Dunedin, New Zealand, CRPIT, Vol. 32, M. Purvis (Ed.), ACS, pp. 127-132.
[20] Searle J. R., Kiefer F., Bierwisch M. (Eds.), Speech Act Theory and Pragmatics, Springer, 1980.
[21] Wang M., Wang H., "Intelligent Agent Supported Workflow Monitoring System," CAiSE 2002, LNCS 2348, pp. 787-791, 2002.
[22] http://www.lpa.co.uk/chi.htm

Authors Profile

Moamin A. Mahmoud received his B.Sc. in Mathematics from the College of Mathematics and Computer Science, University of Mosul, Iraq in 2008. Currently, he is enrolled in the Master of Information Technology program at the College of Graduate Studies, Universiti Tenaga Nasional (UNITEN), Malaysia. During his studentship at UNITEN, he conducted additional laboratory work for degree students at the College of Information Technology. His current research interests include software agents and multiagent systems.

Mohd S. Ahmad received his B.Sc. in Electrical and Electronic Engineering from Brighton Polytechnic, UK in 1980. He started his career as a power plant engineer specialising in Process Instrumentation and Control in 1980. After completing his MSc in Artificial Intelligence at Cranfield University, UK in 1995, he joined UNITEN as a Principal Lecturer and Head of the Department of Computer Science and Information Technology. He obtained his PhD from Imperial College, London, UK in 2005, and has been an associate professor at UNITEN since 2006. His research interests include applying constraints to develop collaborative frameworks in multi-agent systems, collaborative interactions in multi-agent systems, and tacit knowledge management using AI techniques.

Mohd Z. M. Yusoff obtained his BSc and MSc in Computer Science from Universiti Kebangsaan Malaysia in 1996 and 1998 respectively. He started his career as a Lecturer at UNITEN in 1998 and has been a Principal Lecturer at UNITEN since 2008. He has produced and presented more than 40 papers at local and international conferences. His research interests include modeling and applying emotions in various domains including educational systems and software agents, modeling trust in computer forensics, and integrating agents in knowledge discovery systems.
Modified Feistel Cipher Involving Interlacing and Decomposition

K. Anup Kumar1 and V.U.K. Sastry2

1 Associate Professor, Department of Computer Science and Engineering, SNIST, Hyderabad, Andhra Pradesh, India. k_anupkumar@yahoo.com
2 Dean R & D, Department of Computer Science and Engineering, SNIST, Hyderabad, Andhra Pradesh, India. vuk_sastry@rediffmail.com
Abstract: In this paper, we discuss the generation of a large block cipher of 256 bits by using a modified Feistel structure involving the basic concepts of interlacing, decomposition and key based random permutations. In each round, we perform decomposition before encryption and interlacing after encryption. The key based random permutations and substitutions used in this process are similar to the ones we published in our previous paper. The cryptanalysis carried out in this paper indicates that the cipher cannot be broken by any cryptanalytic attack, due to the non-linearity induced by the interlacing, decomposition and key based random permutations.

Keywords: Encryption, Decryption, Plaintext, Ciphertext, Key, Interlacing, Decomposition.

1. Introduction

In the literature of cryptography, the Feistel structure plays a predominant role in generating block ciphers of the required size. Here, the bits of the plaintext undergo a series of diffusion and confusion transformations involving permutations and substitutions. The classical Feistel structure involves a round function, and the number of rounds which provides good strength to the cipher is sixteen.

In this paper, we develop a block cipher of 256 bits, using 16 rounds of the classical Feistel structure. In the process of encryption and decryption, we use the function 'F' in each round, the same as in our conventional Feistel structure with key based random permutations and substitutions published in our previous paper; see reference [6]. To get proper mixing of bits between two consecutive rounds, to introduce non-linearity, and to counter cryptanalytic attacks, we use the concepts of interlacing and decomposition. Our interest is to develop a block cipher using a Feistel network which cannot be broken by any cryptanalytic attack.

In section 2 of this paper, we introduce the processes of interlacing and decomposition in the Feistel network, demonstrated in figures. In section 3, we discuss the development of the cipher, and we present the algorithms for encryption, decryption, interlacing and decomposition in section 4. We illustrate the cipher in section 5 and investigate the cryptanalytic attacks on the cipher in section 6. In section 6.3, we discuss the avalanche effect, which is followed by the conclusion in section 7 and the references in section 8.

2. Interlacing and Decomposition

Let us illustrate the process of decomposition first. Let 'P' be the plaintext of length 256 bits. Let us divide this 256 bit plaintext block into four small blocks of 64 bits each. Let C0 = P be the initial plaintext. Thus we get B01, B02, B03, B04 as 64 bit blocks by placing the first 64 bits of 'C0' in 'B01', the next 64 bits of 'C0' in 'B02', and so on.

Hence,

    Ck = Σ Bki,j, such that i = 1 to 4, j = 1 to 64 and k = 0 to 16,

where k = 0 indicates the initial plaintext, k = m indicates the ciphertext after the mth round, and Σ indicates concatenation of bits.

Let C0 = { C01, C02, C03, ..., C0256 }. Then

    Bmi = Σ Cm(j+k), where i = 1 to 4, j = 1 to 64 and k = 64*(i - 1).

Therefore,

    Bm1 = { Cm1, Cm2, Cm3, ..., Cm64 }         (2.1)
    Bm2 = { Cm65, Cm66, Cm67, ..., Cm128 }     (2.2)
    Bm3 = { Cm129, Cm130, Cm131, ..., Cm192 }  (2.3)
    Bm4 = { Cm193, Cm194, Cm195, ..., Cm256 }  (2.4)

We perform decomposition before encryption so that the large 256 bit block is divided into small 64 bit blocks. Hence, encryption of these small blocks can be done in parallel and faster. Moreover, decomposition allows us to introduce enough confusion in a large block cipher, due to which the desired avalanche effect is maintained; see (6.3).

Now let us illustrate the process of interlacing. We perform interlacing after encryption is performed on the small blocks Bm1, Bm2, Bm3, Bm4. Let cm+11, cm+12, cm+13, cm+14 be the corresponding ciphers obtained after encryption. Let 'Ci' be the 256 bit cipher obtained after interlacing the ciphers cm+11, cm+12, cm+13, cm+14, where 'i' indicates the round after which interlacing is performed and i = m + 1. In the process of interlacing, we take the first bit of 'cm+11' and place it as the first bit of Ci; next we take the first bit of 'cm+12' and place it as the second bit of Ci; similarly, the first bits of 'cm+13' and 'cm+14' are placed as the third and fourth bits of Ci. This process is continued till all the bits of cm+11, cm+12, cm+13, cm+14 are combined into Ci. Therefore,

    Ci = { c1,1, c2,1, c3,1, c4,1, c1,2, c2,2, c3,2, c4,2, ..., c1,64, c2,64, c3,64, c4,64 }   (2.5)

Thus, the process of interlacing allows us to mix the bits thoroughly before beginning the next round. Interlacing and decomposition enable us to perform variable permutations and substitutions on the bits in each round.
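As a minimal sketch of these two operations, treating a block as a string of '0'/'1' characters (the helper names decompose and interlace are ours, chosen to match the terminology above):

    def decompose(block256):
        # Split a 256 bit block into the four 64 bit blocks B1..B4 of (2.1)-(2.4).
        assert len(block256) == 256
        return [block256[i * 64:(i + 1) * 64] for i in range(4)]

    def interlace(blocks):
        # Merge four 64 bit blocks bit by bit, as in (2.5):
        # c1,1 c2,1 c3,1 c4,1 c1,2 c2,2 ... c4,64
        return ''.join(blocks[j][n] for n in range(64) for j in range(4))

    blocks = decompose('01' * 128)
    mixed = interlace(blocks)
    assert [mixed[j::4] for j in range(4)] == blocks   # every 4th bit de-interlaces

Note that de-interlacing is simply reading every fourth bit, which is what makes the operation cheap to invert during decryption.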
The following figures explain how interlacing and decomposition are used.

[Figure: Decomposition — the 256 bit block C1...C256 is decomposed into the 64 bit blocks B1, B2, B3, B4.]

[Figure: Interlacing — the 64 bit cipher blocks C1, C2, C3, C4 are interlaced bit by bit (c1,1 c2,1 c3,1 c4,1 c1,2 ... c4,64) into the 256 bit ciphertext Ci.]

3. Development of Cipher

Let us consider a block of plaintext 'P' consisting of 32 characters. By using the EBCDIC code, each character can be represented in terms of 8 bits; the entire plaintext of 32 characters then yields a block containing 256 bits. Let this initial plaintext be represented as C0.

Let the key 'K' contain 16 integers; then the 8 bit binary representation of these integers yields a block containing 128 bits. Let this block be denoted as 'k'. Let the first 32 bits of 'k' be treated as k1 and the next 32 bits of 'k' as k2; similarly, we get two more keys, 'k3' and 'k4'. As we use four different blocks B1, B2, B3, B4 of 64 bits each for encryption, the round keys are obtained by applying the required transformations on k1, k2, k3 and k4 published in our previous paper; see reference [6].

The following figure shows the process proposed for using interlacing and decomposition during encryption/decryption in the Feistel structure.

[Figure: Encryption involving interlacing and decomposition — the 256 bit plaintext C0 is decomposed into four 64 bit blocks; in each of the 16 rounds the four blocks pass through the round function F and are then interlaced, and the result is decomposed again for the next round, finally yielding the 256 bit ciphertext C16.]

Note: the permutations, substitutions and key generation used during encryption, and the reverse permutations, substitutions and key generation used during decryption, are discussed in our earlier paper; see reference [6].
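The preparation of the 256 bit plaintext block and the four 32 bit sub-keys described above can be sketched as follows. Section 3 specifies EBCDIC, although the bit strings given later in Section 5 correspond to ASCII codes, so the encoding used here is ASCII; all function names are illustrative.

    def to_bits(byte_seq):
        # 8 bit binary representation of a sequence of byte values
        return ''.join(format(b, '08b') for b in byte_seq)

    def prepare_blocks(plaintext, key_integers):
        # 32 characters -> 256 plaintext bits; 16 integers -> 128 key bits -> k1..k4
        assert len(plaintext) == 32 and len(key_integers) == 16
        c0 = to_bits(plaintext.encode('ascii'))
        k = to_bits(bytes(key_integers))
        k1, k2, k3, k4 = (k[i * 32:(i + 1) * 32] for i in range(4))
        return c0, (k1, k2, k3, k4)

    c0, subkeys = prepare_blocks('O Lord, Please save me from evil',
                                 [155, 23, 59, 3, 111, 26, 91, 36,
                                  77, 148, 87, 59, 118, 2, 65, 181])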
[Figure: Decryption involving interlacing and decomposition — the 256 bit ciphertext C16 is decomposed and passed through the round function F, with interlacing after each round, from round 16 down to round 1, recovering the 256 bit plaintext.]

We generate the keys for the respective rounds, denoted krm1, krm2, krm3, krm4, such that if krmi is a round key, then 'i' indicates the block and 'm' indicates the round.

The initial plaintext of 256 bits is represented as C0. Decompose C0 into four blocks of 64 bits each; these can be represented as B01, B02, B03 and B04. Therefore,

    Bmi = < Cm >

where 'm' indicates the round after which decomposition is performed, 'i' indicates the block number (i = 1 to 4), and < Cm > indicates decomposition.

In the first round, encryption is done in the following way. We perform the required transformations on k1, k2, k3 and k4 to get krn1, krn2, krn3, krn4. Then

    Cni = Fkrni( Bmi ),  i = 1 to 4 indicating the ith block,

where 'F' indicates encryption and krni indicates the round key for the nth round on the ith block, with n = m + 1. After encryption in the nth round, we get the ciphertext as four blocks Cn1, Cn2, Cn3, Cn4. Next we perform interlacing after encryption:

    Cn = > Cni <

where i = 1 to 4 indicates the cipher block, n = 1 to 16 indicates the round after which interlacing is performed, and > Cni < represents interlacing.

Similarly, during decryption, we proceed in the same way as discussed above, performing the reverse transformations on the key. See reference [6] for the reverse transformations used.

4. Algorithms

4.1 Algorithm for Encryption

    BEGIN
    C0 = P                          // initialize 256 bit plaintext
    for i = 1 to 16
    {
        for j = 1 to 4
        {
            Bi-1j = < Ci-1 >        // Decompose
        }
        for j = 1 to 4
        {
            Cij = Fkrij( Bi-1j )    // Encryption
        }
        Ci = > Cij <                // Interlace
    }
    END
4.2 Algorithm for Decryption

    BEGIN
    C16 = ciphertext                 // initialize 256 bit ciphertext
    for i = 16 to 1
    {
        for j = 1 to 4
        {
            Bij = < Ci >             // Decompose
        }
        for j = 1 to 4
        {
            Ci-1j = Fkrij( Bij )     // Decryption
        }
        Ci-1 = > Ci-1j <             // Interlace
    }
    END

4.3 Algorithm for Decomposition

    BEGIN
    < Ci-1 >                         // during the ith round
    {
        for j = 1 to 4
        {
            for n = 1 to 64
            {
                Bi-1j[n] = Ci-1[ (j - 1)*64 + n ]
            }
        }
    }
    END

4.4 Algorithm for Interlacing

    BEGIN
    > Ci-1j <
    {
        for n = 1 to 64
        {
            Ci-1[ (j - 1)*64 + n ] = Ci-1j[ n ]
        }
    }
    END
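Because each round is decompose → F → interlace, the purely structural inverse of the encrypt sketch above is to undo the interlacing, invert the round function, and concatenate, from round 16 down to round 1. Algorithm 4.2 above instead keeps the forward decompose/interlace order and pushes all inversion into the reverse key transformations of [6]; the sketch below is therefore an alternative formulation stated for the toy F, which is its own inverse (XOR applied twice).

    def de_interlace(block256):
        # Inverse of interlace: bits j, j+4, j+8, ... belong to block j.
        return [block256[j::4] for j in range(4)]

    def decrypt(c16, round_keys):
        c = c16
        for i in reversed(range(16)):
            blocks = de_interlace(c)                                     # undo Interlace
            blocks = [F(blocks[j], round_keys[i][j]) for j in range(4)]  # toy F inverts itself
            c = ''.join(blocks)                                          # undo Decompose
        return c

    # Round trip with the toy round function:
    # keys = [['0' * 32] * 4 for _ in range(16)]
    # assert decrypt(encrypt(c0, keys), keys) == c0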
5. Illustration of the Cipher

Consider the plaintext P = { O Lord, Please save me from evil }. Let the key K = { 155, 23, 59, 3, 111, 26, 91, 36, 77, 148, 87, 59, 118, 2, 65, 181 }.

The 8 bit binary representations of the plaintext P and the key K are as follows. The initial plaintext is C0 = P:

    01001111001000000100110001101111011100100110
    01000010110000100000 01010000011011000110010
    10110000101110011011001010010000001110011011
    00001011101100110010100100000011011010110010
    10010000011001100111001001101111011011010010
    000001100101011101100110100101101100           (5.1)

The initial key k is:

    10011011000101110011101100000011011011110001
    10100101101100100100010011011001010001010111
    0011101101110110000000100100000110110101       (5.2)

Let the plaintext be decomposed into B01, B02, B03, B04. The respective 64 bit blocks after decomposition are as follows:

    01001111001000000100110001101111011100100110
    01000010110000100000                           (5.3)

    01010000011011000110010101100001011100110110
    01010010000001110011                           (5.4)

    01100001011101100110010100100000011011010110
    01010010000001100110                           (5.5)

    01110010011011110110110100100000011001010111
    01100110100101101100                           (5.6)

Permute the bits in the key 'k' by using the random key based permutations published in our previous paper; see reference [6]. Let this permuted key be divided into four equal-sized blocks and used as round keys kr11, kr12, kr13, kr14 for the blocks B01, B02, B03, B04 respectively.

Now we encrypt these four blocks with their respective round keys, with the help of the round function 'F' as described in our previously published paper; see reference [6]. The corresponding cipher blocks C11, C12, C13, C14 obtained after encryption in the first round are as follows:

    01100001011101100110010100100000011011010110
    01010010000001100110                           (5.7)

    01110010011011110110110100100000011001010111
    01100110100101101100                           (5.8)

    01101000011001000000000101001001010101111111
    00011011111101111001                           (5.9)

    00100010011110001111101011001000001001111110
    10011010100100100010                           (5.10)

Next, we interlace these four blocks to get the block cipher C1, so that enough confusion and nonlinearity is induced by mixing the bits of these small block ciphers. After applying interlacing, we get the following block cipher as C1:

    00001110111101000010000001011000000011111111
    10010101111011000100000111011101000101011100
    00011110000100111100000000110000000000100000
    11101101001010001111001111110011111111110110
    00011100010010100011010011110010011100100010
    011100001110111100100110110010010010           (5.11)

Similarly, by using the respective round and sub-keys, we continue the process up to 16 rounds and we get the following cipher:

    10011111110011001000010110011010110000010111
    01011000100011110111001000111110111101000101
    00010001001110000001001000100110110000001001
    01110100001000101100101010001111001001111100
    11110111000001001010000000101001101011011000
    011111000010000011000110011011101110           (5.12)

Since the process of decryption is the same as the process of encryption, we get the plaintext back by following similar steps as illustrated above, but with reverse permuted keys.

6. Cryptanalysis

Now, let us examine the brute force attack and the known plaintext attack on our cipher to assess its strength. First, we show that the brute force attack is formidable; the known plaintext attack leads to a system of equations from which the unknown key cannot be determined.

6.1 Brute Force Attack

We use a 128 bit key k in each round; we divide k into four blocks, perform the required transformations and get the round sub-keys kr11, kr12, kr13, kr14 for the plaintext blocks B01, B02, B03, B04 respectively.

For a brute force attack, if a round key has to be guessed, we need an exhaustive search of the key space:

    2^128 ≈ (2^10)^13 ≈ (10^3)^13 ≈ 10^39.         (6.1.1)

Since it would take many years to test each and every possible key within such a huge key space, the brute force attack is not practical against our algorithm, as we cannot afford so many years in searching for the exact key.

6.2 Known Plaintext Attack

In this case, the attacker has as many plaintext-ciphertext pairs as required. In our present paper, it is worth noticing how the interlacing and decomposition concepts introduced here handle the known plaintext attack. Let us first understand how the classical Feistel cipher is prone to the known plaintext attack, and then discuss how our modified Feistel cipher tackles this problem.

In the classical Feistel cipher network, the problem is with a particular set of bits which always undergoes similar transformations in every successive round. For example, the first six bits always go into the first substitution box. Therefore, if we have enough plaintext-ciphertext pairs, one can easily guess the values used in a substitution box while ignoring the other substitution boxes. Similarly, one will be able to guess the key bits also. This problem does not exist in our modified algorithm, because we use four independent blocks of encryption in each round. It is ensured that bits after a particular round will not enter the same substitution boxes and will not use the same permutations and key. This is due to the interlacing and decomposition concepts, which scatter the bits into four different blocks. Thus, interlacing and decomposition allow us to mix the bits properly and help us introduce high nonlinearity into the algorithm.

6.3 Avalanche Effect

Let the plaintext be "O Lord, Please save me from evil". By following the process of encryption, we get the cipher:

    10011111110011001000010110011010110000010111
    01011000100011110111001000111110111101000101
    00010001001110000001001000100110110000001001
    01110100001000101100101010001111001001111100
    11110111000001001010000000101001101011011000
    011111000010000011000110011011101110           (6.3.1)
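The avalanche comparisons in this section amount to counting the Hamming distance between two cipher bit strings; a minimal sketch:

    def hamming(bits_a, bits_b):
        # Number of bit positions in which two equal-length bit strings differ
        assert len(bits_a) == len(bits_b)
        return sum(a != b for a, b in zip(bits_a, bits_b))

For a 256 bit block, an ideal cipher flips about 128 bits on average for a small input change; the counts of 125 reported below are close to this ideal.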
by using this new key ‘k’ we encrypt the same plaintext and into 4 equal parts of 64 bit blocks so that, cipher bits
we obtain the corresponding cipher as obtained after each round scatter into different blocks in the
next round. By doing so, the cryptanalysis part becomes
00110010010101011010111001110010111111110110 more difficult as the final cipher text obtained will depend
01010011101110000101001000100010100001011101 on different substitution boxes and different transformations
01010101111010111100000111000001010001111010
00110101011110010110101101101010010101101010 References
01010101110010001011011100111100011001000100
100010011001010011010010101111010011 (6.3.2) [1] William Stallings, “ Cryptography and Network
Security: Principles & Practices”, Third edition, 2003,
Comparing (6.3.1) and (6.3.2), we notice that the two Chapter 2 and 3.
cipher blocks differ by 125 bits out of the total 256 bits. This [2] Feistel. H. “ Cryptography and Computer Privacy” ,
shows that the algorithm exhibits strong avalanche effect. Scientific American, Vol. 228, No. 5. pp 15 – 23,
1973.
In the second case, let the key 'K' be fixed, but change the plaintext so that the new plaintext and the original one differ by exactly one bit. This can be accomplished by changing the first character of the plaintext from 'O' to 'P', because the ASCII values of 'O' and 'P' differ by one. We get the cipher text from this new plaintext as

11011100010001000100000100011000001000000101
00100100001110111010101111000001101100100110
11110010110010010000111001111001000111101000
00010001010011100100100000111000101001000101
00101010011111010011010110100010010010001100
001101101011001011100010001010101010   (6.3.3)

On comparing (6.3.1) and (6.3.3), we notice that the two cipher blocks again differ by 125 bits out of 256 bits. This shows that the interlacing and decomposition introduced in our encryption algorithm exhibit a good avalanche effect.

7. Computational Results and Conclusion

In this paper, we have developed a block cipher of 256 bits. The plaintext is of 32 characters, and each character is represented by its 8-bit binary equivalent. The key contains 16 integers, which are converted into their 8-bit binary equivalents. The algorithms used for encryption, decryption, decomposition, interlacing, etc. are all written in the C language.

From the cryptanalysis presented, we found that a brute force attack is not feasible. Enough confusion and diffusion are introduced in the encryption algorithm through the concepts of interlacing and decomposition; this is proved by the avalanche effect shown in (6.3). By using interlacing and decomposition, a 256-bit block is broken into 4 equal parts of 64-bit blocks, so that cipher bits obtained after each round scatter into different blocks in the next round. By doing so, the cryptanalysis becomes more difficult, as the final cipher text obtained depends on different substitution boxes and different transformations.
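The decomposition step just described can be pictured with a small sketch. This is only an illustration under assumed conventions (the paper does not spell out the exact interlacing permutation here): a 256-bit block is split into four 64-bit sub-blocks, which are then reassembled so that the bits of each part scatter across all four parts.

    # Illustrative only: split a 256-bit block into four 64-bit parts,
    # then redistribute bits round-robin so neighbours land in
    # different parts (an assumed convention, not the paper's exact one).
    def decompose(block: str):
        assert len(block) == 256
        return [block[i:i + 64] for i in range(0, 256, 64)]

    def interlace(parts):
        merged = "".join(parts)
        return ["".join(merged[j] for j in range(i, 256, 4)) for i in range(4)]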

Acknowledgement

The authors are very thankful to Prof. Depanwita Roy Chaudhury, IIT Kharagpur, India, for giving necessary suggestions and for her valuable inputs while writing this paper. The authors are also very thankful to the management of Sreenidhi Institute of Science and Technology for their support and encouragement during this research work.

Authors Profile

K. Anup Kumar is working as an Associate Professor in the Department of Computer Science and Engineering, Sreenidhi Institute of Science and Technology. He is pursuing his PhD in the area of information security under the guidance of Prof. V.U.K. Sastry of Jawaharlal Nehru Technological University, Hyderabad, India. He has published two papers in international journals. His research interests include cryptography, steganography, and parallel processing systems.

Prof. V.U.K. Sastry is working as the Director of the School of Computer Science and Informatics and as Dean of R&D, CSE Department, at Sreenidhi Institute of Science and Technology, Hyderabad, India. He has successfully guided many PhDs; his research interests are information security, image processing, and data warehousing and data mining. He is a reviewer for many international journals.

A Comprehensive Analysis of Voice Activity Detection Algorithms for Robust Speech Recognition System under Different Noisy Environment

C. Ganesh Babu1, Dr. P. T. Vanathi2, R. Ramachandran3, M. Senthil Raja4, R. Vengatesh5

1 Research Scholar (PSGCT), Associate Professor / ECE, Bannari Amman Institute of Technology, Sathyamangalam, India. E-mail: bits_babu@yahoo.co.in
2 Assistant Professor / ECE, PSGCT, Coimbatore, India. E-mail: pt_vani@yahoo.com
3,4,5 UG Scholar, Bannari Amman Institute of Technology, Sathyamangalam, India.

Abstract: Speech signal processing has not been used much in the field of electronics and computers due to the complexity and variety of speech signals and sounds. However, modern processes, algorithms, and methods can process speech signals easily and also recognize the text. Demand for speech recognition technology is expected to rise dramatically over the next few years as people use their mobile phones as all-purpose lifestyle devices. In this paper, we implement a speech-to-text system using isolated word recognition with a vocabulary of ten words (digits 0 to 9) and statistical modeling (Hidden Markov Model - HMM) for machine speech recognition. In the training phase, the uttered digits are recorded using 8-bit Pulse Code Modulation (PCM) with a sampling rate of 8 kHz and saved as a wave file using sound recorder software. The system performs speech analysis using the Linear Predictive Coding (LPC) method of a chosen degree. From the LPC coefficients, the weighted cepstral coefficients and cepstral time derivatives are derived; from these variables the feature vector for a frame is arrived at. Then, the system performs Vector Quantization (VQ) utilizing a vector codebook; the resulting vectors form the observation sequence. For a given word in the vocabulary, the system builds an HMM model and trains the model during the training phase. The training steps, starting from Voice Activity Detection (VAD), are performed using PC-based Matlab programs. Our current framework uses a speech processing module including a Subband Order Statistics Filter based Voice Activity Detection with Hidden Markov Model (HMM)-based classification and noise language modeling to achieve effective noise knowledge estimation.

Keywords: Hidden Markov Model, Vector Quantization, Subband OSF based Voice Activity Detection.

1. INTRODUCTION

Currently, Speech Recognition Systems run into many technical barriers in modern applications. An important drawback affecting most of these applications is harmful environmental noise, which reduces system performance. Most noise compensation algorithms require a Voice Activity Detector (VAD) to estimate the presence or absence of the speech signal [1]. In this paper, we compare the performance of the VAD algorithm in the presence of different types of noise (airport, babble, train, car, street, exhibition, restaurant and station) for Automatic Speech Recognition (ASR) in a comprehensive manner. The proposed Speech Recognition System for robust noise environments is shown in Figure 1.

Figure 1. Proposed Robust Speech Recognition System (input speech → noise estimation → VAD → output)

1.1 Speech Characteristics

Speech signals are composed of a sequence of sounds. Sounds can be classified into three distinct classes according to their mode of excitation.
(i) Voiced sounds are produced by forcing air through the glottis with the tension of the vocal cords adjusted so that they vibrate in a relaxation oscillation, thereby producing a quasi-periodic pulse of air which excites the vocal tract.
(ii) Fricative or unvoiced sounds are generated by forming a constriction at some point in the vocal tract and forcing air through the constriction at a high enough velocity to produce turbulence.
(iii) Plosive sounds result from making a complete closure and abruptly releasing it.

1.2 Overview of Speech Recognition

A Speech Recognition System is often degraded in performance when there is a mismatch between the acoustic conditions of the training and application environments. This mismatch may come from various sources, such as additive noise, channel distortion, different speaker characteristics and different speaking modes. Various robustness techniques have been proposed to reduce this mismatch and thus improve the recognition performance [12]. In the last few decades, many methods have been proposed to enable ASR systems to compensate for or adapt to mismatch due to inter-speaker differences, articulation effects and microphone characteristics [14].

The paper is organized as follows. Section 2 reviews the theoretical background of VAD algorithms, and Section 2.1

shows the principle of the VAD algorithm. Section 3 explains the subband OSF based VAD implementation, Section 4 describes the VAD using HMM, results are discussed in Section 5, and the paper is concluded in Section 6.

2. Voice Activity Detection

Voice is differentiated into speech or silence based on speech characteristics. The signal is sliced into adjoining frames, and a real-valued non-negative parameter is associated with each frame. If this parameter exceeds a certain threshold, the signal is classified as speech; otherwise it is classified as non-speech.

The basic principle of a VAD device is that it extracts some measured features or quantities from the input signal and then compares these values with thresholds. Voice activity (VAD = 1) is declared if the measured value exceeds the threshold; otherwise no speech activity (VAD = 0) is declared. In general, a VAD algorithm outputs a binary decision on a frame-by-frame basis, where a frame of the input signal is a short unit of time such as 20-40 ms.

2.1 VAD Decision Rule

Once the input speech has been de-noised, its spectrum magnitude Y(k, l) is processed by means of a (2N+1)-frame window. Spectral changes around an N-frame neighborhood of the actual frame are computed using the N-order Long-Term Spectral Envelope (LTSE):

LTSE_N(k, l) = max{ Y(k, l+j), j = -N, ..., +N }   (1)

where l is the actual frame for which the VAD decision [12] is made and k = 0, 1, ..., NFFT-1 is the spectral band. The noise suppression block has to perform the noise reduction of the block of frames

{ Y(k, l-N), ..., Y(k, l+N) }   (2)

before the LTSE at the l-th frame can be computed. This is carried out as follows: during initialization, the noise suppression algorithm is applied to the first 2N+1 frames and, in each iteration, the (l+N+1)-th frame is de-noised, so that Y(k, l+N+1) becomes available for the next iteration.

The VAD decision rule is formulated in terms of the Long-Term Spectral Divergence (LTSD) [1], calculated as the deviation of the LTSE with respect to the residual noise spectrum N(k) and defined by

LTSD_N(l) = 10 log10( (1/NFFT) Σ_{k=0}^{NFFT-1} LTSE²(k, l) / N²(k) )   (3)

If the LTSD is greater than an adaptive threshold γ, the actual frame is classified as speech; otherwise it is marked as non-speech. A hangover delays the speech to non-speech transition in order to prevent low-energy word endings from being misclassified as silences. On the other hand, if the LTSD exceeds a given threshold LTSD0, the hangover algorithm is turned off to improve non-speech detection accuracy in low-noise environments. The VAD is designed to be adaptive to time-varying noise environments, with the following algorithm being used for updating the noise spectrum during non-speech periods:

N(k) ← α N(k) + (1 - α) N̄_K(k)   (4)

where N̄_K(k) is the average spectrum magnitude over a K-frame neighbourhood:

N̄_K(k) = (1/(2K+1)) Σ_{j=-K}^{+K} Y(k, l+j)   (5)
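A compact sketch of the decision rule, using equations (1) and (3) as reconstructed above, and assuming the de-noised magnitude spectra are held in a NumPy array Y of shape (NFFT, num_frames) and the residual noise spectrum in noise (both hypothetical variable names):

    import numpy as np

    def ltse(Y, l, N_order):
        # N-order Long-Term Spectral Envelope around frame l, eq. (1)
        window = Y[:, l - N_order : l + N_order + 1]
        return window.max(axis=1)

    def ltsd(Y, noise, l, N_order, nfft):
        # Long-Term Spectral Divergence of frame l, eq. (3)
        env = ltse(Y, l, N_order)
        return 10.0 * np.log10(np.sum(env ** 2 / noise ** 2) / nfft)

    def vad_decision(Y, noise, l, N_order, nfft, gamma):
        # Speech (1) if the divergence exceeds the adaptive threshold gamma.
        return 1 if ltsd(Y, noise, l, N_order, nfft) > gamma else 0

The hangover logic and the adaptive update of gamma and noise from equations (4)-(5) would wrap around this per-frame decision.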

3. Subband OSF Based VAD

An improved voice activity detection algorithm employing long-term signal processing and maximum spectral component tracking is used here. It improves the speech/non-speech discriminability and the speech recognition performance in noisy environments. Two issues are addressed using the VAD: the first is the performance of the VAD in low-noise conditions (low SNR), and the second is performance in noisy (background) environments [1].

Figure 2. Block Diagram of the Subband Order Statistics Filter based VAD (FFT → noise reduction → spectrum smoothing → noise update → Wiener filter design → frequency domain filtering → VAD)

The subband based VAD uses two order statistics filters for the Multi-Band Quantile (MBQ) SNR estimation [3]. The implementation of both OSFs is based on a sequence of 2N+1 log-energy values {E(m - N, k), ..., E(m, k), ..., E(m + N, k)} around the frame to be analyzed [14]. The block diagram of the subband based VAD is shown in Figure 2. The algorithm operates on the subband log-energies: noise reduction is performed first, and the VAD decision is formulated on the de-noised signal. The noisy speech signal is decomposed into 25-ms frames with a 10-ms window shift. Let X(m, l) be the spectrum magnitude for the m-th band at frame l. The design of the noise reduction block is based on Wiener filter theory, whereby the attenuation is a function of the signal-to-noise ratio (SNR) of the input signal. The VAD decision is formulated in terms of the de-noised signal, with the subband log-energies processed by means of order statistics filters [2].

The noise reduction block consists of four stages.

i) Spectrum smoothing: the power spectrum is averaged over two consecutive frames and two adjacent spectral bands.

ii) Noise estimation: the noise spectrum Ne(m, l) is updated by means of a first-order IIR filter on the smoothed spectrum Xs(m, l):

Ne(m, l) = λ Ne(m, l-1) + (1 - λ) Xs(m, l)   (6)

where λ = 0.99 and m = 0, 1, ..., NFFT/2.

iii) Wiener filter design: first, the clean signal S(m, l) is estimated by combining smoothing and spectral subtraction:

S(m, l) = γ S(m, l-1) + (1 - γ) max(Xs(m, l) - Ne(m, l), 0)   (7)

where γ = 0.98. Then, the Wiener filter H(m, l) is designed as

H(m, l) = η(m, l) / (1 + η(m, l))   (8)

where

η(m, l) = max(S(m, l) / Ne(m, l), ηmin)   (9)

and ηmin is selected so that the filter frequency response yields a 20 dB maximum attenuation. S'(m, l), the spectrum of the cleaned speech signal, is assumed to be zero at the beginning of the process and is used for designing the Wiener filter through Equations (7) to (9). It is given by

S'(m, l) = H(m, l) X(m, l)   (10)

The filter H(m, l) is smoothed in order to eliminate rapid changes between neighboring frequencies that may often cause musical noise. Thus, the variance of the residual noise is reduced and, consequently, the robustness when detecting non-speech is enhanced. The smoothing is performed by truncating the impulse response of the corresponding causal FIR filter to 17 taps using a Hanning window; owing to this time-domain operation, the frequency response of the Wiener filter is smoothed and the performance of the VAD is improved.

iv) Frequency domain filtering: the smoothed filter Hs(m, l) is applied in the frequency domain to obtain the de-noised spectrum:

Y(m, l) = Hs(m, l) X(m, l)   (11)

4. Hidden Markov Model

The basic theoretical strength of the HMM is that it combines the modeling of stationary stochastic processes (for the short-time spectra) and the temporal relationship among the processes (via a Markov chain) together in a well-defined probability space. This combination allows us to study these two separate aspects of modeling a dynamic process (like speech) using one consistent framework. Another attractive feature of HMMs comes from the fact that it is relatively easy and straightforward to train a model from a given set of labeled training data (one or more sequences of observations).

As mentioned above, the technique used to implement speech recognition is the Hidden Markov Model (HMM) [4][13]. The HMM is used to represent the utterance of a word and to calculate the probability that the model created the observed sequence of vectors. There are some challenges in designing an HMM for the analysis or recognition of speech signals. The HMM broadly works in two phases: phase I is Linear Predictive Coding, and phase II consists of the Vector Quantization, training, and recognition phases.

The present hidden Markov model is represented by equation (12):

λ = (A, B, π)   (12)

where
π = the initial state distribution vector,
A = the state transition probability matrix,
B = the continuous observation probability density function matrix.

Given appropriate values of A, B and π as in equation (12), the HMM can be used as a generator to give an observation sequence

O = O1 O2 ... OT   (13)

(where each observation Ot is one of the symbols from the observation symbol set V, and T is the number of observations in the sequence) as follows:

1) Choose an initial state q1 = Si according to the initial state distribution π.
2) Set t = 1.
3) Choose Ot according to the symbol probability distribution in state Si.
4) Transit to a new state according to the state transition probability distribution for state Si.
5) Set t = t + 1 and return to step 3) if t < T; otherwise terminate the procedure.

The above procedure can be used both as a generator of observations and as a model for how a given observation sequence was generated by an appropriate HMM. After re-estimating the parameters, the model is represented with the notation

λ' = (A', B', π')   (14)

The model is saved to represent that specific observation sequence, i.e., an isolated word.
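The generator procedure in steps 1)-5) can be sketched directly. The sketch assumes a discrete-symbol HMM whose parameters are NumPy arrays pi (initial distribution), A (transition matrix) and B (per-state symbol distributions); these variable names are illustrative:

    import numpy as np

    def generate_observations(pi, A, B, T, rng=np.random.default_rng()):
        # Step 1: draw the initial state from pi; steps 3-5: emit a
        # symbol from B[state], then transit according to A[state].
        state = rng.choice(len(pi), p=pi)
        observations = []
        for _ in range(T):
            observations.append(rng.choice(B.shape[1], p=B[state]))
            state = rng.choice(A.shape[1], p=A[state])
        return observations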

4.1 Linear Predictive Coding Analysis

One way to obtain observation vectors O from speech samples is to perform front-end spectral analysis. The type of spectral analysis often used is linear predictive coding [5]-[9]. The steps of the processing involved are as follows.

i) Preemphasis: the digitized speech signal is processed by a first-order digital network in order to spectrally flatten the signal:

s̃(n) = s(n) - a s(n-1)   (15)

ii) Blocking into frames: sections of NA consecutive speech samples are used as a single frame. Consecutive frames are spaced MA samples apart, so the l-th frame is

x_l(n) = s̃(MA l + n), 0 ≤ n ≤ NA - 1   (16)

iii) Frame windowing: each frame is multiplied by an NA-sample window (a Hamming window) so as to minimize the adverse effects of chopping an NA-sample section out of the running speech signal:

x̃_l(n) = w(n) x_l(n), with w(n) = 0.54 - 0.46 cos(2πn / (NA - 1))   (17)

iv) Autocorrelation analysis: each windowed set of speech samples is autocorrelated to give a set of (p+1) coefficients, where p is the order of the desired LPC analysis:

r_l(m) = Σ_{n=0}^{NA-1-m} x̃_l(n) x̃_l(n+m), m = 0, 1, ..., p   (18)

v) LPC/cepstral analysis: a vector of LPC coefficients is computed from the autocorrelation vector using a Levinson or Durbin recursion method. An LPC-derived cepstral vector is then computed up to the Q-th component:

c_l(m) = a_m + Σ_{k=1}^{m-1} (k/m) c_l(k) a_{m-k}, 1 ≤ m ≤ Q   (19)

vi) Cepstral weighting: the Q-coefficient cepstral vector c_l(m) at time frame l is weighted by a window Wc(m) [5][6]:

Wc(m) = 1 + (Q/2) sin(πm / Q)   (20)

to find the weighted cepstral coefficients

ĉ_l(m) = c_l(m) Wc(m)   (21), (22)

vii) Delta cepstrum: the time derivative of the sequence of weighted cepstral vectors is approximated by a first-order orthogonal polynomial over a finite-length window of frames centered around the current vector [7][8], as denoted by the following equation:

Δĉ_l(m) = G Σ_{k=-K}^{+K} k ĉ_{l+k}(m)   (23)

where G is the gain term chosen to make the variances of ĉ_l(m) and Δĉ_l(m) equal (24). The observation vector for frame l is then formed from the weighted cepstral coefficients and their time derivatives:

O_l = [ĉ_l(1), ..., ĉ_l(Q), Δĉ_l(1), ..., Δĉ_l(Q)]   (25)
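The chain of operations in steps i)-vii) can be condensed into a short sketch. The parameter values below (a, NA, MA, p, Q) are typical textbook choices, not values fixed by this paper, and the Levinson-Durbin recursion is replaced by a direct linear solve for brevity:

    import numpy as np

    def lpc_features(signal, a=0.95, NA=240, MA=80, p=10, Q=12):
        s = np.append(signal[0], signal[1:] - a * signal[:-1])     # (15) preemphasis
        frames = [s[i:i + NA] * np.hamming(NA)                     # (16)-(17) framing
                  for i in range(0, len(s) - NA, MA)]
        feats = []
        for x in frames:
            r = np.correlate(x, x, mode="full")[NA - 1:NA + p]     # (18) autocorrelation
            lpc = np.linalg.solve(                                 # Toeplitz system solve
                np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)]),
                r[1:p + 1])
            c = np.zeros(Q)                                        # (19) LPC cepstrum
            for m in range(1, Q + 1):
                acc = lpc[m - 1] if m <= p else 0.0
                acc += sum((k / m) * c[k - 1] * lpc[m - k - 1]
                           for k in range(1, m) if m - k <= p)
                c[m - 1] = acc
            w = 1 + (Q / 2) * np.sin(np.pi * np.arange(1, Q + 1) / Q)  # (20)
            feats.append(c * w)                                    # (21) weighted cepstrum
        return np.array(feats)

The delta cepstrum of equation (23) would then be computed across consecutive rows of the returned feature matrix.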
4.2 Vector Quantization, Training and Recognition Phases

To use an HMM with a discrete observation symbol density, a Vector Quantizer (VQ) is required to map each continuous observation vector into a discrete codebook index. The major issue in VQ is the design of an appropriate codebook for quantization. The procedure basically partitions the training vectors into M disjoint sets. The distortion steadily decreases as M increases; hence HMMs with codebook sizes from M = 32 to 256 vectors have been used in speech recognition experiments [9]-[10].

During the training phase the system trains an HMM for each digit in the vocabulary. The weighted cepstrum matrices for the various samples and digits are compared with the codebook, and the corresponding nearest codebook vector indices are sent to the Baum-Welch algorithm to train a model for the input index sequence. After training, there are three models for each digit, corresponding to the three samples in our vocabulary set. Then the averages of the A, B and π matrices over the samples are calculated to generalize the models [11].

The input speech sample is preprocessed to extract the feature vector. Then, the nearest codebook vector index for each frame is sent to the digit models. The system chooses the model that has the maximum probability of a match.

5. Results and Discussion

Several experiments were conducted to evaluate the VAD algorithm. The analysis mainly focused on error probabilities. The proposed VAD was evaluated in terms of its ability to discriminate the speech signal from non-speech at different SNR values. The results are shown in Tables 1-10.

Table 1: Performance of VAD for digit '0' for various noise sources

NOISES       0dB  5dB  10dB  15dB
AIRPORT       83   11    18    75
EXHIBITION     0   10    89     9
TRAIN          0    0     0     0
RESTAURANT     8   50     2     9
STREET        94   86    75    95
BABBLE         0   30     6    28
STATION       10   27     3    31
CAR            4    6     1    12

Table 2: Performance of VAD for digit '1' for various noise sources

NOISES       0dB  5dB  10dB  15dB
AIRPORT       18   22    39    37
EXHIBITION    57   60    47    64
TRAIN         11   52    34    57
RESTAURANT    23   35    51    51
STREET        26   41    49    49
BABBLE        21   46    49    35
STATION       54   44    51    50
CAR           25   37    47    32

Table 3: Performance of VAD for digit '2' for various noise sources

NOISES       0dB  5dB  10dB  15dB
AIRPORT       62   18    61    38
EXHIBITION    55   37    48    54
TRAIN         44   37    53    56
RESTAURANT    40   16    44    36
STREET        60   44    58    33
BABBLE        31   49    55    19
STATION       37   36    23    40
CAR           57   58    76    53

Table 4: Performance of VAD for digit '3' for various noise sources

NOISES       0dB  5dB  10dB  15dB
AIRPORT       37   45    48    37
EXHIBITION    54   19    41    30
TRAIN         14   26    51    37
RESTAURANT    27   28    18    43
STREET        48   27    36     0
BABBLE        12   24    16    23
STATION       35   38    30    24
CAR           37   23    40    41

Table 5: Performance of VAD for digit '4' for various noise sources

NOISES       0dB  5dB  10dB  15dB
AIRPORT       64   79    77    59
EXHIBITION    52   64    48    66
TRAIN         46   75    83    74
RESTAURANT    37   43    56    64
STREET        84   69    56    82
BABBLE        84   73    57    78
STATION       77   77    80    63
CAR           73   49    71    67

Table 6: Performance of VAD for digit '5' for various noise sources

NOISES       0dB  5dB  10dB  15dB
AIRPORT       20   32    43    31
EXHIBITION    34   23    16    50
TRAIN         28   27    21    30
RESTAURANT    13   13    31    30
STREET        16   23    36    39
BABBLE        14   26    37    23
STATION       24   14    14    19
CAR           30   27    31    31

Table 7: Performance of VAD for digit '6' for various noise sources

NOISES       0dB  5dB  10dB  15dB
AIRPORT       16   68    66    86
EXHIBITION    51   59    83    88
TRAIN         14    9    24    19
RESTAURANT    15   21    14     7
STREET         1    1     1     3
BABBLE        32   74    25    96
STATION        1    1     1     1
CAR           21   38    27     4

Table 8: Performance of VAD for digit '7' for various noise sources

NOISES       0dB  5dB  10dB  15dB
AIRPORT       27   40    49    44
EXHIBITION    31   38    84    14
TRAIN         35   46    47    50
RESTAURANT    38   43    36    39
STREET        40   43    49    44
BABBLE        28   42    53    36
STATION       33   40    38    36
CAR           53   57    40    41

Table 9: Performance of VAD for digit '8' for various noise sources

NOISES       0dB  5dB  10dB  15dB
AIRPORT       48   53    57    44
EXHIBITION    60   22    56    19
TRAIN         18   54    58    61
RESTAURANT    45   59    58    59
STREET        49   44    53    45
BABBLE        28   58    52    44
STATION       59   48    30    51
CAR           40   54    59    51

Table 10: Performance of VAD for digit '9' for various noise sources

NOISES       0dB  5dB  10dB  15dB
AIRPORT       17   16    15    18
EXHIBITION     7   12    18    10
TRAIN          4   23    12    21
RESTAURANT    14   16    18    16
STREET        13    6     6    15
BABBLE         4   11    13    15
STATION       13    1    10    11
CAR           12    9    15    19

6. Conclusion

The experimental results shown in Tables 1-10 indicate that the VAD algorithm produces better results for certain noises, and the recognition system using the VAD is more robust than other algorithms. For digits '0' and '9' the VAD provides better results for airport and street noises. For digits '1' and '8' it gives better performance for exhibition and station noises. For digit '2' the better recognition occurs for airport, street and car noises. For digit '3' the recognition is good for street and car noises. For digit '4' the VAD performs good recognition for street and babble noises. For digit '5' it works well for exhibition and car noises. For digit '6' the recognition works well in airport and exhibition environments. For digit '7' the performance of the VAD is better for exhibition and babble noises. Thus the VAD works well for utterances of the different digits and extracts the speech signal under different noisy environment conditions. Further research is in the direction of genetic algorithms for robust speech recognition in noisy environments.

Acknowledgement

Firstly, the authors would like to thank the supervisor, Dr. P. T. Vanathi, Professor, Department of Electronics and Communication Engineering, PSG College of Technology, Coimbatore, India. The authors would also like to express their thanks to the Management and Principal of Bannari Amman Institute of Technology, Sathyamangalam, India, and to all who supported the preparation of this paper.

References

[1] J. Ramirez, J.C. Segura, C. Benitez, A. de la Torre, A. Rubio, "Voice activity detection with noise reduction and long-term spectral divergence estimation", IEEE International Conference on Acoustics, Speech, and Signal Processing, Volume 2, pp. 1093-1096, 17-21 May 2004.
[2] Sundarrajan Rangachari, Philipos C. Loizou, "A noise-estimation algorithm for highly non-stationary environments", Speech Communication 48 (2006), pp. 220-231, August 2005.
[3] Javier Ramírez, José C. Segura, Carmen Benítez, Ángel de la Torre and Antonio Rubio, "An Effective Subband OSF-Based VAD With Noise Reduction for Robust Speech Recognition", IEEE Transactions on Speech and Audio Processing, Vol. 13, pp. 1119-1129, November 2005.
[4] Lawrence R. Rabiner, "A tutorial on Hidden Markov Models and selected applications in speech recognition", Proceedings of the IEEE, Vol. 77, No. 2, February 1989.
[5] J. Makhoul, "Linear Prediction: A Tutorial Review", Proceedings of the IEEE, April 1975.
[6] J.D. Markel and A.H. Gray Jr., "Linear Prediction of Speech", New York, NY: Springer-Verlag, 1976.
[7] Y. Tokhura, "A weighted cepstral distance measure for speech recognition", IEEE Trans. Acoust. Speech Signal Processing, Vol. ASSP-35, No. 10, pp. 1414-1422, October 1987.
[8] B.H. Juang, L.R. Rabiner and J.G. Wilpon, "On the use of bandpass filtering in speech recognition", IEEE Trans. Acoust. Speech Signal Processing, Vol. ASSP-35, No. 7, pp. 947-954, July 1987.
[9] J. Makhoul, S. Roucos and H. Gish, "Vector Quantization in Speech Coding", Proc. IEEE, Vol. 73, No. 11, pp. 1551-1558, November 1985.
[10] L.R. Rabiner, S.E. Levinson and M.M. Sondhi, "On the Application of Vector Quantization and Hidden Markov Models to Speaker-Independent Isolated Word Recognition", Bell Syst. Tech. J., Vol. 62, No. 4, pp. 1075-1105, April 1983.
[11] M.T. Balamuragan and M. Balaji, "SOPC-Based Speech-to-Text Conversion", Embedded Processor Design Contest - Outstanding Designs, pp. 83-108, 2006.
[12] Alan Davis, Sven Nordholm, Roberto Togneri, "Statistical Voice Activity Detection Using Low-Variance Spectrum Estimation and an Adaptive Threshold", IEEE Transactions on Audio, Speech and Language Processing, Vol. 14, No. 2, March 2006.
[13] Kaisheng Yao, Kuldip K. Paliwal and Te-Won Lee, "Generative factor analyzed HMM for automatic speech recognition", Speech Communication, Vol. 45, pp. 435-454, January 2005.
[14] Kentaro Ishizuka and Tomohiro Nakatani, "A feature extraction method using subband based periodicity and aperiodicity decomposition with noise robust frontend processing for automatic speech recognition", Speech Communication, Vol. 48, pp. 1447-1457, July 2006.

An Efficient Adaptation Aware Caching Architecture for Multimedia Content Proxies

B.L. Velammal1 and P. Anandha Kumar2

1 Lecturer, Department of CSE, Anna University, Chennai - 25, India. velammalbl@cs.annauniv.edu
2 Assistant Professor, Department of IT, MIT Campus, Anna University, Chennai - 44, India. anandh@annauniv.edu

Abstract: One of the challenges in the development of a proxy-based content adaptation architecture is the implementation of a proxy cache that aims at reducing the amount of transcoding operations and server traffic. In the absence of adaptation-aware caching at content proxies, system performance suffers due to repeated and frequent accesses of the same image versions on the remote content server. An efficient mechanism for caching the most relevant images and their variations can considerably reduce these overheads. The efficiency of a content proxy can be improved by caching the image versions that are frequently accessed by client devices; consequently, a considerable amount of transcoding-related computational load on content servers is removed. This paper addresses these cache management issues by proposing an adaptation-aware architecture for content adaptation proxies that bases its cache removal policy on transcoding cost. The proposed architecture improves efficiency when compared to conventional cache management schemes, resulting in less traffic between the proxy and the remote server and a lower number of transcoding operations.

Keywords: Content adaptation architectures, cache management, multimedia.

1. Introduction

Adaptation of multimedia content can be done at three possible locations: at the client device, at the content server, or at a specialized multimedia proxy between the two. Client-side adaptation requires the adaptation operations to be performed on the client devices receiving these objects. Though this is not unknown, in the current scenario involving widespread use of handhelds, the computational power required for these operations is hard to find. Even when present, the operations take more time, decreasing efficiency compared with what could be obtained by the other schemes. In client-side architectures, cache precedence could increase efficiency considerably, but this is a seldom-used practice, as memory is again a scarce resource.

In the case of server-side adaptation techniques, conventional web servers are imbued with content adaptation processes. Caching of the adapted content here involves storing multiple variations of the same content in order to match client requirement specifications. This reduces the number of transcoding operations required, minimizing the need for computational power but increasing pressure on storage requirements; the trade-off is between available storage and computational power. It is also to be noted that in this case the entire client load is on the server. This, in tandem with a large number of transformations, could possibly bring response time below accepted standards and, at worst, result in system failure.

The proxy-based approach introduces a new layer between the client and the server, the content proxy, effectively forming a three-tier architecture. The content proxy receives requests from the client and responds to them from the data obtained from the server. The need for a cache is implicit here: the proxy can store the variations of objects to respond to client requests. It is easy to see that by associating a server with more than one client (a one-to-many mapping) the load on the server is reduced. There is also an effective separation of the computation and data storage sectors, achieved by exporting computational requirements to the proxy. The effective computational load is reduced as it is distributed, hence lowering individual system costs as well as increasing system reliability, while paving the way for graceful system degradation via load balancing. The advantages of a content proxy are thus widespread and myriad, and this approach is indeed widely used and recommended.

Proxies can exhibit passive caching as well as active caching. In passive caching, if an object is not cached it is retrieved from the server, and older objects are removed in the event of cache overflow. Active caching aims at improving retrieval performance by increasing the likelihood that a requested object will be found in the cache. It can also work as a superset of passive caching based on the load on the proxy. In content adaptation proxies, the choice of using active caching should be based on an analysis of frequently adapted object versions. The architecture proposed in this paper aims at implementing active caching and thereby reducing repeated adaptation operations and traffic between the content server and content proxies.

2. Background and Related Work

Over the past decade, a number of proxy-based architectures



for content adaptation have been proposed. A proxy-based framework for dynamic adaptation in distributed multimedia applications [1] describes the functions of a proxy that can be configured to perform adaptations of multimedia. Another adaptation architecture, called MARCH [2], enables the creation of a service-based overlay network; it provides a dynamic proxy execution environment and mobile-aware servers to decide which set of tasks has to be assigned to each proxy. Related research has also been carried out in Content Distribution Networks (CDNs) [3], presenting topology-aware overlays for efficient distributed multimedia application support. Self-adaptive CDNs [4] exhibit the ability to adapt their behavior in order to cope with serving heterogeneous multimedia content under unpredictable network conditions. In addition, a distributed content adaptation framework (DCAF) [5] has been proposed for pervasive computing. A similar proposal on Automatic Adaptation of Streaming Multimedia Content [6] makes use of a server-side adaptation engine; the engine reacts to context changes and also facilitates multimedia adaptation in a distributed fashion along the delivery path.

The cache forms the backbone of the content proxy, and hence an architecture that makes its efficient and effective utilization possible is required to exploit maximum benefits. Improved cache management ensures that the images that are accessed frequently by clients are stored in the cache. This decreases the number of adaptation operations involved in transcoding, thereby decreasing average response time. It also increases the cache hit ratio (i.e., the probability that a requested object is found in the cache). In addition, the number of times the server is requested for an image is reduced, saving costly IO time and easing the load on the network.

An important component in the design of any caching mechanism is the cache replacement policy. One of the existing cache replacement policies for transcoding proxies utilizes the Aggregate Effect [7] for determining the image to be removed from the cache. The transcoding graph is constructed based on parameters such as the size of the object version, the reference rate to each version, and the delay in fetching the image version. However, it does not take into account the cache size restrictions or the optimal number of versions that should be kept in the cache, and it only provides a relationship determination for the existing image versions.

Another caching scheme, called PTC [8], describes the working of proxies that transcode and cache for heterogeneous device requirements. A similar graph-based data structure has been used that utilizes the earlier proposed Aggregate Effect along with network parameters and learning capabilities. This scheme again does not take the cache restrictions and the indexing scheme into account.

In addition to the replacement policy, another important factor in the design of caching is the indexing structure to be used. More recent research work in cache indexing has focused on making the tree structure more cache conscious, i.e., performing faster lookups with minimum required memory space. A number of such tree-based indexing schemes, like the CSB-tree [9], CST-tree [10], and CSR-tree [11], have been

formulated for cache management. However, their node structure and tree construction algorithms do not take the cache replacement policy into account. An efficient cache indexing structure on content proxies will have to take into account the transcoding costs between image variations as part of determining the relevancy of any image object resident in the cache.

Our work focuses on combining the cache replacement policy of the content proxy with an adaptation-aware cache indexing scheme in order to arrive at a more holistic method of cache management for content adaptation proxies.

2.1. Content Adaptation Architecture

The architecture we refer to is the one proposed by Jean-Marc Pierson [5] (see Figure 1). The architecture takes into consideration the client profile, network conditions, content profile (meta-data) and available adaptation services (third-party software elements) to arrive at an optimal dynamic adaptation decision.

Figure 1. Content Adaptation Architecture

2.2. Local Proxies:
They intercept user requests and server responses and initiate the transfer of adapted content.

2.3. Content Proxies:
They accept user requests forwarded by the local proxies and retrieve the images from either the local cache or remote content servers. Adaptation is performed in the adaptation engine if the appropriate image version is not present; the result is then transferred back to the local proxies to be delivered to the client.

2.4. ASPs:
Adaptation Service Proxies are the web services that can be deployed on the content proxies to execute the required adaptation operations.

2.5. Profiles:
The architecture provides for: i. device profiles for storing device capability and compatibility information; ii. client profiles for storing user preferences and settings; iii. an Adaptation Service Registry for storing service profiles and locations.

As explained before, the presence of a cache at any point in this architecture increases the productivity of the system. Though caches make sense at both local and content proxies, maximum performance gain can be attained when the cache is used at the independent content proxies that are deployed, as this provides all the aforementioned gains. A second driving force behind this decision is that local proxies are frequently absent in the current scenario of mobile computing.

In the next section, we introduce our content proxy architecture, which incorporates an adaptation-aware cache management system.

3. Content Proxy Architecture

A content proxy that is supplied with a cache can be divided into distinct modules based on the functionalities they serve. The functionalities of the content proxy include: deserialisation of the incoming client request, management of the cache and its indexing, adaptation of images to fulfill the requests, and IO with the server for retrieving required images. In line with these functional requirements, our architecture has been divided into four distinct modules, the layout of which is given in the architecture diagram (see Figure 2).

Figure 2. Content Proxy Architecture

The communication module forms the interface between the client and the content proxy, and is depicted outside the proxy's boundaries to indicate that it is strictly not a proxy component. Instead, it could be merged with the query processor as a single module, and this combination is referred to here as the Query Processor.

3.1. Query Processor:
The function of the query processor is the deserialisation of incoming client requests in order to extract the user parameters for the requested objects. For example, in an image server this might include the picture's identity, resolution, format and other such characteristics. Additionally, this can include device-specific data such as supported data types

and processing constraints. Optionally, these can be obtained from the device profile data structure.

3.2. Adaptation Engine:
This module is in charge of carrying out all the required adaptation operations. This happens in case of a cache miss or of a partial cache hit; in both cases, the proxy has to carry out adaptation operations to satisfy client requests. Existing adaptation operation functions can again be used in our architecture; the general framework presented here is independent of how adaptation is implemented.

3.3. Cache Management Module:
This includes both the indexing mechanism for the cache and the implementation of the cache removal policy. The indexing mechanism is required for fast and orderly access to the cache. The replacement policy is decisive in the cache architecture and a determinant of cache utilization and efficiency.

3.4. Working of the Content Proxy
The Query Processor (QP) and Adaptation Engine (AE) perform the decision making regarding the adaptation operations that need to be performed on the appropriate image objects. The Cache Management module (CM), on the other hand, maintains the cache index and manages replacement operations based on the input parameters obtained from the previous two modules. The overall working of the proxy can be described as follows (a code sketch follows this list):

i. The Communication Interface gets the client request containing identifiers for the client, the image and the device type.
ii. The QP deserializes the query and forwards the parameters to the Adaptation Engine.
iii. The AE fetches the device profile based on the device type specified in the query and decides on the adaptation operations.
iv. The AE determines the expected optimal adapted image parameters and sends them to the QP.
v. The QP forwards them to the CM.
vi. The CM checks whether an image version with the required parameters is present in the cache.
  a. If the image is present, it is fetched and sent to the QP and then to the client via the Communication Interface.
  b. If it is not present, the CM tries to fetch a "similar transcodable" image from the cache, which is then adapted to the appropriate format in the AE and sent to the client.
  c. Otherwise, the CM sends a cache miss message to the QP. The QP then instructs the AE to fetch the image from the remote server for adaptation.
vii. If a cache miss occurred during the recent image request, the image retrieved from the server is added to the cache.
viii. Cache overflow is avoided by removing the least preferable image using the replacement policy.
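The dispatch logic of steps i)-viii) can be sketched as follows. The callables nearest_similar, adapt and fetch are hypothetical stand-ins for the indexing component, the Adaptation Engine and the server IO; they only mirror the module roles named above:

    def handle_request(target, cache, nearest_similar, adapt, fetch):
        # target: descriptor of the requested version; cache: dict of
        # version descriptors to image data (illustrative representation).
        if target in cache:                       # step vi.a: cache hit
            return cache[target]
        source = nearest_similar(cache, target)   # step vi.b: partial cache hit
        if source is not None:
            return adapt(cache[source], target)
        original = fetch(target)                  # step vi.c: cache miss
        image = adapt(original, target)
        cache[target] = image                     # step vii: add to cache
        return image                              # step viii (eviction) is
                                                  # handled by the replacement policy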

4. Adaptation Aware Caching Scheme

Cache efficiency is characterized by:
• the amount of "less important" cached images,
• the size of the allotted memory, and
• the time required for the transmission of data from the remote server to the content proxy.

As the indexing scheme proposed in this paper focuses on being adaptation-aware, the study of the third factor is beyond the scope of this paper. For content adaptation proxies, similar images should be indexed close together. Moreover, the index should carry caching policy information in order to achieve faster image replacement in case of a cache miss. As per convention, the removal policy embedded in the index should favor more frequently adapted objects, which can lead to a reduction in computational overheads; consequently, the response time can be improved.

A cache management algorithm can be analogized to the classic Knapsack problem [12] (a greedy sketch of such a policy follows the definitions below). Consider D as the dataset (more specifically, the image set in our case):

D = {D1, D2, ..., DN}
xi ∈ {0, 1}, where xi = 1 if Di is cached and xi = 0 otherwise.

wi and si are the "relevancy value" and size of Di respectively. The aim of the caching mechanism is to:

a. maximize the cache volume V = Σ_{i=1}^{N} xi wi
b. ensure Σ_{i=1}^{N} xi si ≤ S, where S is the total cache size.

It is clear from the above expressions that the relevancy value w plays a pivotal role in the overall design and implementation of the cache. Keeping in mind the application of caching on content adaptation proxies, we define the relevancy value of the j-th version of image Di as

wi,j = F(si,j, ri,j, di,j, fi,j, ti,j, ni)

where
si,j is the size of the image version,
ri,j is the image version resolution,
di,j is the distance of the image version from its "nearest similar" image,
fi,j is the frequency of access of the image version,
ti,j is the last access time of the image version, and
ni is the number of cached versions of image Di.

From the above notation, it is obvious that a crucial parameter in determining the relevancy value wi,j in a content proxy cache is the distance of an image version from its "nearest similar" image. We define a nearest similar image as another version of the same image that has a higher resolution and from which the target image can be obtained through local adaptation. Arriving at a suitable value for this parameter can result in a reduction in the number of repeated requests to the remote server for the same image version. Some of the other terms used in this paper to describe the caching mechanism are defined below.

Cache hit: when an exact match for the client's request is found in the cache, it is termed a cache hit. This depends on the availability of frequently accessed images in the cache, a direct consequence of the replacement policy.

Cache miss: the requested image is not found in the cache, hence it has to be fetched from the server. This involves IO time and therefore an increased response time.

Partial cache hit: the requested image is not found, but an image which could be an acceptable replacement, or one from which the object could be obtained by adaptation, is present. Here the server need not be accessed and, in the absence of IO time, the access time is considerably lower. Hence the presence of partial cache hits increases performance. This notion is unique to our architecture and the adaptation scenario.

Threshold: this is used to define when a partial cache hit occurs, i.e., the limits of acceptability of cached images and when cached images can be used for adaptation. This is not fixed by us, as the threshold could depend on the processing capability of the deployment system, the promised QoS and other such factors that depend on individual instances. Hence we leave this as a variable parameter.
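Under the knapsack formulation above, a simple greedy replacement policy can be sketched: keep the versions with the highest relevancy per byte until the size budget S is met. The weighting function below is only a placeholder combination of the listed parameters, not the paper's calibrated form of F:

    import time

    def relevancy(size, resolution, distance, freq, last_access, n_versions):
        # Hypothetical F(s, r, d, f, t, n): favour frequently and recently
        # accessed versions that are costly to re-derive (large distance).
        age = time.time() - last_access
        return (freq * distance * resolution) / (size * (1.0 + age) * n_versions)

    def evict_until_fits(entries, budget):
        # entries: list of (version, size, weight); keep best weight per byte.
        entries.sort(key=lambda e: e[2] / e[1], reverse=True)
        kept, used = [], 0
        for version, size, weight in entries:
            if used + size <= budget:
                kept.append(version)
                used += size
        return kept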
5. Cache Access Scenarios

5.1. Cache Hit

Figure 3. Cache Hit

• The request from the client arrives at the Query Processor.
• The Query Processor deserializes the request and passes the parameters to the indexing component of cache management.
• The indexing component of the cache management system searches for the pertaining image.
• The image is found; the result is transferred to the query processor and on to the communication interface, from where it reaches the client.

5.2. Partial Cache Hit

Figure 4. Partial Cache Hit

• The request from the client arrives at the Query Processor.
• The Query Processor deserializes the request and passes the parameters to the indexing component of cache management.
• The indexing component of the cache management system searches for the pertaining image.
• The image is not found, but another version from which it could be transcoded is available in the cache. The existence of such an image is decided using a threshold value.
• The candidate image is sent to the adaptation engine along with the deserialised parameters.
• The image is adapted and transferred to the communication module to be sent to the client.

5.3. Cache Miss

Figure 5. Cache Miss

• The request from the client arrives at the Query Processor.
• The Query Processor deserializes the request and passes the parameters to the indexing component of cache management.
• The indexing component of the cache management system searches for the pertaining image.
• No image that falls within the threshold limits is found in the cache.
• The server is requested for the corresponding image.

  a) If the server responds with an exact match, it is transferred to the client via the communication module.
  b) In case the responded image is not an exact match, the received image is forwarded to the adaptation engine for transformation according to the client request and then sent to the communication interface.
• In the latter case (b), the cache must again be updated with the new adapted image. This is decided using further parameters, based on whether the image or one of its variations is more frequently accessed.

6. Conclusion & Future Work

The proposed architecture for content proxies in adaptation networks incorporates adaptation-aware cache management features. These features lead to an improvement in the performance of content proxies by reducing the number of requests to the remote content server and reducing the computations for adaptation operations, through caching of the most frequently adapted image versions. Our future work focuses on the development of an efficient indexing data structure that in addition incorporates the removal policy. Intertwining these two makes the cache efficient, as there is less overhead. The data structure is also made adaptation aware, in contrast to existing indexing and replacement policies, making it an apt fit for content proxies.

References

[1] O. Layaida, D. Hagimonte, "Dynamic Adaptation in Distributed Multimedia Applications", INRIA, Technical Report, August 2002.
[2] S. Ardon, P. Gunningberg, B. Landfelt, Y. Ismailov, M. Portmann, A. Seneviratne, "MARCH: A distributed content adaptation architecture", International Journal of Communication Systems, 16, 2003.
[3] Khalil El-Khatib, Gregor v. Bochmann, and Abdulmotaleb El Saddik, "A Distributed Content Adaptation Framework for Content Distribution Networks", School of Information Technology & Engineering, University of Ottawa.
[4] G. Jawaheer, J. McCann, "Building a self-adaptive content distribution network", Proceedings of the 15th International Workshop on Database and Expert Systems Applications, 30 Aug.-3 Sept. 2004.
[5] G. Berhe, L. Brunie, J.M. Pierson, "Distributed Content Adaptation for Pervasive Systems", Proceedings of the IEEE International Conference on Information Technology, ITCC 2005, April 4-6, 2005, Las Vegas, Nevada, USA, Vol. 2, pp. 234-241.
[6] A. Hutter, P. Amon, G. Panis, E. Delfosse, M. Ransburg, H. Hellwagner, "Automatic adaptation of streaming multimedia content in a dynamic and distributed environment", IEEE International Conference on Image Processing, ICIP 2005, Volume 3, 11-14 Sept. 2005.
[7] Cheng-Yue Chang, Ming-Syan Chen, "Exploring aggregate effect with weighted transcoding graphs for efficient cache replacement in transcoding proxies", Proceedings of the 18th International Conference, 2002.
[8] A. Singh, A. Trivedi, K. Ramamritham, P. Shenoy, "PTC: Proxies that transcode and cache in heterogeneous web client environments", Proceedings of the Third International Conference on Web Information Systems Engineering, 2002.
[9] J. Rao and K.A. Ross, "Making B+-trees cache conscious in main memory", In SIGMOD, pages 475-486, 2000.
[10] Ig-hoon Lee, Junho Shim, Sang-goo Lee and Jonghoon Chun, "CST-Trees: Cache Sensitive T-Trees", Advances in Databases: Concepts, Systems and Applications, 2007.
[11] Dong, Yu, "CSR+-tree: Cache-conscious Indexing for High-dimensional Similarity Search", 19th International Conference on Scientific and Statistical Database Management (SSDBM), pp. 14-27, 2007.
[12] Asanobu Kitamoto, "Multiresolution Cache Management for Distributed Satellite Image Database Using NACSIS-Thai International Link", Proceedings of the 6th International Workshop on Academic Information Networks and Systems (WAINS), pp. 243-250, 2000.

Rough Set and BP Neural Network Optimized by GA Based Anomaly Detection

REN Xun-yi1, WANG Ru-chuan2, and ZHOU He-Jun3

1 Nanjing University of Posts and Telecommunications, College of Computer, Xinmofan Road, Nanjing 66, China. renxy@njupt.edu.cn
2 Nanjing University of Posts and Telecommunications, College of Computer, Xinmofan Road, Nanjing 66, China. wangrc@njupt.edu.cn
3 Nanjing University of Posts and Telecommunications, College of Computer, Xinmofan Road, Nanjing 66, China. zhouhj@njupt.edu.cn

Abstract: To improve the speed and accuracy of detection, this paper proposes anomaly detection methods based on rough sets and a BP neural network optimized by GA. A rough set is used to reduce the 41 features of the Kddcup'99 data sets to 14 features, and the global searching capability of GA is used to optimize the BP neural network weights. Experiments based on the Kddcup'99 intrusion data demonstrate that the use of rough set reduction and of a BP network optimized by GA not only yields a higher detection accuracy rate but also greatly improves the network's generalization ability.

Keywords: Anomaly detection; Back propagation neural network; Genetic Algorithm; Rough set.

1. Introduction

An intrusion detection system (IDS) inspects all inbound and outbound network activity and identifies suspicious patterns that may indicate a network or system attack from someone attempting to break into or compromise a system [1]. According to the detection technology used, IDSs are divided into two categories: misuse detection and anomaly detection. Misuse detection technology defines the attack features of packets and looks for a specific attack, checking whether its features are already present in the collected data. Anomaly detection defines models of the normal state of the network or host, compares the current state of the network to the models, and looks for anomalies. The core of anomaly detection is how to define the so-called "normal" model. Misuse detection is based on known defects or intrusion features and can accurately detect some attacks with known characteristics, but it is unable to detect unknown attacks. Anomaly detection can detect unknown attacks, which is currently a hot research topic.

Neural networks are capable of learning on historical data (called training) to obtain a decision-making model for anomaly detection. K. Fox first applied neural networks to intrusion detection with the SOM algorithm [2]. David Endler adopted a neural network trained on audit data and obtained a decision-making model for host-based intrusion detection [3]. For the specific procedures, Anup K. Ghosh used labeled input data to train on and study normal and abnormal behavior [4]. C. Jirapummin established a mixed detection model with a neural network [5]. A lot of the literature now studies anomaly detection based on the BP neural network [6], because BP itself adopts the gradient descent method for computing the error and has many advantages, such as parallel processing, nonlinear mapping, adaptability and non-parametric pattern recognition. However, the BP neural network still has some practical problems, including the difficulty of determining the number of neurons, low efficiency in adjusting the network weights, local minimum points, and so on.

Using a BP neural network directly for anomaly detection therefore suffers from the above shortcomings. To improve the speed and accuracy of detection, this paper proposes anomaly detection based on rough sets and a BP neural network optimized using GA. We first use a rough set to reduce the 41 features of the Kddcup'99 data sets to 14, simplifying the BP network structure and improving the convergence of the BP neural network and the establishment rate of the anomaly detection model. Secondly, we use the global search capability of GA to optimize the BP neural network weights, reducing the time needed to find the optimal weights and improving the generalization ability of the model so as to detect anomalies more accurately. The results of experiments based on the Kddcup'99 intrusion data show that the proposed method can improve the performance of anomaly detection with a BP neural network.

2. Kddcup'99 Data Reduction Based on Rough Set

2.1 Rough Set

Rough set [7] is a data mining method for studying data integrity and knowledge uncertainty, proposed by Z. Pawlak in 1982. The basic idea of rough sets is to discover decision rules based on the dependency relationship between the sample attributes and the decision-making attributes of an information system. According to the impact of each attribute on the decision rules, the importance of the attributes is determined: unimportant attributes are eliminated while a certain classification capacity is maintained, so the characteristics (features) of the data are reduced.
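As a concrete illustration of this reduction idea (the formal definitions follow below), a minimal sketch that drops a condition attribute whenever doing so leaves the indiscernibility-based partition of the samples unchanged; the decision table is assumed to be a list of (attribute-tuple, label) pairs:

    def partition(samples, attrs):
        # Group sample indices by their values on the selected attributes.
        groups = {}
        for idx, (values, _) in enumerate(samples):
            key = tuple(values[a] for a in attrs)
            groups.setdefault(key, []).append(idx)
        return sorted(map(tuple, groups.values()))

    def greedy_reduct(samples, attrs):
        # Remove attributes that do not change the partition of the samples.
        reduct = list(attrs)
        full = partition(samples, attrs)
        for a in attrs:
            trial = [x for x in reduct if x != a]
            if trial and partition(samples, trial) == full:
                reduct = trial
        return reduct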

The aim of reduction is to search for B ⊆ A such that the classification of U induced by B is exactly the same as the classification induced by A. Because the training samples carry classification labels (for example, the samples of the Kddcup'99 intrusion data set have 42 dimensions, and the 42nd dimension is the "normal"/"abnormal" label, also called the decision attribute), the decision table is defined as DT = <S, A ∪ D, V, F>, in which A is the set of condition attributes and D is the set of decision attributes.
For the reduction of a rough set, we introduce the B-indiscernibility relation

    IND_I(B) = {(s, s') ∈ S² | ∀a ∈ B, F(s, a) = F(s', a)},

which means that the two samples s and s' cannot be discerned by the attributes in B. Using different attribute subsets B, the sample set is partitioned into condition equivalence classes [s]_B and decision equivalence classes [s]_D.
Building a resolution (discernibility) matrix M, each element is

    M_ij = {a ∈ A | f(s_i, a) ≠ f(s_j, a)}   if [s_i]_D ≠ [s_j]_D,
    M_ij = ∅                                  if [s_i]_D = [s_j]_D,

i.e., M_ij is composed of the attributes whose values differ between the equivalence classes [s_i]_D and [s_j]_D. The entries M_ij that contain exactly one element together constitute the set Core(A). If an attribute set B ⊂ A meets the condition B ∩ Core(A) ≠ ∅ and preserves the classification, we say that B is a reduction of A. In other words, a reduction is a smallest subset of the attributes that can distinguish all objects that can be distinguished with A.
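As an illustration of the discernibility-matrix construction just described (our sketch, not part of the paper's experiments; the toy decision table and the function names are assumed), Core(A) can be computed as follows:

    # Minimal sketch of the discernibility-matrix computation described above.
    # The toy decision table is assumed for illustration; in the paper the
    # table holds Kddcup'99 records (41 condition attributes + 1 decision).
    from itertools import combinations

    # samples: list of (condition_attribute_values, decision_label)
    samples = [
        ((0, 1, 0), +1),
        ((0, 1, 1), -1),
        ((1, 0, 1), -1),
    ]
    attributes = list(range(3))   # condition attribute indices (the set A)

    def discernibility_matrix(samples, attributes):
        """M[i][j] = set of attributes that distinguish s_i from s_j,
        filled only when the two samples have different decision labels."""
        n = len(samples)
        M = [[set() for _ in range(n)] for _ in range(n)]
        for i, j in combinations(range(n), 2):
            (xi, di), (xj, dj) = samples[i], samples[j]
            if di != dj:                      # [s_i]_D != [s_j]_D
                M[i][j] = {a for a in attributes if xi[a] != xj[a]}
        return M

    def core(M):
        """Core(A): union of all singleton entries of the matrix."""
        return {next(iter(e)) for row in M for e in row if len(e) == 1}

    M = discernibility_matrix(samples, attributes)
    print("Core(A):", core(M))   # every reduct B must satisfy B ∩ Core(A) != ∅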
2.2 Reduction of the intrusion data
Kddcup'99 [8] is an experimental data set for data mining that simulates five major categories of attacks (23 kinds in total) in a real network environment. Its 10% subset has 494,021 records, each with 41 features comprising continuous, discrete and text variables. An additional label on each record shows whether it is normal or abnormal. The data set is a typical heterogeneous data set with multiple protocols and multiple attacks.
We selected 30,000 records from the data: the normal data have 12,802 records and the abnormal data 17,198 records (DoS, 16,560 records; Probe, 442 records; R2L, 188 records; U2R, 8 records). Each record contains 41 features of the TCP/IP connection, three of which (protocol_type, service, flag) are text variables. All text features were first converted to numeric variables and, to put the features on an equal footing, all feature variables were normalized. Assume n is the total number of records; feature i of record p, x_pi, is mapped to [-1, +1] by the normalization formula

    x̃_pi = 2 · (x_pi − min(x_i)) / (max(x_i) − min(x_i)) − 1,   i = 1, ..., 41,  p = 1, ..., n,

where min(x_i) is the minimum and max(x_i) the maximum of feature i.
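A minimal sketch of this normalization step (ours; the row-per-record layout is assumed):

    # Sketch of the min-max normalization above: each feature is mapped
    # linearly onto [-1, +1] using its own min and max over all records.
    def normalize(records):
        n_features = len(records[0])
        mins = [min(r[i] for r in records) for i in range(n_features)]
        maxs = [max(r[i] for r in records) for i in range(n_features)]
        out = []
        for r in records:
            out.append([
                2 * (r[i] - mins[i]) / (maxs[i] - mins[i]) - 1
                if maxs[i] > mins[i] else 0.0   # guard against constant features
                for i in range(n_features)
            ])
        return out

    print(normalize([[1.0, 10.0], [3.0, 20.0], [5.0, 30.0]]))
    # [[-1.0, -1.0], [0.0, 0.0], [1.0, 1.0]]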
After normalizing the 30,000 records, we add the 42nd attribute: normal records are marked "+1" and attacks are marked "-1"; this 42nd attribute serves as the decision attribute of the rough set. Using the Rosetta toolkit [9] to reduce the experimental data, the 41 features were reduced as shown in Table 1.

Table 1: TCP Data Reduction

NO   Reduction Feature
1    3,4,6,24,23,24,27,28,31,32,33,36,38
2    3,4,6,18,23,24,27,28,31,32,33,36,38
3    3,4,6,14,23,24,27,28,31,32,33,36,39
4    3,4,6,23,24,27,28,31,32,33,35,36,39
5    3,4,6,23,24,27,28,31,32,33,35,36,38
6    3,4,6,18,23,24,27,28,31,32,33,36,39
7    3,4,6,10,23,24,27,28,31,33,35,36,37,39
8    3,4,6,23,24,27,28,31,32,34,35,36,37,39
9    3,4,6,10,23,24,27,28,31,33,35,36,37,38
10   3,4,6,10,12,18,23,24,27,28,31,32,34,35,36,37,38
11   3,4,6,10,12,14,23,24,27,28,31,32,34,35,36,37,38

Compared with the reduction result of paper [10], our reduction using rough sets yields fewer features and the reduced data are easier to work with. The experiments below show that our reduction maintains accuracy while training faster. The number of features drops from 41 to about 13, 14 or 17. For the BP neural network, a structure built on 41 features needs a weight hypothesis space of 41 × m × 1 (m is the number of hidden-layer nodes), whereas now only 14 × m' × 1 is needed (m' is the new number of hidden-layer nodes); since the hidden-layer size depends on the numbers of input and output nodes (the more input and output nodes, the larger the hidden layer), m' < m. The network structure is therefore clearly simplified, and the experiments in this paper also show that this improves training and convergence speed.

3. Optimizing the BP neural network with GA

3.1 Genetic Algorithms
The genetic algorithm (GA) [11] is a global search algorithm proposed by Holland in 1965. The basic idea of GA is: randomly generate several candidate solutions, called the population, whose elements are individuals (chromosomes) carrying genes in a digital coding; compute a fitness evaluation for each chromosome, eliminate the individuals with low fitness, and select the individuals with high fitness for crossover and mutation to obtain a new generation of the population. Evolving in this way, after a number of iterations, or once the average fitness has converged, the optimal individual is output; this individual is the optimal solution of the problem.
3.2 GA-optimized BP neural network
The genetic BP neural network encodes the BP network weights, randomly generates a population, and uses GA to optimize the population and search for the optimal weight values of the network. The process comprises setting the BP network parameters, generating the population, calculating fitness, and setting the genetic operators.

• Setting the BP network parameters
The BP network parameters include the numbers of neurons of the input, hidden and output layers, and the number of iterations. These parameters are closely related to the problem and the training data. This paper establishes the BP intrusion detection model using a GA-based BP neural network; the input is the Kddcup'99 data and the output is {+1, -1}.
We determine these parameters from the analysis of the problem and of the data reduction. The hidden-layer size is determined according to the empirical formula m = √(n + L) + a, where n is the number of input-layer nodes, L the number of output-layer nodes, and a a constant adjustment. For the 41-dimensional characteristics, 6 hidden nodes are selected. We selected the 8th feature set of Table 1, i.e., 14 input features, for which 4 hidden nodes are used; with or without data reduction, the output layer has one neuron.

• Generating chromosomes and the population
The genetic BP neural network algorithm is designed to optimize the BP network weights, so the chromosome coding is the coding of the network weight values. Once the BP network parameters are set, the connections of the network are determined; according to the BP network layers we obtain the number of weights, and randomly generated weight vectors and threshold values constitute a population, one weight vector per chromosome. Chromosomes can be encoded in binary, decimal and other ways; here we use a binary code. The other parameters are as follows: population size n = 40, mutation rate Pm = 0.1, crossover rate Pc = 0.6, 100 iterations, thresholds and weights initialized in [-25, 25].

• Calculating fitness
Fitness calculation is the core of GA-based BP network optimization; it is what drives the BP network to evolve and converge. Here, fitness refers to the model error, i.e., the comparison of the output of the BP training model with the real label. We set the fitness of individual i to the overall error, the sum over all samples of the squared output errors:

    fit(i) = Σ_s Σ_l (d_l − o_l)²,

where s indexes the input samples, l indexes the output-layer neurons, d_l is the desired output and o_l the actual output.

• Setting the genetic operators
According to fitness, the genetic operators select chromosomes and perform crossover; on the one hand these operators ensure that fine individuals can pass on their genes, and on the other hand they ensure the diversity of the chromosomes. To this end we have chosen a roulette-wheel ("gambling") plus elitist-preservation strategy. Following the formula proposed by Michalewicz [12] for the choice probability of individual p_j, we use

    eval(p_j) = β(1 − β)^(j−1),   β ∈ (0, 1),   j = 1, 2, ..., pop,

where j is the rank of the individual after sorting from good to bad and β is an adjustable parameter. We then retain the best chromosome, select individuals by the roulette wheel, apply crossover with the crossover probability, and apply mutation with the mutation probability.

• Genetic BP neural network algorithm
According to the discussion and analysis above, the genetic BP neural network algorithm is as follows:
Step 1: According to the problem, set the neuron numbers of the BP network input, hidden and output layers, the number of iterations, the population size pop, the crossover probability Pc and the mutation probability Pm.
Step 2: Compute the weight vector from the BP network layers; the bit-width of the weight vector is the length of the binary-coded chromosome. Randomly generate pop weight vectors, together with threshold values, to constitute the initial population.
Step 3: Input the training data, carry out the BP network training, and compute the fitness of each individual i of the population using the fitness formula.
Step 4: Sort the individuals by fitness in ascending order [17] and calculate the choice probability of each individual p_i using the formula above.
Step 5: Retain the best chromosome and choose individuals with the roulette-wheel method.
Step 6: Apply crossover to the chosen individuals with probability Pc and mutation with probability Pm.
Step 7: If the set number of generations or the set error has been reached, output the optimal individual; otherwise go to Step 3.
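To make Steps 1-7 concrete, the following Python sketch (our illustration, not the authors' code: the fitness function is a placeholder, the weights are real-valued rather than binary-coded, and the helper names are assumed) implements the elitist roulette-wheel loop with the rank-based probabilities eval(p_j) = β(1−β)^(j−1):

    import random

    POP, PC, PM, BETA, GENERATIONS = 40, 0.6, 0.1, 0.08, 100
    N_WEIGHTS = 14 * 4 + 4 * 1        # 14-4-1 network; biases omitted for brevity

    def fitness(weights):
        # Placeholder: in the paper this is the summed squared output error
        # fit(i) of the BP network evaluated on the reduced training data.
        return sum(w * w for w in weights)

    def select(pop):
        """Roulette-wheel ("gambling") selection over rank-based probabilities
        eval(p_j) = BETA * (1 - BETA)**(j-1); the best individual has rank 1."""
        ranked = sorted(pop, key=fitness)          # small error = good
        probs = [BETA * (1 - BETA) ** j for j in range(len(ranked))]
        r, acc = random.uniform(0, sum(probs)), 0.0
        for ind, p in zip(ranked, probs):
            acc += p
            if acc >= r:
                return ind
        return ranked[-1]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(ind):
        return [w + random.gauss(0, 1) if random.random() < PM else w for w in ind]

    pop = [[random.uniform(-25, 25) for _ in range(N_WEIGHTS)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        best = min(pop, key=fitness)               # elitism: retain best chromosome
        nxt = [best]
        while len(nxt) < POP:
            a, b = select(pop), select(pop)
            child = crossover(a, b) if random.random() < PC else a[:]
            nxt.append(mutate(child))
        pop = nxt
    print("best fitness:", fitness(min(pop, key=fitness)))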
4. Experimental results and analysis
Experiment 1: performance comparison of reduced versus unreduced data. We use the 30,000 TCP records, with and without reduction, as experimental data to train a standard BP neural network. For the same numbers of iteration steps, the training-time comparison is shown in Table 2; the corresponding error curves are shown in Figures 1-5.

Table 2: Time-Consuming Comparison

Training step   Training time using reduction data   Training time using no reduction data
100             12.1170                              22.5320
300             35.9120                              65.6150
500             59.3560                              110.5990
800             96.3390                              173.7900
1000            119.6120                             217.8840
Figure 1. Error curves with 100 epochs (mean square error vs. epoch: reduced data with rough set (RS+BP) vs. data without reduction (BP))

Figure 2. Error curves with 300 epochs (RS+BP vs. BP)

Figure 3. Error curves with 500 epochs (RS+BP vs. BP)

Figure 4. Error curves with 800 epochs (RS+BP vs. BP)

Figure 5. Error curves with 1000 epochs (RS+BP vs. BP)

Comparing the above five results, it can be seen that the mean square error on the reduced data is slightly larger than the error on the unreduced data, because the data reduction loses a small amount of useful information; but as the number of iteration steps increases, the two convergence curves gradually become the same. Therefore, using rough sets to reduce the intrusion data does not affect the error-convergence performance but does enhance the training speed of the neural network, which allows an anomaly detection model to be established rapidly.
Experiment 2: performance comparison of GA+BP with BP. We randomly selected 1,026 samples from the 30,000 reduced records and trained on them with the standard BP neural network and with the GA-optimized BP network; the mean-square-error curves are shown in Figure 6.

Figure 6. Comparison of the mean square error curves of GA+BP and BP

Figure 6 shows that the error curve of the standard BP network drops gently, while the curve of the GA-optimized BP drops steeply and rapidly, and its final convergence error is smaller than that of the standard BP network. To investigate the generalization ability of the rough set and GA-optimized BP neural network, we tested, on the basis of the two trained models, seven data sets of different sizes; the test results are shown in Figures 7-14.

Figure 7. Test of 82 samples with RS+BP (2 of 82 records misclassified; error rate 2.439%)

Figure 8. Test of 82 samples with GA+RS+BP (1 of 82 records misclassified; error rate 1.2195%)

Figure 9. Test of 338 samples with RS+BP (17 of 338 records misclassified; error rate 5.0296%)
Figure 10. Test of 338 samples with GA+RS+BP (6 of 338 records misclassified; error rate 1.7751%)

Figure 11. Test of 966 samples with RS+BP (59 of 966 records misclassified; error rate 6.1077%)

Figure 12. Test of 966 samples with GA+RS+BP (13 of 966 records misclassified; error rate 1.3458%)

Figure 13. Test of 5226 samples with RS+BP (298 of 5226 records misclassified; error rate 5.7023%)

Figure 14. Test of 5226 samples with GA+RS+BP (86 of 5226 records misclassified; error rate 1.6456%)

In these figures the red star is the expected network output (strictly +1 or -1) and the blue five-pointed star is the actual network output; when the absolute difference between the actual output and the expected output is >= 1, we count a misjudgment, from which we calculate the error rate. The comparison shows that the misclassification rate of GA+RS+BP is much smaller than that of RS+BP. In the above eight maps, the distribution of the blue pentacles of GA+RS+BP is relatively neat and close to the desired output, while the actual outputs of RS+BP are scattered; as the sample number grows, this trend becomes more obvious. This shows that the GA-optimized BP network not only has a higher detection accuracy but also a greatly improved generalization ability.
Experiment 3: influence of the training-sample size on the performance of GA+RS+BP. We use GA+RS+BP to train on samples of different sizes; each trained model is then used to detect a fixed test set of 101 samples. Our aim is to study the influence of the sample size on the detection rate of GA+RS+BP; the experimental results are shown in Table 3.

Table 3: Influence of sample size on detection rate of GA+RS+BP

Training sample   False Positive Probability (%)
25                3.9604
83                1.9802
147               1.9802
296               0.9901
463               0.9901
708               1.9802
1026              0.9901
4453              0.9901

As the number of training samples increases, the network acquires richer "experience" through learning and the error rate drops; once the training set grows beyond a certain size, the detection accuracy stabilizes.

5. Conclusion
This paper uses rough sets to reduce the 41 features of the Kddcup'99 data set to 14, simplifying the BP network structure and improving the convergence of the BP neural network and the speed at which the anomaly detection model is established. Using the global search capability of GA to optimize the BP network weights reduces the time needed to find the optimal weight values and improves the generalization ability of the model, so that anomalies are detected more accurately. Three experiments on the Kddcup'99 intrusion data show that using rough sets for intrusion-data reduction improves the neural network training speed, and thus the speed of building the anomaly detection model, without degrading the error. When rough sets are combined with the standard BP network, the error curve declines gently; when rough sets are combined with the GA-optimized BP, the error curve is relatively steep and the final convergence error is smaller than that of the standard BP. The GA-optimized BP network not only has a higher detection accuracy but also a greatly improved generalization ability. As the number of training samples increases, the joint rough set and GA-optimized BP network gains richer experience through learning and the detection error rate declines; with the test set unchanged, once the training set grows to a certain size, the detection accuracy stabilizes.
References
[1] Webopedia. "Intrusion detection system." [Online]. Available: http://www.webopedia.com/TERM/I/intrusion_detection_system.html. [Accessed: Sept. 12, 2008].
[2] K. Fox, "A Neural Network Approach towards Intrusion Detection," in Proceedings of the 13th National Computer Security Conference, pp. 125-134, 1990.
[3] D. Endler, "Intrusion Detection Applying Machine Learning to Solaris Audit Data," ACSAC, pp. 268-279, 1998.
[4] A. K. Ghosh, "Analyzing the Performance of Program Behavior Profiling for Intrusion Detection," DBSec, pp. 19-32, 1999.
[5] C. Jirapummin, N. Wattanapongsakorn, P. Kanthamanon, "Hybrid Neural Networks for Intrusion Detection System," The 2002 International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC), pp. 928-931, 2002.
[6] J. M. Bonifacio, A. M. Cansian, A. C. Carvalho, "Neural Networks Applied in Intrusion Detection System," in Proc. of the IEEE World Congress on Computational Intelligence (WCCI'98), pp. 205-210, 1998.
[7] Z. Pawlak, "Rough sets," International Journal of Information and Computer Sciences, 11(5), pp. 341-356, 1982.
[8] KDD Cup 1999 Data. [Online]. Available: http://kdd.ics.uci.edu/databases/Kddcup99.htm [Accessed: October 10, 2008].
[9] J. O. Komorowski, "ROSETTA - A rough set toolkit for analysis of data," in Fifth International Workshop on Rough Sets and Soft Computing, pp. 403-407, 1997.
[10] A. Sung, "Ranking Importance of Input Parameters of Neural Networks," Expert Systems with Applications, 15(3), pp. 405-411, 1998.
[11] L. Davis, Handbook of Genetic Algorithms, New York: Van Nostrand Reinhold, 1991.
[12] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, 2nd ed., New York: Springer-Verlag, 1994.

Author Profile

REN Xun-yi received the B.S. in computer science from the University of Science and Technology Beijing in 1998. He received his M.S. degree in Management Science and his Ph.D. degree in computer science (information networks) from Nanjing University of Posts and Telecommunications. His research interests include grid computing, information security and computer software theory.
A modular network interface adapter design for OCP compatible NoCs

Brahim Attia, Abdelkrim Zitouni, Noureddine Abid, Rached Tourki

Laboratory of Electronics and Micro-Electronics (LAB-IT06)
Faculty of Sciences of Monastir, 5019, Tunisia
Attia_brahim@yahoo.fr
[abdelkrim.zitouni, rached.tourki]@fsm.rnu.tn
http://www.fsm.rnu.tn
Abstract: The idea of using on-chip packet-switched networks for interconnecting a large number of IP cores is very attractive for designing complex SoCs. However, much of the real effort and time in using these Networks on Chip (NoC) goes into developing the interfaces that connect cores to the on-chip network. Standardization of the core interfaces, such as OCP (Open Core Protocol), can speed up the development process. In this paper, we present an approach to developing a modular Network Interface architecture for a best-effort NoC. We show how computation can be decoupled from communication, so that IP modules and interconnections can be designed independently from each other. To validate this approach, we use the OCP-IP standard on the IP side and the two most widely used flow-control schemes on the NoC side. We support the most used data types (precise/imprecise bursts and Single Request/Multiple Data). As far as the network is concerned, either handshake or credit-based flow control can be used. Experimental results show that the proposed Network Interface design is characterized by good performance criteria in terms of area, power and speed.

Keywords: OCP, SoC, NoC, NoC Adaptor, NoC Interface.

1. Introduction

A big challenge of current and future chip design is how to integrate components with millions of transistors and make them operate efficiently. System-on-chip (SoC) designs provide such an integrated solution to various complex applications. One of the key issues of SoC design is the communication architecture between heterogeneous components.
Most of the communication architectures in current SoCs are based on buses. However, the bus architecture has its inherent limitations [16], [17], [18]. For today's and next-generation SoC designs, wiring delay, noise, power dissipation, signal reliability and synchronization are far more serious concerns than ever.
A packet-switched network that delivers messages between communicating components has been proposed as the solution for SoC design. Such a network-on-chip (NoC) provides a high-performance communication infrastructure. NoC is a new paradigm for integrating a large number of IP cores in a SoC [1-2]. In the NoC paradigm, a router-based network is used for packet-switched communication among on-chip cores. Since the communication infrastructure as well as the cores of one design can easily be reused in a new product, NoC offers a high degree of reusability.
NoCs are composed of routers, which transport the data from one node to another, links between routers, and NoC Interfaces (NI), which implement the interface to the IP modules.
One of the key components of an on-chip network is the wrapper for the different IP cores in the tiles [6]. Since existing IP cores may not have been developed for the on-chip network, a wrapper is required as the interface between the IP core and its associated router.
The NI must provide services at the transport layer of the ISO-OSI reference model [3], because this is the first layer where the offered services are independent of the network implementation. This is a key ingredient in achieving the decoupling between computation and communication [4, 5], which allows IP modules and interconnects to be designed independently from each other.
There exist a number of socket specifications to this end, such as VCI (Virtual Component Interface) [7], OCP (Open Core Protocol) [8] and AXI (Advanced eXtensible Interface) [9]. Since most NoCs are message-passing by nature, an adapter is needed.
A number of works have been published on the design of novel network architectures, but few publications have addressed the particular issues of designing the adapter module. In [12] an adapter implementing the VCI standard interface was presented for the SPIN NoC. In [13] an adapter implementing standard sockets was presented for the Æthereal NoC; that adapter, however, has quite a high forward latency. In [14] an OCP-compliant adapter for the ×pipes NoC was touched upon; it has a low area but supports only a single outstanding read transaction. In [15] an OCP-compliant adapter for an asynchronous NoC is presented.
In this paper we present an OCP-compliant NoC Adapter (NA) for a 2D mesh NoC. The NA is a key component of a modular SoC design approach. Our contributions include identifying the key issues of NA design and developing an efficient NA architecture. We evaluate the area and performance overheads of implementing the NA tasks for a NoC that uses credit-based or handshake flow control.
The proposed NA is designed as a bridge between an OCP interface and the NoC switching fabric. Its purposes are the
packeting of OCP transactions into NoC flits and vice versa, the computation of routing information, and the buffering of flits to improve performance.
The NA is designed to comply with version 2.2 of the OCP specification and provides read/write-style transactions into a distributed shared address space. In the presented prototype implementation the following OCP transactions are supported: single reads and writes, burst precise (BP) reads and writes, burst imprecise (BI) reads and writes, and burst single request with multiple data transfers (SRMD).
The paper is organized as follows. In Section 2, an overview of the NoC is given. In Section 3 we describe the basic functionality of an NA. Section 4 details the architecture of the proposed NA. Section 5 presents some experimental results. Finally, in Section 6 we conclude the paper.

2. Network-on-chip Overview

NoC topologies are defined by the connection structure of the switches. We are currently developing a NoC based on the 2D mesh topology. The proposed NoC assumes that each switch has a set of bi-directional ports linked to other switches and to an IP core. In the mesh topology used in this work, each switch has a different number of ports, depending on its position with respect to the limits of the network. The use of mesh topologies is justified by the ease of placement and routing. We have adopted a synchronous router with five input/output ports (North, East, Local, South and West), each with a bi-directional exchange bus suitable for a 2D mesh NoC architecture. The NoC includes 20 nodes and the switching technique used is packet switching. The data flow through the network uses wormhole routing [10], chosen for the small number of buffers required per node and the simplicity of the communication mechanism (no re-ordering at the destination resource). In this routing, the header of a packet opens the path between the source and destination units, while the successive data spread along the path of nodes. When the end-of-packet information is received, the packet path is closed, which frees the communication resources for following packets. This routing mechanism is implemented by the network nodes.
The NoC supports either handshake or credit-based flow control; our implementation employs credit-based flow control, which has advantages over handshake. We have adopted source-specified routing: the lowest 2 header bits indicate the output port to which the packet is to be transmitted. Each node uses the two LSB bits and shifts the path-to-target field for the following node. We assign an address to every port. Since going back and forth within a node is forbidden, the incoming packet's port number is used to address the resource port.
Source routing is used to minimize the congestion on some links, and thus reduces the packet latency. The first flit of the packet contains the routing information, and the router uses this "path-to-target" field to decide the correct routing destination. A flit is the smallest flow-control unit of the network. The routing information is encoded in 16 bits, with two bits per routing hop, as shown in Figure 1.

Figure 1. Packet definition

Hence, the routing path is limited to eight hops, H0 to H7; however, a path-extension mechanism can also be introduced to extend the routing path.
A network packet is composed of successive flits. A multi-flit packet is formed by inserting a header flit, which may be followed by one or more data flits (payload). In our case the first and second flits of a packet carry the header information. Each flit is composed of 32 data bits and two control bits, where the 34th bit encodes the beginning-of-packet (BOP) and the 33rd bit encodes the end-of-packet (EOP).
The header is composed of special fields for the network and special fields for the adapters and IP; the fields for adapters and IP will be discussed later.
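As a sketch of the bit layout just described (ours; the actual RTL is VHDL, and only the 32 data bits, the EOP/BOP control bits and the 16-bit, two-bits-per-hop path-to-target field are taken from the text), a flit and a routing path can be packed as follows:

    # Sketch of the 34-bit flit and the 16-bit path-to-target field described
    # above (our illustration of the bit arithmetic, not the design itself).
    EOP_BIT = 1 << 32   # 33rd bit: end-of-packet
    BOP_BIT = 1 << 33   # 34th bit: beginning-of-packet

    def make_flit(data32, bop=False, eop=False):
        flit = data32 & 0xFFFFFFFF
        if bop: flit |= BOP_BIT
        if eop: flit |= EOP_BIT
        return flit

    def path_to_target(ports):
        """Pack up to 8 hops (H0..H7), two bits per output port, H0 in the LSBs."""
        assert len(ports) <= 8
        field = 0
        for i, p in enumerate(ports):
            field |= (p & 0b11) << (2 * i)
        return field   # each node consumes the two LSBs and shifts the field right

    # Example: 3-hop route carried in a header flit.
    route = path_to_target([0b01, 0b10, 0b00])
    header = make_flit(route, bop=True)
    print(hex(header), hex(route >> 2))   # field after the first node's shift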
3. Network Interface Functionality

Our NoC offers a shared-memory abstraction to the IP modules. Communication is performed using a transaction-based protocol, where master IP modules issue request messages (read and write commands at an address, possibly carrying data) that are executed by the addressed slave modules, which may respond with a response message (i.e., a status of the command execution, and possibly data) [11]. We adopt the OCP protocol because it has become an industrial standard.
The way to interface the NoC with a tile is to use an NA, which has the responsibility of packetizing and depacketizing the core's requests and responses. The NA is responsible for (i) receiving the contents from the core interface, preparing the packets and dispatching them to the network logic of the tile, and (ii) receiving the packets from the network logic and presenting the contents to the core interface.
The NA is designed as a bridge between an OCP interface and the NoC switching fabric. Its purposes are the synchronization between OCP and NoC timings, the packeting of OCP transactions into NoC flits and vice versa, the computation of routing information, and the buffering of flits to improve performance. The NA is designed to comply with version 2.2 of the OCP specification. In addition to the core OCP signals, the support includes, for example, the ability to perform posted writes (writes without response) for the first mode (BP), the second (BI) and the last (SRMD). This allows a thorough exploration of bandwidth/latency trade-offs in the design of a system.

4. Network Interface Architecture

We have designed two types of NAs for OCP-based cores in the proposed NoC for each mode, named MNA (Master Network Adapter, attached to system masters) and SNA (Slave Network Adapter, attached to system slaves). A master-slave device needs two NAs, an initiator and a target, for operation. Each NA is additionally split into two sub-modules, one for the request channel and the other for the response channel. These sub-modules are loosely coupled: whenever a transaction requiring a response is processed by the request channel, the response channel is notified; whenever the response is received, the request channel is unblocked. The mechanism currently supports only one outstanding non-posted transaction, but can be extended if an attached core needs this feature.
The OCP protocol functions according to various modes; among them we note the burst precise, burst imprecise and SRMD modes, which may or may not use the handshake signals.
The advantage gained by using burst transfers is that the bandwidth is used more effectively, since it is only necessary to send the starting address together with some information about the burst; the longer the burst, the better the ratio between data and overhead. Another advantage is that the jitter between data flits decreases when a burst header is added to the package, since many data flits can be sent in sequence.
To take advantage of burst transactions, the NA needs to pack a burst into a package for transmission over the network. However, if a very long burst is packed into one package, the burst can block a slave core from receiving requests from other cores.
In OCP there are three different burst models. (i) Precise burst: the burst length is known when the burst is sent. Each data word is transferred as a normal single transfer, where the address and command are given for each data word written or read. (ii) Imprecise burst: the burst length can change within the transaction; MBurstLength gives an estimate of the remaining data words to be transferred. Each data word is transferred as in the precise burst model, with the command and address sent for every data word. (iii) Single request multiple data burst: the command and address fields are sent only once, at the beginning of the transaction. This means that the destination core must be able to reconstruct the whole address sequence from the first address and the MBurstSeq signal.
In [14] and [15], the authors present the implementation details of adapters supporting single reads and writes and single-request burst reads and writes. In our case, we have designed an NA for each burst mode, with both credit-based and handshake flow-control mechanisms.

4.1 Package Format Specifications
The package format is an essential part of designing the NAs, and it is reflected in the implementation of the encapsulation and decapsulation units in the NAs. It has been specified that a package is constructed of flits that are 32 bits wide, and the flits sent on the network carry extra bits to indicate the end of a package. In order to keep the design complexity low, we try to reduce the package types to a minimum; this makes the encapsulation and decapsulation of requests and responses simpler. We define two package types for the NA layer of our NoC protocol stack: i) request packages and ii) response packages. Request packages are always sent from master cores to slave cores; response packages are always sent from slave cores to master cores.

4.1.1 Request package header
The package header contains the vital information needed for the Master NA to issue the request, return the response and manage the network services. The request package header is shown in Figure 2 and spans two flits.

Figure 2. Request package header

The fields in the request package header are:
The field Path to target performs the packet routing from the source unit to a destination unit.
The fields MCmd, MBlength, BSeq, MBprecise and MBsingreq are control signals of the OCP protocol.
The field Destination address defines the "local address" in the slave IP, and Source address defines the global address of the source router in the NoC.
Reserved: reserved for the thread id identifying the OCP transaction.

4.1.2 Response package header
The response package header contains the vital information about the response, as shown in Figure 3.

Figure 3. Response package header

The field Path to target performs the packet routing from the destination unit back to the source unit. Sresp is the response field from the slave to a transfer request. The remaining field is reserved for future expansion of the features and services of the NAs.

4.2 Master Network Adapter
The tasks of the Master NA are to receive requests from the master core, encapsulate each request into a package, transmit the packages to the network, receive responses from the network, decapsulate the responses and transmit them to the master core.
The Master NA architecture is built on two data flows. One is the request data flow, where the core is the source and the network is the destination; the second is the response data flow, where the network is the source and the core is the destination.
In component-based design, a generic wrapper architecture is made of two main parts: the module-specific part, called the Module Adapter (MA), and the network-specific part, called the Channel Adapter (CA); the two parts are interconnected through an internal bus (IB). In our case this approach is used as follows: the module adapter (MA) is a core interface (CI), the channel adapter (CA) is a network interface (NI), and the internal bus (IB) is a FIFO for temporary storage of data.
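To illustrate how a request package header of Section 4.1.1 could be assembled across its two flits (a sketch under assumed field widths: the text fixes only the 16-bit path-to-target field and the two-flit span, so the remaining widths below are illustrative, not the actual design; the Reserved/thread-id field is omitted):

    # Sketch of building the two header flits of a request package.
    def request_header(path_to_target, mcmd, mburst_len, bseq,
                       mbprecise, mbsingreq, dest_addr, src_addr):
        flit1 = (path_to_target & 0xFFFF)        # 16-bit routing field (from text)
        flit1 |= (mcmd & 0x7) << 16              # assumed 3-bit OCP MCmd
        flit1 |= (mburst_len & 0xFF) << 19       # assumed 8-bit MBlength
        flit1 |= (bseq & 0x7) << 27              # assumed 3-bit BSeq
        flit1 |= (mbprecise & 0x1) << 30
        flit1 |= (mbsingreq & 0x1) << 31
        flit2 = (dest_addr & 0xFFFF)             # assumed 16-bit local address
        flit2 |= (src_addr & 0xFFFF) << 16       # assumed 16-bit source address
        return flit1, flit2

    f1, f2 = request_header(0x00A5, mcmd=1, mburst_len=4, bseq=0,
                            mbprecise=1, mbsingreq=0,
                            dest_addr=0x0040, src_addr=0x0003)
    print(hex(f1), hex(f2))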
4.2.1 Request data flow

Within the wrapper of Figure 4, several communications between modules take place; the modules constituting this entity are described as follows:

Routing table: the routing table is a local memory in the Master NA. It stores the route paths to the slave cores in the NoC. A route path is needed as part of the packet header, since all packets are source-routed: all the routing information is stored in the path-to-target field, which gives the routing node output at each hop. The routing table is not globally memory-mapped and cannot be addressed by other cores. The table is configured and its entries are set at NA instantiation time.

Figure 4. Architecture of the MNA request data flow

Header builder: it takes as input some essential OCP signals of the transfer and the field giving the path to the target. It carries out the encapsulation operation and provides the NI request transmit module with the 2 header flits of each package.

Control FIFO: it is responsible for managing writes into the FIFO. It can also stop or suspend writing if it sees a high state on the full signal of the FIFO. When a data word has been written into the FIFO and the FIFO is not full, the controller is ready to accept a new request, so it asserts SCmdAccept.

FIFO-RX: this module provides temporary storage of the data. Writing is performed by the control FIFO; reading is performed by the NI request transmit module.

NI request transmit: it is the synchronizer between the NA and the network. It receives package flits from the header builder or from the FIFO-RX and sends them out of the NA to the network, using the four-phase handshake (push) or credit-based protocol.

OR gate: this component takes two input signals: the first comes from the control FIFO (in the case of a write request) and the second from the header builder (in the case of a read request).

We have designed, for each burst mode, a specific implementation of the header builder and control FIFO, but we use the same implementation for the two types of flow control. For the request data flow using credit-based flow control, the NI request transmit implementation is the same for the three burst modes; likewise, for the request data flow using handshake flow control, the NI request transmit implementation is the same for the three burst modes.

4.2.2 Response data flow

The response data flow is divided into three stages. The first stage is where the data are received by the NI response receive module from the network via the NI. The second stage is the FIFO-TX, where the data are temporarily stored. The third stage is where the data are transmitted to the master core by the OCP response module. In Figure 5, several communications between modules take place: the NI response receive module is the network interface (NI) or channel adapter (CA), the OCP response module is the core interface (CI) or module adapter (MA), and the FIFO is the internal bus (IB). The modules constituting this entity are described as follows:

NI response receive module: it is the synchronizer between the NA and the network. It receives the package flits from the network using the handshake or credit-based protocol and writes them into a FIFO. Upon reception of data on the Data bus, this module stores the data temporarily in the FIFO, to pass them on to the OCP response module. Writing is controlled by two signals, write and full. If the FIFO is full, this module does not raise the ack signal for the router (for credit-based control, the credit must be set to 0). When the last data word is written into the FIFO, the Last Data signal towards the OCP response module is set to 1, so that the latter knows the number of data words remaining in the FIFO.

Figure 5. Architecture of the MNA response data flow

FIFO-TX: the FIFO is a first-in, first-out memory, implemented with asynchronous read and write. When a read or write operation is issued, the operation must first be validated: if the FIFO is empty a read cannot occur, and if the FIFO is full a write cannot occur; an invalid operation is ignored. The FIFO depth and data width are implemented as generics, defined when the module is instantiated.

OCP response module: its task is to transmit the response back to the master core; it handles the response phase of the OCP. This module reads the data from the FIFO and transmits them to the master IP. The passage of the data to the IP is accompanied by other signals, such as Sresp, which indicates the nature of the response. After receiving a data word, the master IP acknowledges it by setting the MRespAccept signal to the high state. If the received data word is the last one, the OCP response module sets the SRespLast signal to the high state. The OCP response module implementation is the same for the three burst modes and for the two implementations using handshake and credit-based flow control; we have designed a specific response receive module for each flow control (handshake and credit-based).
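A small behavioural sketch (ours, not the VHDL) of the credit-based option used by these receive modules: the router may emit one flit per cycle while it holds credits, and a credit is returned whenever the NI frees a FIFO slot.

    from collections import deque

    # Behavioural sketch of credit-based flow control between a router and
    # an NI receive module (our illustration of the scheme described above).
    class CreditReceiver:
        def __init__(self, depth=4):
            self.fifo = deque()
            self.depth = depth
            self.credits_owed = 0          # credits to return to the router

        def can_accept(self):
            return len(self.fifo) < self.depth   # one credit per free slot

        def push(self, flit):
            assert self.can_accept()
            self.fifo.append(flit)

        def pop(self):                     # the OCP side drains the FIFO
            self.credits_owed += 1         # freeing a slot returns one credit
            return self.fifo.popleft()

    rx = CreditReceiver(depth=4)
    sender_credits = rx.depth              # sender starts with one credit per slot
    for flit in range(6):
        while sender_credits == 0:         # back-pressure: wait for credits
            rx.pop()
            sender_credits += rx.credits_owed; rx.credits_owed = 0
        rx.push(flit); sender_credits -= 1
    print(list(rx.fifo))                   # -> [2, 3, 4, 5]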
4.3 Slave Network Adapter

The tasks of the Slave NA are to receive request packages from the network, decapsulate them, transmit the requests to the slave core, receive responses from the slave core, encapsulate the responses and transmit them to the network. The Slave NA architecture is built on two data flows: the request data flow, where the network is the source and the slave core is the destination, and the response data flow, where the slave core is the source and the network is the destination.

4.3.1 Request data flow

The request data flow is divided into five stages. The first stage is where the data are received and decapsulated by the NI receive request module from the network via the NI. The second stage is where the data are buffered by the data FIFO. The third stage is where the data are transmitted to the slave core by the OCP request module via the CI. The fourth stage is where the source address of a read request is buffered by the FIFO @ source. The fifth stage is the routing table.
The request data flow receives the requests sent by the master IP and regenerates the requests for the slave OCP core (memory, I/O controller). In the case of a read request, it provides the path to the source router for the FIFO @ source. Within the request data flow (Figure 6), several communications between modules take place, and the modules constituting this entity are described as follows:

NI receive request module: the role of this block is to extract from the header the various fields necessary for the reformulation of the request and to send these fields to the OCP request module. For a burst write, this block lets a certain number of data words be written into the FIFO; this number of words is defined by the MBurstLength field. For a burst read, there are only the two header flits, so there are no data to receive into the FIFO.

OCP request module: its task is to transmit the request to the slave core via the CI; it handles the request phase of the OCP.

FIFO @ source: this module provides temporary storage of the source addresses issued by the NI receive request module, waiting to be picked up by the NI response transmit module, which belongs to the response data flow, in the case of a read command.

Figure 6. Architecture of the SNA request data flow

Routing table: the routing table is another local memory in the NA. It stores the route paths to the master cores in the NoC. A route path is needed as part of the response packet header, since all packets are source-routed: all the routing information is stored in the path-to-target field, which gives the routing node output at each hop. The table is configured and its entries are set at NA instantiation time.
We have designed, for each burst mode, a specific implementation of the OCP request module, but we use the same implementation for the two types of flow control. For the request data flow using credit-based flow control, the NI receive request module implementation is the same for the three burst modes; likewise for handshake flow control.

4.3.2 Response data flow

After each read command, a response phase always follows the request phase, so a block is needed to manage this response phase; this block is the response data flow.
The response data flow is also divided into three stages. The first stage is where the data and response are received by the OCP response module from the slave core via the OCP interface. The second stage is where the data are buffered by the FIFO. The third stage is where the data are transmitted to the network by the NI response transmit module via the NI.

OCP response module: the OCP response module controls the interface of the slave OCP core. It uses the OCP to receive the response from the IP-OCP slave core and forwards the response data to the FIFO using the write and full signals. Its role is to test the input signal SResp of the slave IP at each rising edge of CLK. If SResp = DVA (Data Valid), the data SData of the IP-OCP are written into the FIFO and the MRespAccept signal, which indicates that the emission block has accepted the response, is set to 1. This controller also tests the input signal SRespLast, which indicates that the response provided by the IP-OCP slave is the last one.
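A compact behavioural sketch of this response capture (ours; the OCP signal names follow the text, and the clocking is reduced to a per-edge loop):

    # Behavioural sketch of the SNA "OCP response module" described above:
    # sample SResp on each clock edge; on DVA, write SData into the FIFO and
    # assert MRespAccept; SRespLast marks the final beat of the response.
    DVA = 0b01   # OCP "data valid/accept" response code

    def capture_response(beats, fifo, fifo_depth=4):
        """beats: iterable of (SResp, SData, SRespLast) sampled per clock edge."""
        for sresp, sdata, srespl in beats:
            mrespaccept = (sresp == DVA) and (len(fifo) < fifo_depth)
            if mrespaccept:
                fifo.append(sdata)
            if srespl and mrespaccept:
                return True            # last response beat accepted
        return False

    fifo = []
    done = capture_response(
        [(DVA, 0x11, 0), (0, 0, 0), (DVA, 0x22, 0), (DVA, 0x33, 1)], fifo)
    print(done, [hex(d) for d in fifo])   # True ['0x11', '0x22', '0x33']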
Figure 7. Architecture of the SNA response data flow

NI response transmit module: this module prepares and transmits the header, reads the response data and sends them out. The encapsulation of the header is done by activating the read source address signal to obtain the path-to-target field, which is transmitted with SResp and the other fields. Then this module sends the response data already stored in the FIFO by activating the Read signal. This wrapper sends all the response data produced by the IP-OCP and stored in the FIFO after each read request.
The OCP response module implementation is the same for all possible configurations on the IP side and the NoC side, but the NI response transmit module is specific to the flow control used.

5. Experimental Results

In this section the synthesis results are presented, and a cost analysis of area and power consumption is made based on them. The SNA's and MNA's performance is evaluated in terms of speed, latency and jitter. We present a comparative study of six NA implementations: the first uses the burst precise mode on the OCP side and handshake flow control; the second uses burst precise (BP) mode and credit-based flow control; the third uses burst imprecise and handshake; the fourth uses burst imprecise (BI) and credit-based; the fifth uses SRMD bursts and handshake; and the last uses SRMD bursts and credit-based flow control. NAs with 32-bit OCP data fields and 32-bit network ports have been modeled in VHDL at the RTL level. They were simulated and synthesized using the ModelSim [6] and ISE 10.1 [7] tools respectively. The proposed NAs were prototyped on a Xilinx Virtex-5 FPGA device, XC5VLX30, which has a capacity of 19,200 slice registers and 220 bonded IOBs.
Tables 1 and 2 show the area of the Master NAs and Slave NAs for the six implementations. The power consumption results are shown in Tables 3 and 4, and the speed results in Table 5. The maximum operating frequency obtained for these NA implementations is about 520 MHz. The latency measured by simulation of the Master NAs and Slave NAs is presented in Tables 6 and 7. Table 8 shows the jitter obtained by simulating the two versions of the NAs.

5.1 Cost analysis

NA area: the size of the NAs is an important metric because it facilitates calculating the interconnection overhead introduced by the NoC. As a Slave NA and/or a Master NA should be instantiated for each IP core connected to the network, the area should be small compared to the IP cores. Table 1 shows that the area of the credit-based NA implementation is smaller than that of the handshake NA implementation.
Table 1 presents the area produced by the synthesis of the MNA with BP, BI and SRMD, showing the FPGA resources used. The area occupied by the BI mode is larger than that of the other modes, while SRMD occupies the smallest area; this holds for both handshake and credit-based flow control.

Table 1: Synthesis results for Master NA area

The BI mode requires a counting operation and more tests; its finite state machine is more complicated than those of the other modes. SRMD mode, on the other hand, requires no counting operation and fewer tests than the others.
When using an IP in BP or SRMD mode, the minimum area is obtained with handshake rather than credit-based control; when using an IP in BI mode, the minimum area is obtained with credit-based control rather than handshake.

Table 2: Synthesis results for Slave NA area

Table 2 presents the area of the Slave NAs. The area occupied by the SNA using handshake is larger than with credit-based control; the areas of BP and BI are equal in the credit-based case.
The two queues of the MNA are 4 words deep, and we note that for this NI instance a large part of the area is consumed
by the FIFOs (82% of NSR, 30% of NSLUT and 32% of LutFF).
Power consumption: power consumption has two main components, dynamic and static. The dynamic component is given by

    P_dyn = α · C · V_dd² · f,

where α is the switching activity of the circuit, C the parasitic capacitance, V_dd the supply voltage and f the operating frequency. The dynamic component is a quadratic function of V_dd, and the sub-threshold (static) component is an exponential function of V_t, which makes it critical for coming deep-submicron technologies.
The power consumption results are produced by the ISE Xilinx tool (XPower) and are based on an estimation in which the frequencies on all input ports are set to 200 MHz and the switching activity is estimated automatically by XPower.
Tables 3 and 4 show the power estimates of the Master and Slave NAs for the three burst modes, with handshake and credit-based flow control.
When using handshake for the Master NA, the BI mode is the lowest while the SRMD mode is the highest; when using credit-based control, SRMD is the lowest and BP and BI are the same.

Table 3: Master NA power estimate

In handshake, BP is the lowest and SRMD the highest; with credit-based control, BI is the highest and the others are lower.

Table 4: Slave NA power estimate

This is not a very precise estimation. To generate a good estimation of the power consumption, a switching-activity estimation for all input and output signals would have to be conducted. It would also be interesting to evaluate the power results in terms of energy used per transaction. On the other hand, one way to reduce the power consumption is to apply clock gating to idle components.
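A one-line numeric illustration of the dynamic term (arbitrary placeholder values, not measurements):

    # Numeric illustration of P_dyn = alpha * C * Vdd^2 * f (placeholder values).
    alpha, C, Vdd, f = 0.15, 2e-12, 1.0, 200e6   # activity, farads, volts, hertz
    print(alpha * C * Vdd**2 * f, "W")            # -> 6e-05 W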
5.2 Performance study

Speed: the speed of the Slave NA and the Master NA is given as the maximum frequency at which the designs can run, computed as fm = 1/C, where C is the longest combinatorial delay in the design, i.e., the critical path.
Table 5 shows the speed of the Master and Slave NAs for the three burst modes used by the OCP IPs, with handshake and credit-based flow control. The speed of the credit-based Master NA is lower than that of the handshake one. The BI mode is faster than the other modes; the gain obtained in handshake mode is 21.5% compared to credit-based. BP and SRMD have the same speed with credit-based control. For the MNA, the maximum operating frequency obtained is about 520 MHz, in BI mode with handshake.

Table 5: Speed of the two implementations

When using handshake for the Slave NA, the BI mode is the fastest and SRMD the slowest; with credit-based flow control, the three modes have approximately the same speed. For the SNA, the maximum operating frequency obtained is about 381 MHz, in BI mode with handshake.

Latency: for the Master Network Adapter, the latency of a write or read request transaction is defined as the number of cycles needed by the request data flow from the time the request is presented at the OCP interface to the time the first flit of the packet is sent out of the NA. The latency of a read response transaction is defined as the number of cycles needed by the response data flow from the time the response packet is presented at the local port of the router to the time the first response is presented at the OCP interface.

Table 6: Master NA latency results

Table 7: Slave NA latency results
For the Slave NA, the latency of a write or read request transaction is defined as the number of cycles needed by the request data flow from the time the request packet is presented at the local port of the router to the time the first request is presented at the OCP interface. The latency of a read response transaction is defined as the number of cycles needed by the response data flow from the time the response is presented at the OCP interface to the time the first flit of the response packet is sent out of the NA.
Master latency: for 4-word read and write requests, we note that the latencies are identical for the two flow-control modes. However, the latency of the BI mode is large compared to the others, because we count the number of words to read before sending the header to the NI request transmit module. The equation below gives the minimal read-request latency of the BI mode, in clock cycles:

    Minimal BI read-request latency (cycles) = N + 1,

where N is the number of read requests provided by the OCP IP and 1 is the cycle needed for the packet header to be transmitted to the NI request transmit module. In handshake flow control, the read-response latency is the same for the three modes, because they use the same architectural structure (the response data flow). In the credit-based case, the read-response latency varies because it depends on the relative frequencies of the OCP IP and the router: for instance, if the OCP IP period equals the router period, the read-response latency is 7 cycles, whereas if the OCP IP period equals 2.5 router periods, the read-response latency is 3 cycles.
Slave latency: in handshake, the three modes have the same latency for the different transaction types. In the credit-based case, the three modes have different latencies due to the algorithm used in the OCP request module for each mode. For credit-based control, the read-response latency is the same for the three modes because they use the same architectural structure (the response data flow).
Credit-based control allows the Slave NA to receive a new flit on every clock edge, while handshake requires more than two cycles; therefore, the transaction latencies with credit-based control are almost half those of handshake.
We have measured the request latency from the moment a request is issued by the master OCP IP at the Master NA's CI until it is received by the slave OCP IP at the Slave NA's CI. The minimal request latency can therefore be described as:

    Request latency = L_MNA + L_SNA + Preq − 2,

where:
• L_MNA is the read- or write-request latency of the Master NA, measured in Table 6 for the request data flow;
• L_SNA is the read- or write-request latency of the Slave NA, measured in Table 7 for the request data flow;
• Preq is the size of the request packet.
We have measured the response latency from the time the response is issued by the slave OCP IP at the CI of the Slave NA until the master OCP IP receives the response at the CI of the Master NA; the procedure is similar to the request-latency measurements. The minimal response latency can therefore be described as:

    Response latency = L_SNA + L_MNA + Pres − 1,

where:
• L_SNA is the read-response latency of the Slave NA, measured in Table 7 for the response data flow;
• L_MNA is the read-response latency of the Master NA, measured in Table 6 for the response data flow;
• Pres is the size of the response packet.
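As a worked example of the two formulas (a sketch with placeholder numbers, not the measured values of Tables 6 and 7):

    # Worked example of the end-to-end latency formulas above; the component
    # latencies and packet sizes are placeholders for illustration only.
    def request_latency(l_mna, l_sna, preq):
        return l_mna + l_sna + preq - 2    # Request latency = L_MNA + L_SNA + Preq - 2

    def response_latency(l_sna, l_mna, pres):
        return l_sna + l_mna + pres - 1    # Response latency = L_SNA + L_MNA + Pres - 1

    # e.g. a 6-flit request packet (2 header flits + 4 data words):
    print(request_latency(l_mna=3, l_sna=4, preq=6))    # -> 11 cycles
    print(response_latency(l_sna=4, l_mna=3, pres=5))   # -> 11 cycles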
Credit based allows the Slave NA to receiving in each clock
edge a new flit while handshake requires more than two 6. Conclusion
cycles. Therefore, the transactions latency of credit based are
almost half of handshake. In this paper, we describe a new network interface
We have measured the request latency from a request is architecture that supports in IP/NA. A subset of OCP signal
issued by the Master IP OCP on the Master NA’s CI to it is which offers high-level services at a low cost interface are
received by the Slave IP OCP on the Slave NA’s CI. The used. The MNA and SNA are conforms to the OCP
minimal Request latency can therefore be described as: specification and provides read/write-style transactions into
Request latency =L MNA + L SNA + Preq-2 a distributed shared address space. In the presented
In this equation: prototype implementation the following OCP-transactions
• L MNA is the latency of read or writes request of are supported: single reads and writes, burst precise,
Mastrer NA measured in table 6 in request data imprecise, SRMD reads and writes. Six implementations
flow. were done using a handshake and credit based control flow.
• L SNA is the latency of read or writes request of The result shows that the area of credit based NA
Slave NA measured in table 7 in request data flow. implementation is smaller than handshake NA
• Preq is the size of request packet. implementation. The maximum operating frequency
We have measured the response latency from the time obtained for these two NA implementations is about 520
response is issued by the Slave IP OCP in the CI of the MHz.
Slave NA to the time Master IP OCP receives the response The speed of credit based NA master is lower than the
in the CI of the Master NA. The procedure is similar to the handshake one. The latency and jitter for credit based is
request latency measurements. lower than handshake implementation. This result can help
The minimal Request latency can therefore be described as: the NoC designer in choosing of control mode of flow which
Response latency=L SNA +L MNA +Pres-1 corresponds to the constraints of its application in terms of
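As a purely hypothetical illustration (the values below are not taken from Tables 6 and 7): with L MNA = 3 cycles, L SNA = 4 cycles and a request packet of Preq = 5 words, the minimal request latency would be 3 + 4 + 5 − 2 = 10 cycles; with a response packet of Pres = 5 words, the minimal response latency would be 4 + 3 + 5 − 1 = 11 cycles.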
Jitter: we measured the time flow between data-words in the burst on the NoC side (the delta time between two data-words). The jitter was measured for a write request, a read request and a read response, and the results are shown in Table 8; it corresponds to the time it takes to perform the synchronization in the NI. The disadvantage of the four-phase handshake mechanism is that it requires at least two clock cycles to be carried out and shows at least two cycles of jitter. On the other hand, it is simple to set up. As long as the receiver is free, credit-based flow control requires one clock cycle to transmit a data word with zero cycles of jitter. These results are the same for BP, BI and SRMD.

Table 8: Jitter result

6. Conclusion

In this paper, we describe a new network interface architecture that supports IP/NA integration. A subset of OCP signals, which offers high-level services at a low interface cost, is used. The MNA and SNA conform to the OCP specification and provide read/write-style transactions into a distributed shared address space. In the presented prototype implementation the following OCP transactions are supported: single reads and writes, and precise, imprecise and SRMD burst reads and writes. Six implementations were realized using handshake and credit-based flow control. The results show that the area of the credit-based NA implementation is smaller than that of the handshake NA implementation. The maximum operating frequency obtained for these two NA implementations is about 520 MHz.

The speed of the credit-based NA master is lower than that of the handshake one, while the latency and jitter of the credit-based implementation are lower than those of the handshake implementation. These results can help the NoC designer in choosing the flow control mode that corresponds to the constraints of the application in terms of area, speed, latency and jitter. These NAs can be reused to design NoCs that use the same configuration.
Authors Profile

Brahim Attia was born in Sousse, Tunisia, on October 9, 1979. He received the Master degree in Electrical Engineering from the Faculty of Sciences of Tunis, Tunisia, in 2007. He is currently engaged in research for his Ph.D. degree in the EµE laboratory with the physics department of the Faculty of Sciences of Monastir. His research interests include the synthesis of interfaces for NoC and the design of on-chip communication infrastructures for hierarchical networks.

Abdelkrim Zitouni was born in Gabès, Tunisia, on October 6, 1970. He received the D.E.A., the Ph.D. and the HDR degrees in Physics (Electronics option) from the Faculty of Sciences of Monastir, Tunisia, in 1996, 2001 and 2009 respectively. Since 2002 he has served as an Assistant Master in Electronics and Microelectronics with the physics department of the Faculty of Sciences of Monastir. His research interests include communication synthesis for SoC and asynchronous system design.

Rached Tourki was born in Tunis on May 13, 1948. He received the B.S. degree in Physics (Electronics option) from Tunis University in 1970, and the M.S. and Ph.D. degrees in Electronics from the Orsay Electronic Institute, Paris-South University, in 1971 and 1973 respectively. From 1973 to 1974 he served as a microelectronics engineer at Thomson-CSF. He received the Doctorat d'état in Physics from Nice University in 1979. Since then he has been Professor in Microelectronics and Microprocessors with the physics department of the Faculty of Sciences of Monastir. His research interests include digital signal processing and hardware-software codesign for rapid prototyping in telecommunications.
3D Face Recognition using Combination Rule for Horizontal and Vertical Face Contour

Wei Jen Chew, Kah Phooi Seng and Li-Minn Ang

The University of Nottingham, Faculty of Engineering,
Jalan Broga, 43500 Semenyih, Selangor Darul Ehsan, Malaysia
Wei-Jen.Chew@nottingham.edu.my, Jasmine.Seng@nottingham.edu.my, Kenneth.Ang@nottingham.edu.my
Abstract: In this paper, the effect of using either horizontal or vertical face contours for 3D face recognition was investigated. First, the face contours were extracted. Then, two different methods, the contour distance method and a proposed contour angle method, were applied to the face contours. A combination rule was also proposed to help combine the matching values obtained. Simulation results show that the proposed angle method is the better option. Besides that, by combining both horizontal and vertical contours, a better recognition rate can be achieved. This proves that the combination rule proposed is a feasible method. The results in this paper show that a good recognition rate can be obtained without the use of complex calculations.

Keywords: Face contours, Combination rule, Distance method, Angle method.

1. Introduction

Face recognition is an area of research that has been explored for many years. It is one of the popular biometric systems that can be used for security purposes. An automatic face recognition system is able to identify unknown people in a crowd without the intervention of humans. This makes it an important security tool and therefore warrants extensive research to help create a robust automatic face recognition system.

Earlier works focused on automatic face recognition using 2D images [1][2]. This is because only 2D images were easily available at that time. Although promising results have been obtained, it was observed that 2D images still suffer from two main problems, which are pose and illumination changes [3].

Therefore, focus then shifted to using 3D images for recognition purposes. This has only become feasible in the last few years, since improvements in technology have finally made it possible for 3D images to be easily captured. The advantage of using 3D images is that they are not affected by pose and illumination changes [3]. This is because 3D images rely on the depth values, which are the distances of the face from the camera, for recognition purposes, thus avoiding the illumination problem. This differs from 2D images, which use intensity values that change under different lighting. Besides that, 3D images have the advantage of being able to be rotated and translated, hence solving the pose change problem since they can be manipulated into any required pose, unlike the static 2D images.

A common method for 3D face recognition is to perform surface matching. First, two faces are aligned together using Iterative Closest Point (ICP) [4], which basically rotates and translates the two faces until the best possible fit is obtained. Then, the surface distance difference between the two faces is calculated. The face with the shortest distance would be the match. However, this can be a time-consuming method since the whole face surface contains many data points. Besides that, using ICP to align every pair of faces can also take up time as well as be inefficient.

Hence, another option is to use either the face contour lines or the profile lines to perform face recognition. The contour lines can be obtained horizontally or vertically from the face, while the profile lines are based on the outer shape of the face. The advantage of using contour or profile lines for recognition is that they contain fewer data points. Therefore, processing time would be faster.

In this paper, the effectiveness of using horizontal and vertical face contours for recognition purposes is investigated. An angle method is proposed to perform face matching and this method is compared with the distance method to determine which method is better. Finally, a combination rule is proposed to combine recognition rates obtained from different methods or contours to obtain an improved recognition rate.

Section 2 reviews previous works using face contours, while Section 3 discusses the contour extraction method. Section 4 discusses the matching methods used, Section 5 discusses the proposed combination rule, Section 6 describes the different combinations of methods investigated, Section 7 presents the results and discussion, and finally Section 8 concludes the paper.

2. Background

The contours of a face can be obtained horizontally or vertically. A vertical face contour is obtained by slicing an upright face vertically level by level. These iso-contours are then compared with each other to determine the identity of the person. Chua et al. [5] proposed a point signature method to obtain facial curves. Firstly, a point on the face, usually the nose tip, is chosen. Next, a sphere of a predetermined radius is then created. Aligning the sphere
centre and the nose tip, the shape that is created from the intersection between the sphere and the face would be the contour wanted. The iso-geodesic curve is another type of curve that can be obtained from a 3D face [6]. For this method, the curve is obtained by selecting all points that have the same geodesic distance from the nose tip. Sometimes, a combination of different curves is used to perform recognition. Jahanbin et al. [7] used iso-geodesic and iso-depth curves to perform recognition while Mpiperis et al. [8] used point signatures and iso-contours to look for a match. Generally, matching is usually achieved by calculating the contour distance difference.

Besides using vertical face contours, horizontal face contours can also be used to differentiate faces. Lee et al. [9], Wu et al. [10], Ebers et al. [11] and Li et al. [12] used different combinations of horizontal and vertical facial profiles to determine the identity of various faces. The vertical profiles used are not similar to the vertical contours used in the earlier discussed methods; instead, they are the vertical face profiles through the nose tip. Since it is easier to slice the face horizontally and vertically level by level, it was decided that this paper would propose a method that is able to determine the identity of an unknown face by only using its horizontal and vertical face contours.

3. Face Contour Extraction

In this paper, it is proposed that face matching is achieved by using the contours of the face. These contours can either be obtained horizontally or vertically. This is unique to 3D images since they have the depth value. Basically, the contour map obtained is similar to those found in a world terrain map. To extract these contours, there are two main steps. The first step is to determine which face level the contour will be extracted from, while the second step is to convert the extracted contour into useful data. The methods used are further explained in the following subsections.

3.1 Contour Level Extraction

Selecting a suitable contour is important to obtain a good recognition rate. For the vertical type, contours near the nose tip or near the ears may not be suitable since they do not seem to have much differentiation from one face to another. Hence, in this paper, it was decided that the vertical contours would be extracted from 5 levels, which are 20, 25, 30, 35 and 40 levels below the nose tip. As for the horizontal type, 50 levels above and below the nose tip were used, with an interval of 10 between each level. This is because the face around this area contains many shape changes horizontally.

Before extracting the contours, the first step to perform is to align the 3D faces in the same direction. This is to ensure that consistent face contour slices can be obtained from each face. Alignment is performed by locating the nose tip, determining the face angle and then rotating the face to a standard angle [13]. For this paper, all the faces are set to be frontal facing before the next step of contour extraction is performed.

At each level, a contour outline of the face can be obtained, as shown in Figure 1 for vertical contours and Figure 2 for horizontal contours. However, this outline actually consists of a set of data points in a matrix and is not arranged in an orderly fashion. This means that the first data point could be representing a point on the lower face while the second data point could be representing a data point at the top of the face. Therefore, the next step is to arrange the data obtained to enable it to be easily used.

3.2 Contour Data Selection

The matrix containing the contour data points consists of many data points, but not all of them are needed for face matching purposes. In this paper, it is proposed that only half the vertical face contour is required for the matching algorithm. This is because the human face is usually quite symmetrical and hence using only one side should not make much difference in the matching process. By only selecting half the face, the proposed method is able to reduce the amount of data by half. For the horizontal face contour, only the front half of the face contour is used.

Figure 1. Example of vertical face contours

Figure 2. Example of horizontal face contours

Since the data in the vertical contour matrix is not arranged in an orderly manner, to obtain only half of the vertical face contour points, a threshold needs to be set. Only data that is over a certain value would be selected. Hence, while aligning the face earlier, the nose tip should be aligned to position (0,0,0). Therefore, when choosing data points from the vertical contour matrix, only data points with x values more than zero would be chosen.

Next, the group of face contour points is sorted so that they are in order from one end of the face to the other end. This is achieved using Equations (1)-(5):

y1 = min(Cy > Yi)   (1)
y2 = max(Cy < Yi)   (2)
mc = (y1 − y2) / (x1 − x2)   (3)

Cc = y1 − mc·x1   (4)

xc = (Yi − Cc) / mc   (5)

where Cy is the set of y values of a face contour and Yi is the y level investigated.
The vertical contour data points are spread out as shown in Figure 1, while the horizontal contour data points are arranged similar to the contours in Figure 2 when they are flipped 90°. To create the standard contour matrix, the first step is to determine the contour x value for a fixed set of y values. It is not possible to extract these x values directly from the original contour matrix since there might not be an x value that has the exact y value. Hence, at every predetermined y value, an estimate of the x value is obtained.

Firstly, the closest y values from the contour matrix above and below a predetermined Yi value are obtained using Eq. (1) and Eq. (2). Next, the gradient mc and the intercept Cc of the line joining y1 and y2 are determined using Eq. (3) and Eq. (4). Finally, the corresponding xc value for the predetermined Yi value is obtained using Eq. (5). By applying these steps to every face contour, a standard matrix is obtained. This enables further processing of the contour values to be performed more efficiently.
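The resampling described by Eqs. (1)-(5) can be summarized in a short sketch. The Python fragment below is illustrative only; the function name, array layout and the choice of predetermined Yi values are assumptions, not taken from the paper.

```python
import numpy as np

def standard_contour(contour_xy, yi_values):
    """Resample a half-face contour at predetermined y levels, Eqs. (1)-(5).

    contour_xy : (N, 2) array of unordered contour points (x, y).
    yi_values  : 1-D array of predetermined y levels Yi.
    Returns the estimated x value at each Yi.
    """
    cx, cy = contour_xy[:, 0], contour_xy[:, 1]
    xc_values = []
    for yi in yi_values:
        above, below = cy > yi, cy < yi
        # Eqs. (1)-(2): closest contour points above and below Yi
        i1 = np.where(above)[0][np.argmin(cy[above])]   # y1 = min(Cy > Yi)
        i2 = np.where(below)[0][np.argmax(cy[below])]   # y2 = max(Cy < Yi)
        x1, y1 = cx[i1], cy[i1]
        x2, y2 = cx[i2], cy[i2]
        mc = (y1 - y2) / (x1 - x2)        # Eq. (3); assumes x1 != x2
        cc = y1 - mc * x1                 # Eq. (4): intercept of joining line
        xc_values.append((yi - cc) / mc)  # Eq. (5): estimated x at Yi
    return np.array(xc_values)
```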
4. Contour Matching Method

Once the standard matrix for the face contours is obtained, the next step is to determine their match. In this paper, two different matching algorithms are used. The first algorithm is a distance matching method while the second algorithm is the proposed angle matching method. These two algorithms are further discussed below.

4.1 Distance Matching Method

A common method used to determine a match between contours is to calculate the distance difference between them. This is because similar faces would have similar contours while different people would have different faces. Therefore, a mean distance is calculated to determine the face differences using the equation shown in Equation (6) [14]:
µ = (1/N) Σ d_i,  i = 1, …, N   (6)

where µ is the mean distance, d_i is the contour distance and N is the number of distances used.
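A minimal sketch of Eq. (6), assuming the standardized contours are stored as arrays of x values sampled at the same y levels (names are illustrative):

```python
import numpy as np

def mean_contour_distance(probe_x, database_x):
    """Eq. (6): mean of the point-wise distances between two
    standardized contours sampled at the same y levels."""
    d = np.abs(np.asarray(probe_x) - np.asarray(database_x))
    return d.mean()
```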
Database faces that have a smaller mean distance to the probe face have a higher possibility of being a match. This is because if both contours have a similar shape, the distance at every part of the contour should be small, hence a small mean distance is obtained. However, if the faces are not similar, the face contours would be different, causing the mean distance to be larger.

4.2 Proposed Angle Matching Method

The face contours extracted for this paper have a difference of either 5 or 10 levels between them, since it is not practical to obtain the face contours for every level of the face. This also helps reduce the amount of data points used for recognition purposes. However, it is believed that the relationship between levels can help in the recognition process. Hence, it is proposed that the angle between two different levels be calculated to help determine a face match. This method consists of finding the angle between two subsequent levels of the probe and of each database image, and then calculating the angle difference between the probe and database image. The angle is calculated using Equation (7):

θ = arctan(L / D)   (7)

where θ is the angle between two levels, L is the height difference between the two levels and D is the horizontal distance between the data points of the two different levels, as shown in Figure 3.

Similar levels need to be compared for each probe and database face, and similar faces should have similar angles compared to different faces. Hence, similar faces will have a smaller angle difference compared to different faces.

Figure 3. Angle calculation between 2 different levels
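Equation (7) and the subsequent comparison can be sketched as follows (a hedged illustration; np.arctan2 is used so that a zero horizontal distance D is handled safely, and all names are assumptions):

```python
import numpy as np

def angle_between_levels(x_upper, x_lower, level_gap):
    """Eq. (7): angle between two contour levels, where level_gap is the
    height difference L and the x values give the horizontal distance D."""
    d = np.abs(np.asarray(x_upper) - np.asarray(x_lower))
    return np.arctan2(level_gap, d)      # theta = arctan(L / D)

def angle_difference(probe_angles, db_angles):
    """Mean absolute angle difference between probe and database face."""
    return np.mean(np.abs(np.asarray(probe_angles) - np.asarray(db_angles)))
```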
5. Proposed Combination Rule

Although it is proposed that either the distance or the angle method is used on either the horizontal or the vertical contour for recognition purposes, it was also decided that a combination of the results be used to further improve the recognition rate. This is because sometimes a certain method might work better for one face compared to another, while for a different face the reverse might happen. Therefore, to let the different methods complement each other, a combination of the results should be used. However, combining the values obtained from the distance and angle methods directly would not be practical since both methods calculate different things. Hence, a combination rule is proposed in this paper. First, the values of all the database faces are sorted and converted to a normalized score. If there are 80 faces in the database, then the scores will span from 1 to 80. Next, the scores for two different methods are compared and the new score for that database face would be the minimum score of both methods. This is because it is more difficult to accidentally obtain a
close match than a mismatch, which can easily occur due to problems like misalignment. Therefore, the minimum score is chosen in the combination rule. However, this may cause some database faces to have the same score. Therefore, the next step is to set up a rule to solve this problem.

The proposed rule states that if two database faces have the same combination score, then the original scores from the two combined methods will be examined. The database face with the smaller score difference, if this difference is more than or equal to 10, will have its combination score increased by 1, while the database face with the bigger score difference will keep the original combination score. However, if the database face with the smaller score difference has a difference of less than 10, then the combination score of this database face will remain the same, while the database face with the bigger score difference will have a new combination score which is increased by 1 from the original combination score. This is because if both methods give almost similar scores, then the combination score of that database face is considered correct and the score for the other database face is changed. However, if the two methods produce a big difference in score for a database image, then the combination score for the bigger difference would be assumed to be correct. This is because the method that produced the higher score does not work for this particular database face, hence following the score of the other method would be correct.

Since one of the database faces has its combination score increased by 1, the rest of the faces in the database will also have their combination scores increased by 1 to ensure that the sorting sequence of all the database faces remains the same. Finally, once all of the combination scores are determined to have no repetition, the scores are re-sequenced to ensure that the combination scores span from 1 to 80 again, since there might have been some skips due to the additions of 1 in the earlier step.
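A simplified sketch of the rule is given below. The half-point bump used for ties is an implementation shortcut: it produces the same final ordering as the "increase by 1 and shift the remaining faces" step described above once the scores are re-sequenced. Names and the pairwise-tie assumption are illustrative, not taken from the paper.

```python
import numpy as np

def combine_scores(score_a, score_b):
    """Illustrative sketch of the Section 5 combination rule.

    score_a, score_b : rank scores (1..N) of each database face under two
    methods.  Combined score = minimum of the two; ties are separated with
    the score-difference rule, then re-sequenced so they span 1..N.
    """
    score_a, score_b = np.asarray(score_a), np.asarray(score_b)
    combined = np.minimum(score_a, score_b).astype(float)
    diff = np.abs(score_a - score_b)
    for s in np.unique(combined):
        idx = np.where(combined == s)[0]
        if len(idx) == 2:                       # pairwise tie, as in the text
            small, big = idx[np.argsort(diff[idx])]
            if diff[small] >= 10:
                combined[small] += 0.5          # smaller difference ranks lower
            else:
                combined[big] += 0.5            # bigger difference ranks lower
    # Re-sequence so the combined scores span 1..N again
    final = np.empty(len(combined), dtype=int)
    final[np.argsort(combined, kind="stable")] = np.arange(1, len(combined) + 1)
    return final
```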
6. Investigated Methods

In this paper, 8 different methods are investigated to determine which contour type, which method, as well as which combination is able to produce the best results for face matching. These methods are summarized in Table 1.

Table 1: Investigated methods

Method   Horizontal Contour   Vertical Contour   Distance Method   Angle Method
1        √                                       √
2                             √                  √
3        √                                                         √
4                             √                                    √
5        √                    √                  √
6        √                    √                                    √
7        √                                       √                 √
8                             √                  √                 √

In this paper, the horizontal face contours are extracted from each face starting at 50 levels above the nose tip and ending at 50 levels below the nose tip, with a 10-level interval between each level. For the vertical contours, 5 levels, starting from 20 levels below the nose tip and ending at 40 levels below the nose tip, were used. For this type of contour, there was an interval of 5 levels between each level. The skip in levels is to help reduce the amount of data points used to represent the face. The combinations investigated include combining 2 methods for a single type of contour and combining 2 contour types for a single method.

7. Results and Discussions

In this paper, the simulation was performed on a subset of the UND 3D face database [15][16]. It was determined that the database set created would consist of 3 faces per person. Therefore, a person chosen as a probe should have 4 different images in the UND database. Hence, the subset chosen consists of 61 probe people and 80 database people, each having 3 face images, making a total of 240 database images.

Firstly, the face contours, either vertical or horizontal depending on which method is being investigated, were extracted for each probe and training face. Then, the contour information for each face was selected and sorted into a matrix. This data selection step was performed to ensure that each face has a matrix of equivalent size which contains corresponding x values at similar y value locations. Next, the horizontal distance or angle at each y value in the matrix was calculated between each probe and the database training set. The average distances or angle differences were then calculated to determine which face in the database is most similar to the probe image. Since there are 3 images in the database set per person, only the lowest average distance value or the smallest angle difference value would be considered for each database person. The Rank 1 recognition rate results obtained are shown in Table 2.

Table 2: Rank 1 recognition rate using 4 different methods

Method                       Rank 1 Recognition Rate
Horizontal Distance Method   70%
Vertical Distance Method     59%
Horizontal Angle Method      77%
Vertical Angle Method        75%

From Table 2, it is observed that the horizontal contours produce better recognition rates compared to vertical contours. Therefore, this indicates that the horizontal contour shapes are more unique and can be differentiated from one another better compared to vertical shapes. However, using the angle method, it is observed that horizontal and vertical contours produce almost similar results, indicating this method is suitable for both types of contours. Besides that, it is also observed that the angle method produces better recognition rates when compared to the distance method. Therefore, this shows that the angle is able to differentiate faces better.

From the breakdown of the results obtained, it is observed that some methods work better for some faces while other
methods work better for other faces. Hence, a combination rule was proposed, as explained in Section 5, and the results obtained are shown in Table 3.

Table 3: Rank 1 recognition rate using the proposed combination rule

Method                                   Rank 1 Recognition Rate
Horizontal + Vertical Distance Method    70%
Horizontal + Vertical Angle Method       88%
Horizontal Distance + Angle Method       77%
Vertical Distance + Angle Method         69%

By comparing Table 2 and Table 3, it is observed that recognition rates improved when using a combination of horizontal and vertical contours for both methods. However, combining the two different methods on the same type of contour does not seem to give much improvement. Therefore, this proves that vertical and horizontal contours are able to complement each other to obtain better recognition rates.

From the simulations performed, it is observed that the angle method surpasses the distance method and that the use of both horizontal and vertical contours gives better recognition rates. Therefore, in this paper, the best recognition rate was obtained by combining the recognition rates obtained from the horizontal and vertical contours using the angle method. At 88% for Rank 1, this recognition rate is very high considering that no training or complex methods were used. Only simple distance and angle measures were used to match the faces together.

8. Conclusion

In this paper, the use of horizontal and vertical face contours for face recognition was investigated. Matching was performed using either a distance method or a proposed angle method. Besides that, a proposed combination rule was used to combine the results obtained from two different methods to produce higher recognition rates. Simulation results show that horizontal contours are able to produce better recognition rates compared to vertical contours. Besides that, the proposed angle method is able to produce higher recognition rates compared to the distance method. Results also show that the proposed combination rule is able to help improve the recognition rates. By combining the vertical and horizontal contours, better recognition rates are achieved, since it was observed that using the vertical contour works for some faces and fails for other faces, while the opposite happens when the horizontal contour is used instead. Hence, the best recognition rate was obtained by combining the results obtained from the angle method on the horizontal contour with the results from the angle method on the vertical contour. This also proves that good recognition rates can be obtained without the use of complex methods.

References

[1] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, 3(1):71-86, Mar. 1991.
[2] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Trans. Pattern Analysis and Machine Intelligence, 19(7):711-729, Jul. 1997.
[3] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1-15, January 2006.
[4] P. Besl and N. McKay, "A method for registration of 3-D shapes," IEEE Trans. Pattern Analysis and Machine Intelligence, 14(2):239-256, 1992.
[5] C. S. Chua and R. Jarvis, "Point signatures: a new representation for 3D object recognition," International Journal of Computer Vision, 25(1):63-85, 1997.
[6] S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using iso-geodesic surfaces," IRCDL 2007, pp. 111-116, 2007.
[7] S. Jahanbin, H. Choi, Y. Liu, and A. C. Bovik, "Three Dimensional Face Recognition Using Iso-Geodesic and Iso-Depth Curves," 2nd IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS 2008), pp. 1-6, 2008.
[8] I. Mpiperis, S. Malasiotis, and M. G. Strintzis, "3D face recognition by point signatures and iso-contours," Proceedings of the Fourth IASTED International Conference on Signal Processing, Pattern Recognition, and Applications, pp. 328-332, February 14-16, 2007, Innsbruck, Austria.
[9] Y. Lee, H. Song, U. Yang, H. Shin, and K. Sohn, "Local feature based 3D face recognition," International Conference on Audio- and Video-based Biometric Person Authentication, LNCS, vol. 3546, pp. 909-918, 2005.
[10] Y. Wu, G. Pan, and Z. Wu, "Face Authentication based on Multiple Profiles Extracted from Range Data," Proc. 4th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA'03), Lecture Notes in Computer Science, vol. 2688, pp. 515-522, June 9-11, 2003.
[11] O. Ebers, T. Ebers, T. Spiridonidou, M. Plaue, P. Beckmann, G. Bärwolff, and H. Schwandt, "Towards robust 3D face recognition from noisy range images with low resolution," Preprint series of the Institute of Mathematics, Technische Universität Berlin, Report 33-2008.
[12] C. Li and A. Barreto, "Profile-Based 3D Face Registration and Recognition," Lecture Notes in Computer Science, vol. 3506, pp. 484-494, 2005.
[13] W. J. Chew, K. P. Seng, and L.-M. Ang, "New 3D Face Matching Technique for an Automatic 3D Model based Face Recognition System," Journal of Software Engineering, Vol. 3, No. 3, pp. 24-34, Jan-Mar 2009.
[14] E. W. Weisstein, "Arithmetic Mean," from MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/ArithmeticMean.html
[15] P. J. Flynn, K. W. Bowyer, and P. J. Phillips, "Assessment of time dependency in face recognition: An initial study," Audio and Video-Based Biometric Person Authentication, pp. 44-51, 2003.
[16] K. Chang, K. W. Bowyer, and P. J. Flynn, "Face recognition using 2D and 3D facial data," ACM Workshop on Multimodal User Authentication, pp. 25-32, December 2003.
Authors Profile

Wei Jen Chew received her Bachelor of Engineering degree (with honours) in the field of Electrical and Computer Systems and her Master of Electrical and Computer Systems Engineering from Monash University in 2005 and 2007 respectively. She is currently a Research Assistant and PhD student at The University of Nottingham, Malaysia Campus.

Kah Phooi Seng received her PhD and Bachelor degree (first class honours) from the University of Tasmania, Australia in 2001 and 1997 respectively. Currently, she is a member of the School of Electrical & Electronic Engineering at The University of Nottingham Malaysia Campus. Her research interests are in the fields of intelligent visual processing, biometrics and multi-biometrics, artificial intelligence and signal processing.

Li-Minn Ang completed his Bachelor of Engineering and Ph.D. at Edith Cowan University in Perth, Australia in 1996 and 2001 respectively. He then taught at Monash University before joining The University of Nottingham Malaysia Campus in 2004. His research interests are in the fields of signal, image, vision processing and reconfigurable computing.
Lips Detection using Closed Boundary Watershed and Improved H∞ Lips Tracking System

Siew Wen Chin, Kah Phooi Seng, Li-Minn Ang and King Hann Lim

The University of Nottingham, School of Electrical and Electronic Engineering,
Jalan Broga, Semenyih, Selangor 43500, Malaysia.
{keyx8csw, Jasmine.Seng, Kenneth.Ang, keyx7khl}@nottingham.edu.my
Abstract: The audio-visual speech authentication (AVSA) system, which offers a user-friendly platform, is extensively growing for ownership verification and network security. The front-end lips detection and tracking is the key to make the overall AVSA a success. In this paper, a lips detection system using a closed boundary watershed approach and an improved H∞ lips tracking system is presented. The input image is first segmented into regions using the watershed algorithm. The segmented regions are then sent for lips detection formed by the cubic spline interpolant lips colour clustering. An improved H∞ tracking system based on the Lyapunov stability theory (LST) is then designed to predict the lips location in the succeeding image. The proposed system possesses the advantage of casting off the preliminary face localization before the lips detection. Moreover, the image processing time is further reduced by only processing the image within an adjustable small window around the predicted point instead of screening the full size image throughout the sequence of images.

Keywords: Audio-visual speech authentication, lips detection and tracking, watershed, H∞ filtering, Lyapunov stability theory.

1. Introduction

With the aggressive growth of computer and communication network technology, the security of multimedia data transmitted and retrieved over open networks has drawn extensive attention [1, 2]. Multimodal biometric authentication approaches [3], especially audio-visual speech authentication [4], which offers an inconspicuous and user-friendly platform, are booming as a solution for ownership verification.

Dealing with audio-visual speech processing, the front-end lips detection and tracking system is a crucial process to make the overall system a success. Numerous lips detection approaches have been published in the past [5-7]. Jamal et al. [5] proposed lips detection in the normalized RGB colour scheme, where the normalized image is first segmented into skin and non-skin regions using histogram thresholding. The lips region is then detected from the skin pixels. Furthermore, lips region segmentation using multi-variate statistical parameter estimators, combining connected component analysis and some post-processing, was presented by B. Goswami et al. [6]. The face region is first segmented and the lower half of the face is extracted to classify skin and non-skin regions. The lip contour is obtained by further applying the connected component analysis and post-processing on the non-skin region. Besides, Lewis et al. [7] proposed pixel-based red exclusion lips feature extraction; the log scale of the ratio of the green over the blue colour space is suggested as the threshold to extract the lips area and its features.

From the aforementioned methodologies, it is noticed that most of the proposed systems require face detection as a pre-requisite procedure [5, 6]. Furthermore, some of the appearance-based lips segmentation approaches [7] do not offer closed-boundary segmentation, which might yield the loss of some crucial information for further visual speech analysis. In this paper, an automatic lips detection system based on the watershed approach without the preliminary face localization is proposed. The lips region, which possesses the closed-boundary characteristic, is directly segmented from the input image by casting off the face detection process.

For the purpose of enhancing the efficiency of the overall lips detection system, a lips tracking system is adopted into the system. The coordinates of the successfully detected lips region are passed to the improved H∞ tracking system to predict the lips location in the succeeding incoming image. The improved H∞ filtering based on the LST is designed to give a better tracking ability compared to the conventional Kalman and H∞ filtering. The improved H∞ possesses the LST characteristic where the tracking error asymptotically converges to zero as time approaches infinity, since the LST ensures the tracking system is always in a stable condition and has strong robustness with respect to bounded input disturbances [8].

After obtaining the predicted location from the aforementioned improved H∞ tracking system, the subsequent lips detection process would only be focused within the small window set around the predicted location. The area of the small window is adjustable to suit the circumstance where the subject is moving towards the detecting device and the lips size gradually increases. The growth of the lips region when the subject is moving forward would cause the lips to exit a fixed window and yield a loss of information for further visual analysis. If the prediction from the tracking system is inaccurate and the lips region is unable to be retrieved, full size image processing would restart again. The overview of the proposed watershed lips detection and the improved H∞ tracking system is illustrated in Figure 1. This paper is organized as follows: Section 2 introduces the proposed watershed
lips segmentation, while Section 3 discusses the lips detection and verification process. Subsequently, the proposed improved H∞ based on LST and an adjustable window for lips tracking are presented in Section 4. Some simulation results and analysis are shown in Section 5, followed by the conclusion in Section 6.
Figure 1. The overview of the lips detection and tracking system.
2. Watershed Lips Segmentation

The watershed algorithm is one of the popular image segmentation tools [9, 10] as it possesses the capability of closed boundary region segmentation. The watershed concept is inspired by topographic studies which split the landscape into several water catchment areas. Referring to the watershed transform, a grayscale digital image is treated as a topographic surface. Each of the pixels is situated at a certain altitude level according to its gray level, where black (intensity value 0) corresponds to the minimum altitude while white (intensity value 255) represents the maximum altitude. The other pixels are distributed at a particular level between these two extremes.

The watershed algorithm used in this paper is based on the rain-flow simulation proposed in [11]. The algorithm applies the falling rain concept, where the drops fall from a higher altitude to the minimum region (known as the catchment basin) following the steepest descent path. After the watershed, the image is divided into several catchment basins, each created by its own regional minimum. Every pixel is labeled with a specific catchment basin number as the outcome of the watershed transformation.

Although the watershed algorithm offers closed boundary segmentation, this approach nevertheless might encounter the over-segmentation problem. The total number of segmented regions might increase to thousands even though only a small number of them are required. The over-segmentation matter is due to the existing noise in the input image as well as the sensitivity of the watershed algorithm to gradient image intensity variations. Dealing with this matter, the input image is first passed to non-linear filtering for denoising purposes. Median filtering is chosen as it is able to smooth out the observation noise from the image while preserving the region boundaries [12]. Subsequently, the extracted marker and the detected edge (obtained using Sobel filtering) from the filtered image are first superimposed and only then sent for the watershed segmentation. The marker-controlled watershed only allows the local minima allocated inside the generated marker, which hence reduces the redundant catchment basins built from the undesired noise. The foreground and background markers are generated by obtaining the regional maxima and minima using the morphological techniques known as "closing-by-reconstruction" and "opening-by-reconstruction". The purpose of this morphological clean-up is to remove the undesired defects and obtain flat minima and maxima in each object.

For the rain-flow watershed algorithm implemented in this section, 8-way connectivity is applied, where each pixel is connected to eight possible neighbours in the vertical, horizontal and diagonal directions. Each pixel points to the minimum value among its eight neighbours and is labeled according to its direction. If no neighbour holds a value lower than the current pixel, the pixel turns into a regional minimum. Each regional minimum forms its own catchment basin and every pixel falls into a single minimum according to the steepest descending path. The region boundaries are formed by the edges which separate the basins. The details of the rain-flow watershed algorithm can be found in [11].

After the watershed, region merging is applied to further reduce the over-segmentation by merging the catchment basins which have similar intensity values. If the ratio of the mean colours between two neighbouring regions is less than the predefined threshold, the respective regions are merged to become a single region. The process is repeated until no region pair satisfies the merging criterion. The process flow of the watershed transformation on the digital image is depicted in Figure 2.

Figure 2. The overview of the watershed lips segmentation.
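The pipeline can be approximated with standard building blocks. The sketch below is not the paper's rain-flow implementation [11]: it uses OpenCV's flooding watershed, approximates the reconstruction-based marker extraction with plain opening/closing, and omits the region-merging step; all names are assumptions.

```python
import cv2
import numpy as np

def watershed_segment(image_bgr):
    """Median filter -> morphological marker extraction ->
    marker-controlled watershed, loosely following Figure 2."""
    denoised = cv2.medianBlur(image_bgr, 5)              # non-linear denoising
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    cleaned = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)
    # Foreground markers from the flat maxima of the cleaned image
    _, sure_fg = cv2.threshold(cleaned, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    sure_bg = cv2.dilate(sure_fg, kernel, iterations=3)
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1                                # background label 1
    markers[(sure_bg == 255) & (sure_fg == 0)] = 0       # unknown band to flood
    # Marker-controlled watershed; region boundaries are labelled -1
    return cv2.watershed(denoised, markers.astype(np.int32))
```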
3. Lips Detection and Verification

After attaining the segmented image from the previous watershed transformation process, the output is passed to the lips detection system to obtain the lips region. The respective watershed segmented regions are checked against the pre-trained lips colour cluster boundary, where only a region that falls within the boundary is classified as the lips region. The overall lips detection and verification system is shown in Figure 3.
The Asian Face Database (AFD) [13] is used to train the lips colour cluster boundary. 6 sets of 6x6-dimension lips areas are cropped from every subject in the database, and a total of 642 data sets are collected for the clustering process. The collected data is first converted from the RGB into the YCbCr domain. To avoid the luminance matter, only the chrominance components Cb and Cr are used. The Cb and Cr components are plotted onto the Cb-Cr graph. Only the heavily plotted pixel values are taken as the lips colour cluster; the final lips colour clustering after the morphological closing process is illustrated in Figure 4(a).

Figure 3. Lips detection and verification system.

Figure 4. (a) Lips and (b) skin colour clustering with cubic spline interpolant boundary.

Subsequently, the generated lips colour cluster is encompassed by using the cubic spline interpolant (CSI) as formulated in (1)-(2) to create the lips colour boundary. The CSI lips colour boundary is then saved for the further lips detection process, where a segmented region from the previous watershed transformation which falls into the boundary is detected as the lips region.
T(x) = T_k(x)  if y_k ≤ y ≤ y_{k+1},  for k = 1, 2, …, m−1   (1)

where T_k is the third-degree polynomial defined as:

T_k(x) = a_k(x − x_k)³ + b_k(x − x_k)² + c_k(x − x_k) + d_k,  for k = 1, 2, 3, …, n−1   (2)
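The chrominance conversion and the boundary test can be sketched as follows. The BT.601 coefficients are standard, but the spline knots below are invented placeholders, and enclosing the cluster with a left and a right spline is an assumption about how the CSI boundary of Eqs. (1)-(2) is applied.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def rgb_to_cbcr(r, g, b):
    """ITU-R BT.601 chrominance components of an RGB pixel."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

# Hypothetical boundary knots traced around the trained cluster; a left
# and a right spline together enclose the lips colour region.
cr_knots = np.array([135.0, 145.0, 155.0, 165.0])
cb_left = CubicSpline(cr_knots, np.array([100.0, 95.0, 93.0, 96.0]))
cb_right = CubicSpline(cr_knots, np.array([120.0, 125.0, 128.0, 124.0]))

def inside_lips_cluster(cb, cr):
    """A pixel belongs to the lips cluster if its Cb value lies between
    the two spline boundaries evaluated at its Cr value."""
    if cr < cr_knots[0] or cr > cr_knots[-1]:
        return False
    return cb_left(cr) <= cb <= cb_right(cr)
```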
If more than one region is detected after the aforementioned lips detection process, a further lips verification system is triggered to obtain the final lips region. The detected region from the watershed transformation which also falls onto the face region is then denoted as the final lips region. The face region is detected using a similar methodology as the lips detection.

The skin colour cluster boundary is generated by cropping 20x20-dimension skin areas from every subject in the AFD; the cluster after the morphological process with the CSI boundary is depicted in Figure 4(b).

4. Improved H∞ Based on Lyapunov Stability Theory and an Adjustable Window for Lips Tracking System

Referring to Figure 1, after successfully detecting the lips region as described in the previous section, the current lips coordinates are passed to the improved H∞, which works as the predictor to estimate the lips location in the succeeding incoming image. Subsequently, a small window is localized at the predicted coordinates, and the subsequent watershed lips segmentation and detection process is only focused within this small window region, rather than repeatedly processing the full size image for the entire video sequence. The full size image screening is gone through once again if the lips region fails to be detected. With the aid of lips tracking, the image processing time for the overall lips detection system is reduced, which is hence a credit for the hardware implementation.

4.1 The Adjustable Window

Instead of applying a fixed small window, an adjustable window is applied in this section. This is due to the reason that a fixed window could only deal with a subject who has a horizontal movement in front of the lips detection device. The gradually increasing lips size when the subject moves towards the device would fail to be fully covered by the fixed small window, and this might cause the failure of the subsequent watershed segmentation as well as the detection process. The exited lips region might yield the loss of some important information from the detected lips region for further analysis such as the visual speech recognition process. The drawback of the fixed small window is illustrated in Figure 5, which shows the failure to entirely cover the lips region when the subject moves towards the detection device.

Figure 5. The problem of the fixed small window when the subject is moving forward as in (b) and (c).

4.2 The Improved H∞ for Lips Tracking System

The improved H∞ filtering [21] based on LST for the lips tracking system is elaborated below. A linear, discrete-time state and measurement equation is denoted as:

State equation: x_{n+1} = A x_n + B u_n + w_n   (3)
Measurement equation: y_n = C x_n + v_n   (4)
where x represents the system state and y is the measured output. A is the transition matrix carrying the state value x_n from time n to n+1, B links the input vector u to the state variables, and C is the observation model that maps the true state space to the observed space; w_n and v_n are the respective process and measurement noises.

The state vector for the lips tracking in (3) consists of the centre coordinates of the detected lips in the horizontal and vertical positions. A new adaptation gain for the H∞ filtering is implemented based on the LST; the design concept follows [13]. According to the LST, the convergence of the tracking error e(n) under the newly designed adaptation gain is guaranteed: it asymptotically converges to zero as time approaches infinity.

Theorem 4.1: Given a linear parameter vector H(n) and a desired output d(n), the state vector x(n) is updated as

x(n) = x(n−1) + g(n) α(n)   (5)

The adaptation gain of the improved H∞ filtering, which has the characteristic Δv < 0, is designed as

g(n) = ( H(n) / (||H(n)||² ||x(n−1)||²) ) [α(n) − (I − L(n)) e(n−1)] x(n−1)   (6)

where

L(n) = [I − γQP(n) + Hᵀ(n)V⁻¹H(n)P(n)]⁻¹   (7)

P(n+1) = F P(n) L(n) Fᵀ + W   (8)

and γ, Q, W and V are the user-defined performance bound and the weighting matrices for the estimation error, process noise and measurement noise respectively.

The a priori prediction error α(n) is defined as

α(n) = d(n) − Hᵀ(n) x(n−1)   (9)

The tracking error e(n) asymptotically converges to zero as the time n heads to infinity.
Proof: To design a tracking system which fulfills the LST, the Lyapunov function v(n) is first defined as

v(n) = e²(n)   (10)

Referring to the LST, if and only if Δv(n) = v(n) − v(n−1) < 0, the selected v(n) is denoted as a true Lyapunov function [13].

The difference between v(n) and v(n−1) is as follows:

Δv(n) = v(n) − v(n−1) = e²(n) − e²(n−1)   (11)

By substituting the adaptation gain from (6) into (11), Δv(n) < 0 is obtained, which completes the proof.   (12)
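The predict-and-crop loop of this section can be sketched as below. This is illustrative only: a naive constant-velocity prediction stands in for the improved H∞ gain of Eqs. (5)-(9), and the window-growth policy and all names are assumptions.

```python
def track_lips(frames, detect_lips, base_window=64, margin=1.2):
    """Predict the lips centre on each incoming frame and run detection
    only inside an adjustable window around the prediction (Section 4).

    detect_lips(image) -> (cx, cy, w, h) in the coordinates of the image
    it is given, or None when no lips region is found.
    """
    state = None                            # (cx, cy, vx, vy)
    for frame in frames:
        if state is None:                   # (re)start: full-frame screening
            det = detect_lips(frame)
        else:
            cx, cy, vx, vy = state
            px, py = cx + vx, cy + vy       # one-step prediction
            half = int(base_window * margin) // 2
            x0, y0 = max(0, int(px) - half), max(0, int(py) - half)
            det = detect_lips(frame[y0:y0 + 2 * half, x0:x0 + 2 * half])
            if det is not None:             # map back to frame coordinates
                det = (det[0] + x0, det[1] + y0, det[2], det[3])
        if det is None:
            state = None                    # prediction failed: rescan next frame
        else:
            cx, cy, w, h = det
            vx = 0.0 if state is None else cx - state[0]
            vy = 0.0 if state is None else cy - state[1]
            state = (cx, cy, vx, vy)
            base_window = max(base_window, w, h)   # adjustable window growth
        yield det
```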
region is passed to the lips tracking system which is going to
analyze in this section to predict the lips location of the
subsequent incoming image.
As to evaluate the tracking capability of the implemented
improved H∞ filtering, some in-house video clips were prepared using a Canon IXUS-65 camera. The video sequences were first converted into 256x256 images. Figure 9 shows some of the tracked lips locations from the image sequences. Table 1 shows the average estimation error of the lips tracking system for the in-house video sequences, measured on every 5th frame and every 10th frame.
Table 1: Average estimation error

                         Improved H∞               Conventional H∞
Average estimation       Every 5th   Every 10th    Every 5th   Every 10th
error (no. of pixels)    frames      frames        frames      frames
y-position               2.61        4.87          3.35        6.14
x-position               7.08        14.50         9.28        18.29
Figure 9. (a) The first input image (b) Watershed segmentation on the full size image (c) Detected lips region.

6. Conclusion

A lips detection system based on the closed boundary watershed and an improved H∞ lips tracking system is presented in this paper. The proposed system enables direct lips detection without the preliminary face localization procedure. The watershed algorithm, which offers closed-boundary segmentation, gives better information for further visual analysis. Subsequently, the improved H∞ filtering is implemented to keep track of the lips location in the succeeding incoming video frames. Compared to the conventional H∞, the improved H∞, which fulfills the LST, shows a better tracking capability. With the aid of the tracking system and the adjustable small window, the overall image processing time can be reduced since only a small window of the image is processed to obtain the lips region instead of repeatedly processing the full frame throughout the entire video sequence. The overall proposed system could then be integrated into an audio-visual speech authentication system in the future.

References

[1] N. Bi, et al., "Robust image watermarking based on multiband wavelets and empirical mode decomposition," IEEE Transactions on Image Processing, vol. 16, pp. 1956-1966, 2007.
[2] S. Dutta, et al., "Network Security Using Biometric and Cryptography," in Advanced Concepts for Intelligent Vision Systems, 2008, pp. 38-44.
[3] V. K. Aggithaya, et al., "A multimodal biometric authentication system based on 2D and 3D palmprint features," in Biometric Technology for Human Identification V, Orlando, FL, USA, 2008, pp. 69440C-9.
[4] G. Chetty and M. Wagner, "Robust face-voice based speaker identity verification using multilevel fusion," Image and Vision Computing, vol. 26, pp. 1249-1260, 2008.
[5] J. A. Dargham and A. Chekima, "Lips Detection in the Normalised RGB Colour Scheme," in Information and Communication Technologies (ICTTA '06), 2nd, 2006, pp. 1546-1551.
[6] B. Goswami, et al., "Statistical estimators for use in automatic lip segmentation," in Visual Media Production (CVMP 2006), 3rd European Conference on, 2006, pp. 79-86.
[7] T. W. Lewis and D. M. W. Powers, "Audio-visual speech recognition using red exclusion and neural networks," Journal of Research and Practice in Information Technology, vol. 35, pp. 41-64, 2003.
[8] K. P. Seng, et al., "Lyapunov-theory-based radial basis function networks for adaptive filtering," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 49, pp. 1215-1220, 2002.
[9] J. Cousty, et al., "Watershed Cuts: Minimum Spanning Forests and the Drop of Water Principle," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, pp. 1362-1374, 2009.
[10] E. Hodneland, et al., "Four-Color Theorem and Level Set Methods for Watershed Segmentation," International Journal of Computer Vision, vol. 82, pp. 264-283, 2009.
[11] V. Osma-Ruiz, et al., "An improved watershed algorithm based on efficient computation of shortest paths," Pattern Recognition, vol. 40, pp. 1078-1090, 2007.
[12] N. Gallagher, Jr. and G. Wise, "A theoretical analysis of the properties of median filters," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 29, pp. 1136-1141, 1981.
[13] I. M. Lab, "Asian Face Image Database PF01," Pohang University of Science and Technology.

Authors Profile

Siew Wen Chin received her MSc and Bachelor degrees from the University of Nottingham Malaysia Campus in 2008 and 2006 respectively. She is currently pursuing her Ph.D. at the same campus. Her research interests are in the fields of image and vision processing, multi-biometrics and signal processing.
Kah Phooi Seng received her Ph.D and Bachelor degrees from the University of Tasmania, Australia in 2001 and 1997 respectively. She is currently an Associate Professor at the University of Nottingham Malaysia Campus. Her research interests are in the fields of intelligent visual processing, biometrics and multi-biometrics, artificial intelligence, and signal processing.

Li-Minn Ang received his Ph.D and Bachelor degrees from Edith Cowan University, Australia in 2001 and 1996 respectively. He is currently an Associate Professor at the University of Nottingham Malaysia Campus. His research interests are in the fields of signal, image, vision processing, intelligent processing techniques, hardware architectures, and reconfigurable computing.

King Hann Lim received his Master of Engineering from the University of Nottingham, Malaysia Campus in 2007. He is currently doing his Ph.D at the same University. He is a member of the Visual Information Engineering Research Group. His research interests are in the fields of signal, image, vision processing, intelligent processing techniques, and computer vision for intelligent vehicles.
Low Memory Strip Based Visual Saliency Algorithm for Hardware Constrained Environment
Christopher Wing Hong Ngau, Li-Minn Ang, and Kah Phooi Seng
School of Electrical and Electronic Engineering
The University of Nottingham Malaysia Campus
Jalan Broga, 43500 Semenyih, Selangor Darul Ehsan, Malaysia
{keyx8nwh, kezklma, kezkps}@nottingham.edu.my
Abstract: This paper presents a low memory visual saliency algorithm for implementation in hardware constrained environments such as wireless sensor networks (WSNs). While visual saliency has found importance in various applications, it suffers from heavy memory requirements, since low-level information from different image scales has to be stored for later computations. Therefore, a low memory algorithm is required that does not compromise the performance of the saliency model. The proposed approach uses a strip-based processing method where an image is first partitioned into image strips and the bottom-up visual saliency is then applied to each of the strips. The strips are recombined to form the final saliency map. To further reduce the memory requirement of the standard visual saliency algorithm, the Gaussian pyramid is replaced with a hierarchical wavelet decomposition using the lifting based 5/3 Discrete Wavelet Transform (DWT). Simulation results verified that the proposed approach achieves the same output as its non-strip based counterpart while keeping the memory requirements low.

Keywords: low-memory visual saliency model, saliency map, strip based, hardware constrained environment.

1. Introduction

With the advancement in the miniaturization of hardware, minute multimedia sensing devices can be developed to collect valuable information. The wireless sensor network (WSN) utilises such devices to constantly harvest information and transmit the collected data by means of wireless transmission. These sensing devices in the WSN, having multimedia and wireless capabilities, can be deployed almost anywhere to effectively provide a large coverage of the area of interest. Therefore, the WSN is considered versatile and useful in various applications. Initially developed for military applications [1], WSNs are now available for residential and commercial applications. Since then, WSNs have found their way into non-military applications such as habitat monitoring, environmental monitoring, object tracking, surveillance, and traffic monitoring [2]-[4].

Besides the main purpose of information gathering, the WSN has the capability to detect objects in various environments. WSNs are particularly useful in detecting enemy vehicles, detecting illegal border crossings, tracking wildlife movement, and locating missing persons [2], [5]. For most detection applications, the algorithm is required to be trained with a large database beforehand. Although many algorithms developed for detection purposes have accurate detection capabilities, they experience two disadvantages. Most algorithms are trained to detect a specific object or objects with similar features. If an object which is alien to the algorithm is captured by the sensing devices, identification leading to detection will be unsatisfactory, since the algorithm is not trained to detect objects apart from its given prior data. The other disadvantage is parameter tuning: parameters governing the performance of the algorithm have to be tuned for applications in different scenarios.

Visual saliency (VS) can be used when applications dealing with object detection are involved. The main attribute of a VS model is to detect or locate salient objects in a given scene. Most VS models operate on easily available low-level features such as intensity, colour, and orientation (bottom-up). As in their normal range of applications, they can be applied to the WSN for applications which involve object detection. The advantage of VS over specifically trained object detection algorithms is that VS takes into account how a human perceives objects. Objects that are important, that stand out from their surroundings, or even suspicious moving objects are all easily captured by the human eye. Therefore, it can be said that detection using visual salience is more generic and natural. Furthermore, parameters in a VS model are usually global and do not require tuning for different scenarios unless top-down tasks are involved.

The advancement in technology has allowed the sensing devices in a WSN to be embedded with processing capabilities [6]. Due to space and size restrictions, as well as the cost of adding additional memory, the amount of memory available on-chip in the sensing devices is limited. The limited amount of memory is seen as a major constraint when dealing with large or high resolution images. Because of the nature of VS algorithms, implementation in a WSN can be a challenge. Most VS algorithms are dependent on low-level features, and information on these features has to be stored before it is processed stage by stage. A single scale of information can be as large as the image itself. Therefore, the VS models are actually tied down by heavy memory requirements.

In this paper, a low memory VS algorithm for implementation in hardware constrained environments such as WSNs is proposed. The low memory VS is implemented using a strip based approach, where the input image is first partitioned into image strips before each individual strip is processed. By doing so, the size of the memory buffer used in storing the image during processing can be significantly reduced. To further reduce the memory requirements in the VS algorithm, the hierarchical wavelet decomposition of Mallat [7] is used instead of the standard dyadic Gaussian
pyramid. By using the wavelet decomposition method, a lower resolution approximation of the image at the previous level can be obtained along with the orientation sub-bands. From there, orientation features can be taken directly from the orientation sub-bands instead of having to compute them using Gabor filters.

The remaining sections of this paper are organised as follows: Section 2 presents a brief development of visual saliency models along with an overview of the low memory VS model using the strip based approach. Section 3 describes the bottom-up VS algorithm used in the low memory approach. In Section 4, the simulation results of the low memory VS algorithm are presented along with a discussion of the performance of the approach. Finally, Section 5 concludes the paper.

2. Low Memory Strip Based Visual Saliency

2.1 The Development of Visual Saliency Models

Since the mid 1950s, researchers have been trying to understand the underlying mechanism of visual salience and attention [8]. A simple framework of how saliency maps are computed in the human brain was developed over the past few decades by Treisman and Gelade (1980) [9]; Koch and Ullman (1985) [10]; Wolfe (1994) [11]; and Itti and Koch (2000) [12]. In 1980, Treisman and Gelade [9] introduced the idea of the master map of locations, which is the foundation of the saliency maps used in VS models to this day.

Attention, as described by Treisman and Gelade, can be represented by the spotlight metaphor: our attention moves around our field of vision like a spotlight beam, and objects which fall within the spotlight beam are then processed. In this idea of spotlight attention, attention in humans can be consciously or unconsciously directed.

In the framework of Koch and Ullman [10], the idea of the saliency map is introduced. The saliency map is a master map which encodes the locations of all salient objects in a topographic form, similar to the master map in [9]. A winner-take-all (WTA) network is utilized to allow competition within the maps; the winner is the most salient location at that moment. The salient point is then extracted and inhibited using the inhibition of return method introduced in 1985 by Posner et al. [13].

Wolfe [11] introduced a search model in which the limited resources available are utilized by referring to the earlier output. By doing so, the search guide can be more efficient, as the current output is influenced by the previous output, indicating a top-down mechanism. In the model, features such as colours and orientation can be processed in parallel. The summation of the processed features generates an activation map, where the locations of the search objects are encoded topographically.

Recently, Itti and Koch [12] presented a newer implementation of the model in [10]. The model provides a fast and parallel extraction of low-level visual features such as intensity, colour, and orientation. The features are computed using linear filtering (Gaussian filters) and center-surround structures. A normalization operator is used to normalize the combined maps according to the three features to form the conspicuity maps and, finally, the saliency map. A WTA neural network is used to select the most salient location in the input image. The selected location is then inhibited using inhibition of return, and the WTA process is repeated to select the next most salient location. This model is used as the building block for many VS models today.

2.2 Strip Based Processing

In the low memory strip based approach, the input colour image of size Y rows × X columns captured by an optical device is first partitioned into N strips of size R rows × X columns, where R is the minimum number of rows required in a J-level DWT decomposition. One image strip is processed at a time, going through the bottom-up VS model. The output of the VS model is a saliency strip which contains a part of the possible salient locations in the actual input image. The processed strip is then added to an empty array of size Y rows × X columns according to its actual strip location in the input image. The process is repeated until all strips are processed. The recombined strips form the final saliency map, in which all possible salient objects are encoded. An overview of the low memory strip based approach is shown in Figure 1.

Figure 1. Overview of low memory strip based VS approach

In the VS algorithm, the DWT is used to construct the image pyramid. There are two approaches to performing the DWT: the first uses the convolution based filter bank method [14-16] and the second uses the lifting-based filtering method [17]. The lifting based DWT is preferred over the conventional convolution based DWT due to its computational efficiency and memory savings [18]. In this approach, the reversible Le Gall 5/3 filter is used in the image pyramid construction [18].
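The analysis step of this 5/3 lifting transform takes only a few lines. The sketch below is an illustration rather than the authors' implementation: it assumes an even-length 1-D signal, symmetric boundary extension and integer arithmetic; applying it along the rows and then the columns of a strip yields the LL, HL, LH and HH sub-bands used here.

import numpy as np

def legall53_analysis_1d(x):
    # One level of the reversible Le Gall 5/3 lifting DWT (1-D).
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    # Predict step: detail = odd - floor((left even + right even) / 2).
    even_right = np.append(even[1:], even[-1])   # symmetric extension
    d = odd - ((even + even_right) >> 1)
    # Update step: approx = even + floor((left detail + detail + 2) / 4).
    d_left = np.insert(d[:-1], 0, d[0])          # symmetric extension
    s = even + ((d_left + d + 2) >> 2)
    return s, d                                  # low-pass, high-pass

The integer shifts implement the floor divisions of the standard lifting steps, so the transform is exactly invertible, which is part of what makes the 5/3 filter attractive for memory constrained hardware.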
Although the strip based approach provides a reduction in memory, there are some trade-offs in this method. The first trade-off is that if just enough lines are used in a strip for the DWT decomposition, horizontal line artifacts will appear in the final output. This is due to insufficient information being available when the strip is decomposed during the image pyramid construction. To solve this problem, additional overlapping lines are used. Although there is a slight increase in the memory required, a better output is obtained.

The second trade-off is that the min-max normalisation used in standard VS models is now inaccurate if performed on individual strips. The actual global minimum and maximum are not known until the final strip is processed. Normalising using local minimum and maximum values, on the other hand, will provide an incorrect value range representation, and the contrast of one strip will differ from another, giving rise to distortion. Using a saturation function as another means of normalisation will not work either, since the minimum and maximum values are required, as in the case of min-max normalisation.

One possible solution is to store the minimum and maximum values and use them for the next input image. If the optical device captures images continuously, similar to a video stream, then the next input image (frame) will not differ much from the previous one. As time goes on, the quality of the output saliency map will be good, since an accurate estimate of the maximum and minimum values is available. With this solution, only the output from the first frame will suffer from distortions, while the subsequent outputs will have reasonable to good quality.

2.3 Image Pyramid Construction and Minimum Required Lines

Before the feature maps in any VS model can be computed, image pyramids have to be constructed for each feature. In Itti and Koch's model [12] and other models that adapt the work of [12], the dyadic Gaussian pyramid is used as the image pyramid. The input image is sub-sampled using a Gaussian filter and then decimated by a factor of two. The process is repeated until nine levels of the image pyramid are obtained. Image pyramids are constructed for the intensity, colour, and orientation features. In [12], Gabor filters are used to create four sets of image pyramids for orientation by convolving with each level of the intensity pyramid.

In the approach presented in this paper, the wavelet decomposition method is used instead of the dyadic Gaussian pyramid, as discussed in Section 1. In order to use the wavelet decomposition method, the number of scales (levels of decomposition) is required. The number of lines in the strip depends on the number of scales, and the number of scales is mainly based on the preference of the user. Depending on how strongly the user wants the salient object to be highlighted, the value J is varied accordingly. A higher value of J requires more lines in a strip, hence more memory, but gives better object highlighting. In this paper, a value of J = 3 is used.

For a three level DWT decomposition, a minimum of 8 lines in a strip is required. However, it is always advisable to include an additional level of decomposition, since a single line at the last decomposition level will not provide a satisfactory result later in the center-surround operation. By adding an additional level, the minimum number of lines becomes 16. To compensate for the trade-off discussed in the early part of Section 2, two additional lines have to be added at the top and at the bottom of the strip for one level of decomposition. With the overlapping of lines, the total number of lines required is 32 for a three level decomposition. Equation (1) relates the number of lines required with strip overlaps:

Minimum number of lines = 2 × 2^(J+1)    (1)

An illustration of the strip overlapping is shown in Figure 2. The overlapping only occurs at the top and bottom borders of the strips; the overlaps at the left and right borders shown in Figure 2 are only for the sake of clarity.

Figure 2. Illustration of strip overlapping
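The strip flow of Section 2.2, together with Equation (1), can be summarised in a short sketch. The driver below is an assumption-laden illustration, not the authors' code: strip_saliency and saliency_fn are hypothetical names, saliency_fn stands in for the bottom-up model of Section 3 and is assumed to return a map with the same height as its input, and the 320 × 480, 20-strip configuration of Section 4 supplies the worked numbers.

import numpy as np

J = 3
R = 2 * (2 ** (J + 1))          # Equation (1): 32 lines per strip for J = 3

def strip_saliency(image, n_strips, saliency_fn):
    Y, X = image.shape[:2]
    core = Y // n_strips        # non-overlapping rows per strip (16 for 320/20)
    pad = (R - core) // 2       # overlapping rows added above and below
    out = np.zeros((Y, X))
    for k in range(n_strips):
        top, bot = k * core, (k + 1) * core
        lo, hi = max(0, top - pad), min(Y, bot + pad)   # clamp at the borders
        strip_map = saliency_fn(image[lo:hi])
        # Keep only the core rows; the overlap exists solely to suppress
        # the horizontal line artifacts discussed above.
        out[top:bot] = strip_map[top - lo : top - lo + core]
    return out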
3. Bottom-up Visual Saliency Algorithm

This section describes the bottom-up visual saliency algorithm which is used in both the original non-strip and the strip based approaches. In this section, the term scale is used in describing the visual saliency algorithm, whereas in the DWT computation the term level of decomposition is used, although both can be used interchangeably.

3.1 General Visual Saliency Algorithm

The input image is first converted to the YCbCr colour space. Then, the lifting based 5/3 filter is applied to each of the Y, Cb, and Cr channels. This results in four sub-bands: LL (approximation coefficients, A); HL (vertical detail, V); LH (horizontal detail, H); and HH (diagonal detail, D). The HL, LH, and HH bands for the Cb and Cr channels are discarded, whereas these bands for the Y channel are used in the orientation feature computation. The LL bands for all three channels are kept to compute the intensity and colour features. The DWT process is repeated another two times to form a three level image pyramid (excluding images at level 0).

After the wavelet decomposition, there are three intensity maps, six colour maps, and nine orientation maps at scales J = 1 to 3. All maps at all scales are bilinearly interpolated to facilitate point-to-point subtraction. A centre-surround (CS) operation is applied to each of the maps to form the feature maps. The CS process is used to enhance important regions relative to their surroundings. The center is a pixel at scale c ∈ {1, 2, ..., J} and the surround is a pixel at scale s ∈ {c + 1, c + 2}. The CS values for the intensity and colour features are computed as shown in Equations (2) to (4):

I_{C,S}(m, n) = | I_C(m, n) − I_S(m, n) |    (2)

Cb_{C,S}(m, n) = | Cb_C(m, n) − Cb_S(m, n) |    (3)

Cr_{C,S}(m, n) = | Cr_C(m, n) − Cr_S(m, n) |    (4)

For the orientation feature, the CS computation is described in (5):

O_{C,S}(m, n) = | Y_D^J(m, n) − Y_H^J(m, n) | + | Y_D^J(m, n) − Y_V^J(m, n) | + | Y_V^J(m, n) − Y_H^J(m, n) |    (5)
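Under the stated assumption that all maps have already been bilinearly interpolated to a common size, the centre-surround operations reduce to point-to-point differences. The following is a minimal sketch of Equations (2) to (5); the function names are illustrative.

import numpy as np

def center_surround(center_map, surround_map):
    # Equations (2)-(4): absolute point-to-point difference between a
    # centre-scale map and a surround-scale map of equal size.
    return np.abs(center_map - surround_map)

def orientation_cs(Yd, Yh, Yv):
    # Equation (5): pairwise absolute differences between the diagonal,
    # horizontal and vertical detail bands of the Y channel at one scale.
    return np.abs(Yd - Yh) + np.abs(Yd - Yv) + np.abs(Yv - Yh)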
Figure 3. Saliency maps generated by the non-strip based and strip based approaches
The feature maps are then summed across scales and normalised using min-max normalisation to form four conspicuity maps for intensity, colour (Cb and Cr), and orientation respectively. The saliency map is finally computed by summing all four conspicuity maps and dividing by four, as shown in Equation (6):

S(m, n) = ( CM_I + CM_Cb + CM_Cr + CM_O ) / 4    (6)

3.2 Modification for the Strip Based Approach

For the strip based approach, the overall VS algorithm is the same as the non-strip approach, with the exception of the image pyramid and the normalisation method. As discussed in Section 2, the strip based approach suffers from line artifacts and cannot know the global minimum and maximum values until all the strips are processed.

To overcome the problem of line artifacts, overlapping is done at the pyramid construction level. The overlapping part of the strip has to be removed before the strip can be used for computation. For example, a non-overlapping strip of 16 lines would result in an eight line approximation after a level of decomposition. With overlapping, the number of lines is 32; after a level of decomposition, the lines are reduced to 16. What is required for computation is the middle eight lines; therefore, the top four and bottom four lines are discarded during the feature computation.

The algorithm is also modified to allow the minimum and maximum to update themselves for use with subsequent strips and image frames. The allowed range is between 0 and 255. The global minimum is first initialized to 255 while the maximum is initialized to 0. At the first strip, the global values are compared with the local minimum and maximum of the strip. If the maximum value of the strip is higher than the current global maximum, the global value is updated; the same holds for the minimum value. The process is repeated every time a new strip is processed. Initially, the first few strips will be severely distorted, but as the process continues the output strips will be properly normalised.
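The running update just described fits in a small helper. This is an illustrative sketch under the stated 0-255 range assumption, not the authors' Matlab code.

import numpy as np

class RunningMinMax:
    def __init__(self):
        # Pessimistic start: minimum at 255, maximum at 0 (Section 3.2).
        self.gmin, self.gmax = 255.0, 0.0

    def update(self, strip):
        # Replace the stored global extrema whenever a strip exceeds them.
        self.gmin = min(self.gmin, float(strip.min()))
        self.gmax = max(self.gmax, float(strip.max()))

    def normalise(self, strip):
        span = self.gmax - self.gmin
        if span <= 0:                        # degenerate early strips
            return np.zeros_like(strip, dtype=float)
        return (strip - self.gmin) / span

The first strips of the first frame are normalised with poor estimates, which is exactly the start-up distortion noted above; carrying gmin and gmax over to the next frame removes it.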
4. Simulation Results and Discussion

All results are simulated in Matlab. In the low memory strip-based approach, three levels of decomposition are used. 20 image strips are used with overlapping, resulting in 32-line strips. All test images are of size 320 × 480.

4.1 Results

Simulations are performed to verify the performance of the proposed approach. Comparisons of saliency maps generated by the non-strip based and strip based algorithms are shown in Figure 3, which shows the saliency maps for the first image frame and its successive second image frame.

Table 1: Memory savings according to image size

4.2 Discussion of Results

By comparing the saliency maps in column 2 and column 4 of Figure 3, it can be seen that the strip-based approach provides an output identical to the non-strip based approach. This holds if the optical device provides a continuous image stream. With successive image frames, the contents of the frame do not change drastically under normal conditions; therefore, the min-max updating and normalising method will actually provide a rather accurate estimate of the required global values.

As seen in the third column of Figure 3, the top portion of the saliency maps is the most distorted, and the quality gradually improves as the strip moves down towards the bottom of the image. As more strips are processed, the stored global minimum and maximum values are updated with a set of more accurate results. Once an image frame is fully processed, the stored global values are used for the next frame, resulting in improved performance, near or identical to the result generated using the non-strip based approach. If the next frame contains changes which are not present in the prior frame, the stored values still provide a good estimate of the min-max values while updating themselves as in the previous frame.

To investigate the amount of memory saved using the approach, consider a memory bank having many memory blocks, where a single block holds a single value at location (m, n). For simplicity, let the number of bits in the memory bank be a certain number B. The actual number of bits allocated will not be considered and is assumed to be equal in all the memory blocks.

The values calculated are based on the memory used in storing the images (maps) before they are used for further computation at different stages, using the test image of size 320 × 480. Table 1 shows the amount of memory saved when the image size is varied for a three level decomposition. As the image size increases, the saving curve tends to level off towards near 100%.
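As a rough, illustrative check of the trend (not the paper's exact accounting, which covers every stage of the pipeline), the per-stage buffer saving can be estimated from the ratio of strip height to image height:

def strip_saving(rows, strip_rows=32):
    # Fraction of a row buffer saved when only one 32-line strip,
    # rather than the whole frame, has to be held in memory.
    return 1.0 - strip_rows / rows

for rows in (240, 320, 480, 960):
    print(rows, f"{strip_saving(rows):.1%}")   # 86.7%, 90.0%, 93.3%, 96.7%

The savings already exceed 80% for a 320-row image and approach 100% as the image grows, matching the shape of the saving curve described above.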
5. Conclusion

A low memory VS algorithm using image strips for implementation in hardware constrained environments has been proposed in this paper. Simulation results verified that the proposed strip based approach performs as well as the non-strip approach while saving memory resources by up to more than 80%, depending on image size.

References

[1] K. Romer and F. Mattern, "The Design Space of Wireless Sensor Networks", IEEE Wireless Communications, Volume 11, Issue 6, pp. 54-61, December 2004.
[2] I. F. Akyildiz, T. Melodia, and K. R. Chowdhury, "A Survey on Wireless Multimedia Sensor Networks", Computer Networks 51, pp. 921-960, 2007.
[3] N. Xu, "A Survey of Sensor Network Applications", University of Southern California, 2003, available at http://enl.usc.edu/~ningxu/papers/survey.pdf.
[4] A. Mainwaring, J. Polastre, R. Szewczyk, D. Culler, and J. Anderson, "Wireless Sensor Networks for Habitat Monitoring", WSNA'02, Atlanta, Georgia, USA, September 2002.
[5] H.-W. Tsai, C. P. Chu, and T. S. Chen, "Mobile Object Tracking in Wireless Sensor Networks", Computer Communications, Volume 30, Issue 8, June 2007.
[6] L. W. Chew, L.-M. Ang, and K. P. Seng, "Survey of Image Compression Algorithms in Wireless Sensor Networks", International Symposium on Information Technology 2008, Volume 4, pp. 1-9, 2008.
[7] S. Mallat, "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation", IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 11, pp. 674-693, 1989.
[8] J. K. Tsotsos, L. Itti, and G. Rees, "A Brief and Selective History of Attention", Neurobiology of Attention, Elsevier Press, 2005.
[9] A. Treisman and G. Gelade, "A Feature Integration Theory of Attention", Cognitive Psychology 12, pp. 97-136, 1980.
[10] C. Koch and S. Ullman, "Shifts in Selective Visual Attention: Towards the Underlying Neural Circuitry", Human Neurobiology 4, pp. 219-227, 1985.
[11] J. Wolfe, "Guided Search 2.0: A Revised Model of Visual Search", Psychonomic Bulletin and Review, 1(2), pp. 202-238, 1994.
[12] L. Itti and C. Koch, "A Saliency-based Search Mechanism for Overt and Covert Shifts of Visual Attention", Vision Research, Volume 40, pp. 1489-1506, 2000.
[13] M. I. Posner, R. D. Rafal, L. S. Choate, and J. Vaughan, "Inhibition of Return: Neural Basis and Function", Cognitive Neuropsychology, 2(3), pp. 211-228, 1985.
[14] A. Jensen and A. la Cour-Harbo, "Ripples in Mathematics: The Discrete Wavelet Transform", Springer, 2000.
[15] M. Weeks, "Digital Signal Processing Using Matlab and Wavelets", Infinity Science Press LLC, 2007.
[16] G. Strang and T. Nguyen, "Wavelets and Filter Banks", 2nd Edition, Wellesley-Cambridge, 1996.
[17] W. Sweldens, "The Lifting Scheme: A Custom-Design Construction of Biorthogonal Wavelets", Applied and Computational Harmonic Analysis, Volume 3, No. 2, pp. 186-200, Elsevier, 1996.
[18] T. Acharya and P.-S. Tsai, "JPEG2000 Standard for Image Compression: Concepts, Algorithms and VLSI Architectures", Wiley-Interscience, 2004.
Authors Profile

Christopher Wing Hong Ngau received his Bachelor degree from the University of Nottingham Malaysia Campus in 2008. He is currently pursuing his PhD at the University of Nottingham Malaysia Campus. His research interests are in the fields of image, hardware architectures, vision processing, and wireless sensor networks.

Li-Minn Ang received his PhD and Bachelor degrees from Edith Cowan University, Australia in 2001 and 1996 respectively. He is currently an Associate Professor at the University of Nottingham Malaysia Campus. His research interests are in the fields of signal, image, vision processing, intelligent processing techniques, hardware architectures, and reconfigurable computing.

Kah Phooi Seng received her PhD and Bachelor degrees from the University of Tasmania, Australia in 2001 and 1997 respectively. She is currently an Associate Professor at the University of Nottingham Malaysia Campus. Her research interests are in the fields of intelligent visual processing, biometrics and multi-biometrics, artificial intelligence, and signal processing.
Integrated Intrusion Detection using SCT

Selvakani Kandeeban¹ and R. S. Rajesh²

¹ Professor and Head, Department of Computer Applications, Francis Xavier Engineering College, Tirunelveli, Tamilnadu, India. sselvakani@hotmail.com
² Reader, Department of Computer Science and Engineering, Manonmanium Sundaranar University, Tirunelveli, Tamilnadu, India. rs_rajesh@yahoo.co.in
Abstract: Attacks that target new vulnerabilities are being created much faster than in the past. For many years companies have relied on stateful firewalls, host based antivirus and anti-spam solutions to keep their users and resources safe. But the landscape is quickly changing, and the effectiveness of these traditional single-purpose point security devices is no longer proving adequate. The third International Knowledge Discovery and Data Mining tools competition set (KDD Cup '99) is used to train and test the feasibility of our proposed model. This paper mainly addresses the issue of identifying important input features for intrusion detection. It also addresses the related issue of ranking the importance of input features, which is itself a problem of great interest, since eliminating insignificant or useless inputs leads to a simplified problem and possibly faster and more accurate detection. The genetic algorithm employs only the eight most relevant features for each attack category for rule generation. The paper presents an intrusion detection model based on Soft Computing Techniques (SCT), namely mutual information, a genetic algorithm and a Radial Basis Function (RBF) neural network. The key idea is to take advantage of the classification abilities of the neural network for unknown attacks and of the genetic algorithm for known attacks.

Keywords: Genetic Algorithm, Information Gain, Knowledge synthesis, Radial Basis Function.

1. Introduction

The complexity, as well as the importance, of distributed computer systems and information resources is rapidly growing. Due to this, computers and computer networks are often exposed to computer crime. Many modern systems lack properly implemented security services; they contain a variety of vulnerabilities and, therefore, can be compromised easily. As network attacks have increased in number over the past few years, the efficiency of security systems such as firewalls has declined.

It is very important that the security mechanisms of a system are designed to prevent unauthorized access to system resources and data. However, building a completely secure system is impossible, and the least that can be done is to detect intrusion attempts so that action can be taken later to repair the damage. Organizations are increasingly implementing various systems that monitor IT security breaches. Intrusion detection systems (IDSs) have gained a considerable amount of interest within this area. The main task of an IDS is to detect an intrusion and, if necessary or possible, to undertake some measures eliminating the intrusions.

Because most computer systems are vulnerable to attack, intrusion detection (ID) is a rapidly developing field. When unknown types of attacks need to be detected, these techniques have the advantage of automatically retraining detection models on input data. Existing IDSs [1, 3, 4] also rely heavily on human analysts to differentiate normal and abnormal network connections. This paper describes an integrated genetic algorithm and neural network approach for improving intrusion detection performance. First, the 41 features in the KDD Cup set are reduced to 8 features for each type of attack using mutual information, and a rule set is created to improve the detection rate for known attacks using a genetic algorithm. Our idea is to achieve a high detection rate by introducing a high level of generality when deploying the subset of the most important features of the dataset. As this also results in a high false positive rate, we deploy an additional set of rules in order to recheck the decision of the rule set for detecting attacks. Then the Radial Basis Function network is used to detect unknown intrusions [10]. Therefore the integrated model improves the performance of detecting all intrusions. Our experimental results demonstrate its efficiency and accuracy.

2. Related work

Mukkamala, Janoski and Sung [8] use Support Vector Machines (SVMs) and Neural Networks to identify important features for the 1998 DARPA Intrusion Detection data. They delete one feature at a time and build SVMs and Neural Networks using the remaining 40 features. The importance of the deleted feature depends on training time, testing time and accuracy for SVMs, or on overall accuracy, false positive rate and false negative rate for Neural Networks. The same evaluation process is done for each feature, and features are ranked according to their importance. They conclude that SVMs and neural network classifiers using only important features can achieve better or comparable performance than classifiers that use all features.

The most closely related work is by Li and Zhang [12], where the general problem of GA optimized feature selection and extraction is addressed. They apply a GA to optimize the feature weights of a k-nearest neighbour (kNN) classifier and choose an optimal subset of features for a Bayesian classifier and a linear regression classifier. The optimization framework of their work is based on the wrapper model [12].
One IDS tool that uses GAs to detect intrusions, and is available to the public, is the Genetic Algorithm as an Alternative Tool for Security Audit Trails Analysis (GASSATA). GASSATA finds, among all possible sets of known attacks, the subset of attacks that is most likely to have occurred in a given set of audit data [11]. Since there can be many possible attack types, finding the optimal subset is very expensive to compute, so GAs are used to search efficiently. The population to be evolved consists of vectors with a bit set for each attack that is present in the data set. Crossover and mutation converge the population towards the most probable attacks.

A second tool, implemented and undergoing more advanced enhancements, is the Network Exploitation Detection Analyst Assistant (NEDAA). The Applied Research Laboratories of the University of Texas at Austin developed the NEDAA [9], which uses different machine learning techniques, such as a finite state machine, a decision tree, and a GA, to generate artificial intelligence (AI) rules for an IDS. One network connection and its related behavior can be translated into a rule used to judge whether or not a real-time connection is to be considered an intrusion. These rules can be modeled as chromosomes inside the population, and the population evolves until the evaluation criteria are met. The generated rule set can be used as knowledge inside the IDS for judging whether network connections and behaviors are potential intrusions.

The extraction of knowledge in the form of rules has been successfully explored before on RBF networks using the hidden unit Rule Extraction (hREx) algorithm [7]. This work inspired the authors to develop knowledge synthesis, or knowledge insertion, by manipulating the RBF network parameters, with the information flow in the opposite direction to extraction.

3. Methodology

As indicated in the introduction, the basic objective of this work is to determine the contribution of the 41 features in the KDD 99 intrusion detection data sets to attack detection.

3.1 Feature Selection

Formally, the mutual information of two discrete random variables X and Y can be defined as

I(X;Y) = Σ_{y∈Y} Σ_{x∈X} p(x,y) log( p(x,y) / (p(x) p(y)) )    (1)

where p(x,y) is the joint probability distribution function of X and Y, and p(x) and p(y) are the marginal probability distribution functions of X and Y respectively.

Intuitively, mutual information measures the information about X that is shared by Y: it measures how much knowing one of these variables reduces our uncertainty about the other. If X and Y are independent, then X contains no information about Y and vice versa, so their mutual information is zero: knowing X does not give any information about Y (and vice versa). If X and Y are identical, then all information conveyed by X is shared with Y: knowing X provides all necessary information about Y and vice versa, and therefore the mutual information is the same as the uncertainty contained in Y (or X) alone, namely the entropy of Y (or X; clearly, if X and Y are identical they have equal entropy). In a specific sense [2], mutual information quantifies the distance between the joint distribution of X and Y and the product of their marginal distributions.

Decision Independent Correlation (DIC) is defined as the ratio between the mutual information and the uncertainty of the feature. DIC is expressed as

DIC_{Xj}(Xi, Xj) = I(Xi; Xj) / H(Xj)    (2)

DIC_{Xi}(Xi, Xj) = I(Xi; Xj) / H(Xi)    (3)

The correlation measure is defined to quantify the information redundancy between Xi and Xj with respect to the decision Y as follows:

Q_Y(Xi, Xj) = ( I(Y; Xi) + I(Y; Xj) − I(Y; Xi, Xj) ) / H(Y)    (4)

The ranked lists of features are obtained by using a simple forward selection hill climbing search, starting with an empty set, evaluating each feature individually, and continuing to the far side of the search space.

It has been shown that dependency or correlation measures qualify the accuracy of a decision to predict the value of one variable. However, the symmetrical uncertainty measure is not accurate enough to quantify the dependency among features with respect to a given decision. A critical point was neglected: the correlation or redundancy between features is strongly related to the decision variable. The feature subset is obtained as follows:

1. Generate feature set R from the ranked list of features.
2. For each feature and each type of attack, calculate the mutual information between the feature Xi and the decision Y, I(Y;Xi).
3. Update the relevant feature set R by comparing the mutual information: if I(Y;Xi) ≥ δx then R ← R + {Xi}, where δx is a user-defined threshold.
4. Create working set W by copying R.
5. Set goal set G = null.
6. While e(G) < δ2 do:
   if W = null then break;
   choose Xk ∈ W subject to
   (i) mutual information: I(Y;Xk) ≥ I(Y;Xl) for all l ≠ k, Xl ∈ W;
   (ii) correlation measure: Q_Y(Xk, Xn) ≤ Q_Y(Xm, Xn) for all m ≠ k, Xn ∈ G;
   W ← W − {Xk};
   G ← G + {Xk};
   end loop.
7. Obtain the final feature subset from the intersection of all the per-attack subsets.
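As an illustration of Equation (1) and of the relevance filter in step 3, the sketch below estimates mutual information from integer-coded samples; the function names and calling convention are assumptions, not the authors' implementation.

import numpy as np

def mutual_information(x, y):
    # Equation (1) for two discrete variables given as sample vectors.
    n = len(x)
    mi = 0.0
    for xv in np.unique(x):
        px = np.count_nonzero(x == xv) / n
        for yv in np.unique(y):
            pxy = np.count_nonzero((x == xv) & (y == yv)) / n
            if pxy > 0:
                py = np.count_nonzero(y == yv) / n
                mi += pxy * np.log(pxy / (px * py))
    return mi

def relevant_features(X, y, delta):
    # Step 3: keep features whose mutual information with the decision
    # variable Y reaches the user-defined threshold.
    return [j for j in range(X.shape[1])
            if mutual_information(X[:, j], y) >= delta]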
3.2 GA Rules Formation

By analyzing the dataset, rules are generated for the rule set. These rules take an 'if-then' form:

if {condition} then {act}

The condition refers to the attributes, selected in the feature selection phase, that form a network connection in the dataset; it evaluates to true or false. The act field refers to an action taken once the condition is true, such as reporting an alert to the system administrator, and the attack name is specified only if the condition is true. For example, a rule in the rule set can be defined as follows:

if number of "hot" indicators <= 0.0 and
   number of connections to the same host as the current connection in the past two seconds <= 500.82 and
   percentage of connections that have "REJ" errors > 0.01 and <= 0.21 and
   number of connections to the host > 41.2 and <= 112.3
then SMURF

In this genetic algorithm, each chromosome represents one learning rule and is evaluated. To evaluate a chromosome, an appropriately sized network was configured for each of the 20 tasks. An individual of each population consists of genes, where each gene represents a certain feature and its value represents the value of the feature. Each GA is trained for 300 generations, where in each generation the 100 worst performing individuals are replaced with newly generated ones. The same process was repeated for ten epochs and the results were analyzed.

The second part of the GA is the fitness function. The fitness function F determines whether a rule is 'good', i.e. it detects intrusions, or 'bad', i.e. it does not. F is calculated for each rule and depends on the following equations:

support = |A and B| / N
confidence = |A and B| / |A|
fitness = t1 × support + t2 × confidence

where N is the total number of records, |A| is the number of network connections matching the condition, |A and B| is the number of records that match the whole rule, and t1 and t2 are thresholds that balance the two terms.

When an intrusion occurs, it is notified. When an intrusion does not occur but the response confirms one, it is considered a false alarm. Once in a while the data set has to be updated with new connections, and hence the rule set is also updated.
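A minimal sketch of this fitness computation, assuming the matching counts have already been tallied and taking t1 = t2 = 0.5 as illustrative defaults:

def rule_fitness(n_condition, n_rule, n_records, t1=0.5, t2=0.5):
    # support = |A and B| / N, confidence = |A and B| / |A|;
    # t1 and t2 balance the two terms.
    support = n_rule / n_records
    confidence = n_rule / n_condition if n_condition else 0.0
    return t1 * support + t2 * confidence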
3.3 Knowledge Synthesis by RBF

Knowledge synthesis is a technique intended for those situations in which no actual training data is available but some form of domain knowledge is at hand. The expert's knowledge is encoded as fuzzy sets and rules which are used to synthesize new hidden unit parameters for incorporation into a new or existing network. The fuzzy rules describe a set of output classes and the possible input values denoting their characteristics. The objective of converting from fuzzy rules to RBF networks is to have the knowledge in a consistent format. It would be possible to have the domain knowledge in the form of a stand-alone fuzzy module interacting with the RBF based system in some loosely or tightly coupled protocol. However, by converting the fuzzy rules into the RBF architecture, they can be subjected to further analysis by rule extraction, and this also avoids hybrid system integration issues.

4. Experiments and Results

We have used an open source machine learning framework, WEKA (Waikato Environment for Knowledge Analysis), developed at the University of Waikato, New Zealand. The algorithms can either be applied directly to a data set or called from our own Java code. The input data for WEKA classifiers is represented in the ARFF (Attribute-Relation File Format), consisting of a list of all instances with the values for each instance separated by commas. As a result of data set training and testing, a confusion matrix is generated showing the number of instances assigned to each class.
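An abridged, illustrative ARFF fragment of the kind WEKA consumes is shown below; a real KDD file declares all 41 attributes and many more class labels.

@relation kdd_sample
@attribute duration numeric
@attribute protocol_type {tcp,udp,icmp}
@attribute src_bytes numeric
@attribute class {normal,smurf,neptune}
@data
0,tcp,181,normal
0,icmp,1032,smurf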
Experiments were conducted to verify the performance of the intrusion detection approach based on the above discussion. All the experimental data is available from the corrected data set of KDD Cup 1999. Important features were identified based on the correlation measure and information gain. There were 21 types of intrusions in the test set, but only 7 of them were present in the training set; therefore the selected data also challenged the ability to detect unknown intrusions.

The main concern of feature reduction is false alarms and missed intrusion detections. In this work, we attempted to reduce the features that may be effectively used for intrusion detection without compromising security. We have especially focused on statistical techniques to test individual significance and mutual significance.

In this KDD data set each sample is unique, with 34 numerical features and 7 symbolic features. In the preprocessing task, we map symbolic attributes to numeric valued attributes. Symbolic features like protocol_type (3 different symbols: tcp, udp, icmp), service (6 different symbols) and flag (11 different symbols) were mapped to integer values ranging from 1 to N, where N is the number of symbols.

Table 1: Information Gain Measures

Attack type    Information Gain    Feature
DOS            1.351               src_bytes
Probe          1.3596              count
U2R            0.652               dst_bytes
R2L            1.1599              service

Table 1 shows the highest IG value for each attack type. There are nine features with very small information gain which contribute very little to intrusion detection. Two features do not show any variations in the training set. Finally, for each type of attack an appropriate reduced feature subset was selected. The ranked feature list is shown in Table 2.
In the normalization step, we linearly scale each of these features to the range [0.0, 1.0]. Features having smaller integer value ranges, like duration [0, 58329] and num_compromised [0, 255], were scaled linearly to [0.0, 1.0]. Two features span a very large integer range, namely src_bytes [0, 693375640] and dst_bytes [0, 5203179]; these were scaled logarithmically to the ranges [0.0, 20.4] and [0.0, 15.5] respectively. Boolean features taking values 0 or 1 were left unchanged.
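A sketch of this per-feature scaling is given below. The column indices and the use of log1p are illustrative assumptions; the paper reports fixed logarithmic target ranges rather than a particular log function.

import numpy as np

def scale_features(X, log_cols=(), bool_cols=()):
    Xs = X.astype(float).copy()
    for j in range(X.shape[1]):
        if j in bool_cols:
            continue                         # Boolean features untouched
        col = Xs[:, j]
        if j in log_cols:
            Xs[:, j] = np.log1p(col)         # compress very wide byte counts
        else:
            rng = col.max() - col.min()
            Xs[:, j] = (col - col.min()) / rng if rng > 0 else 0.0
    return Xs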
Table 2: Ranked List of Features

Attack type    Ranked list
DOS      5,23,3,33,35,34,24,36,2,39,4,38,26,25,29,30,6,12,10,13,40,41,31,37,32,8,7,28,27,9,1,19,18,22,20,21,14,11,17,15,16
Probe    23,29,27,36,4,32,34,40,35,3,30,2,5,41,28,37,33,25,38,26,39,10,9,12,11,6,1,8,7,21,19,20,31,22,24,15,13,14,18,16,17
U2R      6,3,13,15,12,14,18,19,16,17,20,4,5,1,2,10,11,7,9,8,35,36,32,34,33,40,41,37,39,38,24,25,21,23,22,29,31,30,26,28,27
R2L      3,34,1,6,5,33,35,36,32,12,23,24,10,2,37,4,38,13,16,15,14,8,7,11,9,29,30,27,28,40,41,31,39,19,20,17,18,25,26,21,22

It should be noted that the test data is not from the same probability distribution as the training data. Moreover, the test data includes novel attack types that do not appear in the training data.

In the second stage, rules are formed from the reduced feature subset using the genetic algorithm on the KDD data set and tested on the KDD training set to observe their performance with respect to detection, false alarm rate and missed alarm rate. The only drawback here is that the rules are biased towards the training data set. The genetic algorithm in the proposed design evaluates the rules and discards the bad ones while generating more rules, to reduce the false alarm rate and to increase intrusion detection. The GA thus continues to detect intrusions and produce new rules, storing the good rules and discarding the bad ones. A sample of the generated rules, in decision tree form, is:

dst_host_srv_count <= 227
| num_failed_logins <= 0
| | rerror_rate <= 0
| | | num_access_files <= 0
| | | | protocol_type = tcp
| | | | | dst_host_same_srv_rate <= 0.11
| | | | | | dst_host_serror_rate <= 0.01
| | | | | | dst_host_serror_rate > 0.01: warezmaster
| | | | | dst_host_same_srv_rate > 0.11
| | | | | | is_host_login <= 0: warezmaster
| | | | | | is_host_login > 0: multihop
| | | | protocol_type = udp: multihop
| | | | protocol_type = icmp: multihop
| | | num_access_files > 0: ftp_write
| | rerror_rate > 0: imap
| num_failed_logins > 0: guess_passwd
dst_host_srv_count > 227: guess_passwd

Ten kinds of network attacks are included in the training set, namely back, land, neptune, pod, smurf, teardrop, ipsweep, portsweep, buffer_overflow and guess_passwd. Fifteen kinds of network attacks are included in the testing set, namely perl, xlock, mailbomb, udpstorm, saint, back, land, neptune, pod, smurf, teardrop, ipsweep, portsweep, buffer_overflow and guess_passwd. The test data set is similar to the training data set; the only difference is that the test data set includes some unknown attacks not occurring in the training data set.

The summary of the results after RBF training is given as follows:

Correctly Classified Instances      916     99.7821 %
Incorrectly Classified Instances      2      0.2179 %
Kappa statistic                  0.9959
Mean absolute error              0.0008
Root mean squared error          0.0269
Relative absolute error          0.4262 %
Root relative squared error      9.0025 %
Total Number of Instances           918

The confusion matrix below shows the accuracy of the original RBF network compared with the synthesized RBF. The numbers represent test cases; those lying on the diagonal have been classified accurately, while those off the diagonal have been misclassified.

   a    b    c    d    e    f   <-- classified as
  76    0    1    0    0    0 |  a = back
   0    7    0    0    0    0 |  b = land
   1    0  250    0    0    0 |  c = neptune
   0    0    0   17    0    0 |  d = pod
   0    0    0    0  566    0 |  e = smurf
   0    0    0    0    0    0 |  f = teardrop

The original network has an accuracy of 51.3% on the high speed data, but when it is modified by inserting domain rules that are characteristic of the nature of high speed data, the accuracy goes up to 95.0%.
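From a confusion matrix of this kind, per-class detection rates and false positive rates of the kind reported in Table 3 can be computed directly. The sketch below is illustrative (rows are actual classes, columns are predictions):

import numpy as np

def detection_and_fp_rates(cm):
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    actual = cm.sum(axis=1)                  # test cases per true class
    predicted = cm.sum(axis=0)               # test cases per predicted class
    detection = np.divide(tp, actual,
                          out=np.zeros_like(tp), where=actual > 0)
    negatives = cm.sum() - actual            # cases belonging to other classes
    fp_rate = np.divide(predicted - tp, negatives,
                        out=np.zeros_like(tp), where=negatives > 0)
    return detection, fp_rate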
We have performed experiments using the 50, 75, and 100 best performing rules for detecting attacks. From Table 3, we believe that our system outperforms the best performing models reported in the literature. Moreover, our earlier claim of a reduced false positive rate when deploying additional rule systems for detecting DoS attacks and normal connections is confirmed, as the false positive rate has decreased in each of the cases.

Table 3: Performance of the Implemented System

No. of    Detection Rate in %             False Positive Rate in %
rules     R2L    Probe   DoS    U2R       R2L    Probe   DoS    U2R
50        86.7   79.2    81.2   86.1      0.9    1.1     1.5    1.2
75        81.4   71.3    75.4   83.4      1.9    2.7     2.9    2.3
100       78.3   67      71     82.5      2.3    3.1     3.6    2.7

The time complexity is quite low. Computing the pairwise feature correlation matrix requires m(n² − n)/2 operations, where m is the number of instances and n is the initial number of features. The feature selection requires (n² − n)/w operations for a forward selection and a backward elimination. The hill climbing search is purely exhaustive, but the use of the stopping criterion makes the probability of exploring the entire search space small. In particular we have stressed the message that feature selection can help to focus a learning algorithm only on the most relevant features inside a given dataset, and thus on the main aspects of the considered problem.
Figure 1. ROC curve showing ID performance: detection rate and false positive rate for DoS, Probe, U2R and R2L with 50, 75 and 100 rules

The linear structure of the rules makes the detection process efficient for real-time processing of the traffic data. The evaluation of our approach showed that the hybrid method of using discrete and continuous features can achieve a better detection rate for network attacks. In order to increase the detection rate, we select the features that are appropriate for each type of network attack, which is an added advantage.

One attraction of RBF networks is that there is a two-stage training procedure that is considerably faster than the methods used to train other neural networks. Accuracy can be effectively evaluated using ROC (Receiver Operating Characteristic) curves, which identify how the detection rate varies with the false positive rate. The detection rate (DR) increases as the false alarm rate does. The DR is close to 95% when the false alarm rate is 1.9% to 2%; however, when the false alarm rate is close to 1%, the DR is only about 70%. The reason is the number of rules in the rule base for each run.

Figure 2 shows the performance comparison of this research work with four other important IDSs.

Figure 2. ROC detection rate comparison with other IDSs: Proposed Method, Binary Tree, LAMSTAR, SOM and ART over the Normal, DoS, Probe, U2R and R2L attack types

When compared to other IDSs [5, 6], in this approach an efficient algorithm for feature extraction is proposed to remove irrelevance during the data preparation period. Experimental results showed that the new decision dependent correlation measure can be used to select a near optimal feature subset. The smaller number of features results in a much faster rule generation process and reduces the overhead in collecting data when used in a real network environment. The generated rules from the genetic algorithm DNA encoding are capable of identifying and classifying attack categories correctly. Once rules are extracted using the genetic algorithm, the rule base can be inserted back into a new network with a similar problem domain, which has the desired potential. This is similar to the heuristics given to expert systems, and like such heuristics, the extracted/inserted rules may be refined as more data become available. Since the number of features used is minimal, this method not only improves the detection performance but also trims the time required for training.

5. Conclusion

After extensive study, we decided to come up with a unique solution and approached the problem with a new data set formatting and optimization technique. A library of attacks was created. This library was based on the benchmark provided by the MIT Lincoln Lab that was optimized for the KDD cups. After the data was carefully formatted and optimized, neural networks were chosen due to their abilities to learn and classify. However, the detection rate is not good for some runs, because the selection of crossover and mutation points in the corresponding operations is random. Trained neural networks can make decisions quickly, making it possible to use them in real-time detection. The effect of the large number of shared hidden units within the network is still an open issue.

The modification of existing RBF networks using heuristic rules has obvious benefits when used in certain situations. The use of knowledge synthesis only makes sense when the available data is insufficient to build a reliable classifier. In such a situation it is advantageous to use heuristic rules to modify an existing RBF network to detect infrequently encountered input vectors that would otherwise be misclassified. However, care must be taken when applying the domain rules.

Not all issues are resolved. A system of this type requires an agent to be running on every host that is to be protected. If this is not done, then any host without an agent is still vulnerable to attack. Further enhancements should be made to the rule learning technique by RBF for detecting unknown attacks. Future work includes the development of a detection scheme that is more general and able to handle all types of data, both categorical and numerical.

References

[1] Andrews and Geva, "Intrusion Detection Rules and Networks," in Proceedings of the Rule Extraction from Trained Artificial Neural Networks Workshop, Artificial Intelligence and Simulation of Behaviour, Brighton, UK, 1996.
[2] E. Eskin, A. Arnold, M. Prerau, L. Portnoy, and S. Stolfo, "A geometric framework for unsupervised anomaly detection: Detecting intrusions in unlabeled data," in Applications of Data Mining in Computer Security, Chapter 4, D. Barbara and S. Jajodia (editors), Kluwer, ISBN 1-4020-7054-3, 2002.
[3] G. Kayacik, N. Zincir-Heywood, and M. Heywood, "On the Capability of an SOM based Intrusion Detection System," in Proceedings of the International Joint Conference on Neural Networks, 2003.
[4] P. Kazienko and P. Dorosz, "Intrusion Detection Systems (IDS) Part I (network intrusions; attack symptoms; IDS tasks; and IDS architecture)," http://www.windowsecurity.com, Apr 07, 2003.
[5] Kishore and M. V. Rao, "A novel based algorithm for radial basis function network," in Proceedings of the IEEE International Conference on Neural Networks, volume 489, pp. 2007-2011, 1997.
[6] C. Kruegel and T. Toth, "Using decision trees to improve signature-based intrusion detection," in Proc. Int'l Symp. on Recent Advances in Intrusion Detection, 2003.
[7] K. McGarry, S. Wermter, and J. MacIntyre, "Knowledge extraction from local function networks," in Seventeenth International Joint Conference on Artificial Intelligence, volume 2, pp. 765-770, Seattle, USA, August 4th-10th, 2001.
[8] S. Mukkamala, G. I. Janoski, and A. H. Sung, "Intrusion detection using support vector machines," in Proceedings of the High Performance Computing Symposium - HPC 2002, pp. 178-183, San Diego, CA, USA, April 2002.
[9] M. Sabhnani and G. Serpen, "Why Machine Learning Algorithms Fail in Misuse Detection on KDD Intrusion Detection Data Set," Journal of Intelligent Data Analysis, 2004.
[10] C. Sinclair, L. Pierce, and S. Matzner, "An application of machine learning to Network Intrusion Detection," http://www.citeseer.nj.nec.com, 2004.
[11] T. Verwoerd and R. Hunt, "Intrusion Detection Techniques and Approaches," http://www.elsevier.com, 2001.
[12] Z. Zhang, J. Li, C. N. Manikopoulos, J. Jorgenson, and J. Ucles, "HIDE: A hierarchical network intrusion detection system using statistical preprocessing and neural network classification," in Proceedings of the 2001 IEEE Workshop on Information Assurance and Security, pp. 85-90, 2002.

Authors Profile

Selvakani Kandeeban received the MCA degree from Manonmanium Sundaranar University and the M.Phil degree from Madurai Kamaraj University. Presently she is working as Professor & Head of the MCA Department at Francis Xavier Engineering College, Tirunelveli. Previously she was with Jaya College of Engineering and Technology as an Assistant Professor in the MCA Department. She has presented 4 papers at national conferences and 1 paper at an international conference, and has published 1 paper in a national journal and 8 papers in international journals. She is currently pursuing her PhD degree in Network Security.

Dr. R. S. Rajesh received his B.E and M.E degrees in Electronics and Communication Engineering from Madurai Kamaraj University, Madurai, India in 1988 and 1989 respectively, and completed his Ph.D in Computer Science and Engineering at Manonmaniam Sundaranar University in 2004. In September 1992 he joined Manonmaniam Sundaranar University, where he is currently working as a Reader in the Computer Science and Engineering Department.