
EFFECTS OF CYBER SECURITY

KNOWLEDGE ON ATTACK DETECTION


A Project Report
Submitted By

ABHIPRAY PAUL (U12CS503)

Submitted to the

FACULTY OF COMPUTER SCIENCE ENGINEERING


In partial fulfillment of the requirements
for the award of the degree of

BACHELOR OF ENGINEERING
IN
COMPUTER SCIENCE ENGINEERING

Department of Computer Science Engineering


Bharath University
BHARATH INSTITUTE OF HIGHER EDUCATION AND RESEARCH
(Declared as Deemed to be University under section 3 of UGC Act, 1956)

CHENNAI 600073

DECLARATION
I hereby declare that the project report entitled EFFECTS OF CYBER SECURITY
KNOWLEDGE ON ATTACK DETECTION submitted to Bharath University, Chennai in
partial fulfilment of the requirement for the award of degree of Bachelor of Technology in
Computer Science Engineering, is the record of the original work carried out by me under the
guidance of Prof. Shiva Raman. I further declare that, the results of the work have not been
submitted to any other university or institution for the award of any degree or diploma.

Place: Chennai
Date:

Signature of the student

ABSTRACT
The main aspects outlined are achieving a near-real-time mode, event analysis and prognosis
mechanisms, and security and impact assessment. We use an Intrusion Detection System (IDS)
to examine how individuals with or without knowledge of cyber security detect malicious
events and declare an attack based on a sequence of network events. The results indicate that
greater knowledge of cyber security facilitated the correct detection of malicious events and
decreased the false classification of benign events as malicious. Although knowledge of cyber
security helps in the detection of malicious events, situated knowledge regarding the specific
network at hand is needed to make accurate detection decisions.

CHAPTER 1
INTRODUCTION
1.1.1 THE OSI MODEL
A protocol is, simply put, a set of rules for communication. In order to get data over the
network, for instance an E-mail from your computer to some computer at the other end of the
world, lots of different hard- and software needs to work together.
All these pieces of hardware and the different software programs speak different
languages. Imagine your E-mail program: it is able to talk to the computer operating system,
through a specific protocol, but it is not able to talk to the computer hardware. We need a
special program in the operating system that performs this function. In turn, the computer
needs to be able to communicate with the telephone line or other Internet hook-up method.
And behind the scenes, network connection hardware needs to be able to communicate in
order to pass your E-mail from one appliance to the other, all the way to the destination
computer.
All these different types of communication protocols are classified in 7 layers, which
are known as the Open Systems Interconnection Reference Model, the OSI Model for short.
For easy understanding, this model is reduced to a 4-layer protocol description, as described in
the table below:

Layer name              Layer Protocols
Application layer       HTTP, DNS, SMTP, POP, ...
Transport layer         TCP, UDP
Network layer           IP, IPv6
Network access layer    PPP, PPPoE, Ethernet

Table 1.1 The simplified OSI Model


Each layer can only use the functionality of the layer below; each layer can only export
functionality to the layer above. In other words: layers communicate only with adjacent layers.
Let's take the example of your E-mail message again: you enter it through the application

layer. In your computer, it travels down the transport and network layer. Your computer puts it
on the network through the network access layer. That is also the layer that will move the
message around the world. At the destination, the receiving computer will accept the message
through its own network layer, and will display it to the recipient using the transport and
application layer.
1.1.2. SOME POPULAR NETWORKING PROTOCOLS
Linux supports many different networking protocols. We list only the most important:
1.1.2.1. TCP/IP
The Transmission Control Protocol and the Internet Protocol are the two most popular ways of
communicating on the Internet. A lot of applications, such as your browser and E-mail
program, are built on top of this protocol suite.
Very simply put, IP provides a solution for sending packets of information from one machine
to another, while TCP ensures that the packets are arranged in streams, so that packets from
different applications don't get mixed up, and that the packets are sent and received in the
correct order.
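As a minimal sketch of how an application-layer program hands data to the TCP/IP stack, the C program below opens a TCP connection through the Berkeley sockets API and sends a short request. The address 192.0.2.10, port 80 and the request line are placeholder values chosen only for the example.

/* Minimal TCP client sketch: the application layer writes bytes,
 * TCP delivers them as an ordered stream, IP routes the packets. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);        /* TCP socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(80);                       /* example port */
    inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);  /* example address */

    if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    const char *msg = "HEAD / HTTP/1.0\r\n\r\n";      /* application-layer data */
    send(fd, msg, strlen(msg), 0);                     /* TCP preserves ordering */

    char buf[512];
    ssize_t n = recv(fd, buf, sizeof buf - 1, 0);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s\n", buf);
    }
    close(fd);
    return 0;
}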
A good starting point for learning more about TCP and IP is in the following documents:

Man 7 ip: Describes the IPv4 protocol implementation on Linux (version 4 currently
being the most widespread edition of the IP protocol).

Man 7 tcp: Implementation of the TCP protocol.

RFC793, RFC1122, RFC2001 for TCP, and RFC791, RFC1122 and RFC1112 for IP.

The Request for Comments documents contain the descriptions of networking standards,
protocols, applications and implementation. These documents are managed by the Internet
Engineering Task Force, an international community concerned with the smooth operation of
the Internet and the evolution and development of the Internet architecture.
1.1.2.2 TCP/IPv6

Nobody expected the Internet to grow as fast as it has. IP proved to have quite some
disadvantages when a really large number of computers is in a network, the most important
being the limited availability of unique addresses to assign to each participating machine.
Thus, IP version 6 was devised to meet the needs of today's Internet.
Unfortunately, not all applications and services support IPv6 yet. A migration is
currently being set in motion in many environments that can benefit from an upgrade to IPv6.
For some applications the old protocol is still used; for applications that have been reworked,
the new version is already active. So when checking your network configuration, it might
sometimes be a bit confusing, since all kinds of measures can be taken to hide one protocol
from the other so that the two don't mix up connections.
More information can be found in the following documents:

Man 7 ipv6: the Linux IPv6 protocol implementation.

RFC1883 describing the IPv6 protocol.

1.1.2.3. PPP, SLIP, PLIP, PPPOE


The Linux kernel has built-in support for PPP (Point-to-Point Protocol), SLIP (Serial
Line IP), PLIP (Parallel Line IP) and PPP over Ethernet (PPPoE). PPP is the most popular way
individual users access their ISP (Internet Service Provider), although in densely populated
areas it is often being replaced by PPPoE, the protocol used for ADSL (Asymmetric Digital
Subscriber Line) connections.
Most Linux distributions provide easy-to-use tools for setting up an Internet connection. The
only thing you basically need is a username and password to connect to your Internet Service
Provider (ISP), and a telephone number in the case of PPP. These data are entered in the
graphical configuration tool, which will likely also allow for starting and stopping the
connection to your provider.
1.1.2.4. ISDN

The Linux kernel has built-in ISDN capabilities. Isdn4linux controls ISDN PC cards and can
emulate a modem with the Hayes command set ("AT" commands). The possibilities range
from simply using a terminal program to full connection to the Internet.
1.1.2.5. AppleTalk
AppleTalk is the name of Apple's internetworking stack. It allows a peer-to-peer
network model which provides basic functionality such as file and printer sharing. Each
machine can simultaneously act as a client and a server, and the software and hardware
necessary are included with every Apple computer.
Linux provides full AppleTalk networking. Netatalk is a kernel-level implementation of
the AppleTalk Protocol Suite, originally for BSD-derived systems. It includes support for
routing AppleTalk, serving UNIX and AFS file systems using AppleShare, and serving UNIX
printers and accessing AppleTalk printers.
1.1.2.6. SMB/NMB
For compatibility with MS Windows environments, the Samba suite, including support
for the NMB and SMB protocols, can be installed on any UNIX-like system. The Server
Message Block protocol (also called Session Message Block, NetBIOS or LanManager
protocol) is used on MS Windows 3.11, NT, 95/98, 2K and XP to share disks and printers.
The basic functions of the Samba suite are: sharing Linux drives with Windows
machines, accessing SMB shares from Linux machines, sharing Linux printers with Windows
machines and sharing Windows printers with Linux machines.
Most Linux distributions provide a samba package, which does most of the server
setup and starts up smbd, the Samba server, and nmbd, the netbios name server, at boot time
by default. Samba can be configured graphically, via a web interface or via the command line
and text configuration files. The daemons make a Linux machine appear as an MS Windows
host in an MS Windows My Network Places/Network Neighbourhood window; a share from a
Linux machine will be indistinguishable from a share on any other host in an MS Windows
environment.

More information can be found at the following locations:

Man smb.conf: describes the format of the main Samba configuration file.

The Samba Project Documentation (or check your local samba.org mirror) contains an
easy to read installation and testing guide, which also explains how to configure your
Samba server as a Primary Domain Controller. All the man pages are also available
here.

1.1.2.7. Miscellaneous protocols


Linux also has support for Amateur Radio, WAN internetworking (X25, Frame Relay,
ATM), InfraRed and other wireless connections, but since these protocols usually require
special hardware, we won't discuss them in this document.
1.2 OVERVIEW-NETWORK SECURITY

Network security consists of the policies adopted to prevent and monitor unauthorized
access, misuse, modification, or denial of a computer network and network-accessible
resources. Network security involves the authorization of access to data in a network, which is
controlled by the network administrator. Users choose or are assigned an ID and password or
other authenticating information that allows them access to information and programs within
their authority.

Network security covers a variety of computer networks, both public and private, that
are used in everyday jobs; conducting transactions and communications among businesses,
government agencies and individuals. Networks can be private, such as within a company, and
others which might be open to public access. Network security is involved in organizations,
enterprises, and other types of institutions. It does as its title explains: It secures the network,
as well as protecting and overseeing operations being done. The most common and simple
way of protecting a network resource is by assigning it a unique name and a corresponding
password.

CHAPTER 2
LITERATURE SURVEY

2.1 INTRODUCTION
A literature survey is an important step in the software development process. Before
developing the tool it is necessary to determine the time factor, economy and company
strength. Once these things are satisfied, the next step is to determine which operating system
and language can be used for developing the tool. Once the programmers start building the
tool, they need a lot of external support. This support can be obtained from senior
programmers, from books or from websites. Before building the system, the above
considerations are taken into account for developing the proposed system.
A major part of the project development effort is devoted to surveying all the
requirements for developing the project. Before developing the tools and the associated design,
it is necessary to determine and survey the time factor, resource requirements, manpower,
economy, and company strength. Once these things are satisfied and fully surveyed, the next
step is to determine the software specifications of the respective system, such as what type of
operating system the project requires and what software is needed to proceed with the next
steps, such as developing the tools and the associated operations.
We used a logic-based approach to efficiently capture and utilize experts' experience,
which can be categorized as a kind of knowledge-based intrusion detection. However,
knowledge-based intrusion detection relies on the establishment of a knowledge base created
from cyber-attack signatures, and building a comprehensive knowledge base that covers all
variations of attacks is impractical in large-scale networks, since knowledge engineering can
be a time-consuming process. Therefore, how to effectively leverage a limited amount of
human experience became the second focus of our research. In this paper, we present the
logic-based approach under an experience-driven framework, followed by the concept of
experience relaxation for mitigating the limitation of knowledge-based intrusion detection.
Our experimental results showed a significant improvement in knowledge base coverage after
applying experience relaxation.
A Cognitive Task Analysis (CTA) was performed to investigate the workflow,
decision processes, and cognitive demands of information assurance (IA) analysts responsible
for defending against attacks on critical computer networks. We interviewed and observed 41
IA analysts responsible for various aspects of cyber defense in seven organizations within the
US Department of Defense (DOD) and industry. Results are presented as workflows of the
analytical process and as attribute tables including analyst goals, decisions, required
knowledge, and obstacles to successful performance. We discuss how IA analysts progress
through three stages of situational awareness and how visual representations are likely to
facilitate cyber defense situational awareness.
Following Chi's (2006) view on the characteristics of expertise and the relative view of
expertise (Chase & Simon, 1973), a cyber-security analyst may be regarded as an expert with
high levels of proficiency in information and network security when compared to a novice
who is less knowledgeable. The term novice is used here in a generic manner, referring to a
wide spectrum of individuals with relatively no knowledge of cyber security. The term novice
also suggests that with proper training and enough experience, individuals can become experts.
More specifically, the relative view of expertise postulates that an expert is not an expert due
to some innate talent or cognitive ability that the novice cannot possess. Rather, a novice can
become an expert with proper training. However, it is possible that some aspects of expertise
depend on the ability to tune general cognitive skills, like sustained attention and information
synthesis, to a specific context, providing contextualized ways to access and deploy
domain-specific knowledge (Perkins & Salomon, 1989).
Asgharpour, Liu, and Camp (2007) showed how individuals with various levels of
knowledge in information security and years of experience may have different mental models
of cyber security. Higher proficiency in information security also suggests better performance
in cyber detection than lower levels of knowledge. Experienced individuals are expected to
make better decisions than inexperienced ones. An expert is expected to detect features and
meaningful patterns that a novice cannot (Shanteau, 1987). Knowledge and previous
experience should make an expert more sensitive to cues that are overlooked by a novice.
Careful attention to these cues can foster the identification of patterns that constitute a problem
and should promote the choice of appropriate courses of action. Such expertise appears to be
domain specific, and it is built up through experience and intensive practice (Randel, Pugh, &
Reed, 1996). However, expertise may be domain limited and context dependent. Expertise can
also make individuals more rigid and result in problematic adaptation in more dynamic
environments (Chi, 2006). Furthermore, depending only on domain knowledge and neglecting
general cognitive skills and heuristics can harm the ability of experts to mitigate atypical
problems.
Goodall et al. (2009) studied cyber security analysts and the practical aspects of
intrusion detection. Their work particularly highlights the expertise required to successfully
accomplish the intrusion detection task. It comprises domain knowledge in information and
network security, and also local knowledge grounded in the analyst's unique environment. In
general, domain knowledge is the fundamental knowledge obtained through long and
deliberate learning (Ericsson & Lehmann, 1996). It includes theoretical knowledge that the
expert acquires through formal education, training, or certification (Chi, 2006).
Domain knowledge also includes practical knowledge learned through hands-on
practice and experience with tools, methods of operation, and workflows. Domain knowledge
acquired through formal learning processes lays the essential foundation of requisite
knowledge for the work of the cyber security analyst. However, domain knowledge may not
be enough to detect cyber-attacks in operational environments. In addition to domain
knowledge, the analyst may need situated knowledge (Goodall et al., 2004, 2009). Situated
knowledge is implicit, hard to articulate, and organization-dependent (Schmidt & Hunter,
1993). This type of knowledge tends to be dynamic, and the expert acquires it through
continued interactions with a specific operating environment. In the context of information
and network security, effectively learning the nuances of a particular network is often achieved
by tuning and adjusting the IDS so it will detect threats and meet the organization's security
needs without standing in the way of legitimate network users.
Thus, for effective threat detection in a network, the analyst should know how to
operate an IDS in general and have experience in using the IDS in that specific network.
Given that cyber-attacks are reflected in abnormal network activity, an analyst should
be able to define normal and abnormal network activity and utilize these definitions to detect
attacks. As what can be considered normal network activity in one environment may be
indicative of malicious activity in another, intrusion detection depends on the ability to
integrate domain and situated knowledge in a dynamic environment (Yurcik et al., 2003).

2.2 EXISTING SYSTEM

Many existing Intrusion Detection Systems (IDSs) examine network packets
individually within both the web server and the database system.

However, very little work has been done on multi-tier Anomaly Detection (AD)
systems that generate models of network behavior for both web and database
network interactions.

In such multi-tier architectures, the back-end database server is often protected behind a
firewall while the web servers are remotely accessible over the Internet.

Unfortunately, although they are protected from direct remote attacks, the back-end
systems are susceptible to attacks that use web requests as a means to exploit the back end.

2.2.1 DISADVANTAGES

Relies on a firewall program.

Degrades system performance.

Slower processing.

2.3 PROPOSED SYSTEM

It is used to detect attacks in multi-tiered web services.

It creates normality models of isolated user sessions that include both the web
front-end (HTTP) and back-end (file or SQL) network transactions.

A lightweight virtualization technique is used to assign each user's web session to a
dedicated container, an isolated virtual computing environment.

The container ID is used to accurately associate each web request with the subsequent DB
queries. Thus, Double Guard can build a causal mapping profile by taking both the
web server and DB traffic into account (a small sketch of this check appears after this list).

We implemented the Double Guard container architecture using OpenVZ, and performance
testing shows that it has reasonable performance overhead and is practical for most
web applications.

The container-based web architecture not only fosters the profiling of causal mapping,
but it also provides an isolation that prevents future session-hijacking attacks.

Within a lightweight virtualization environment, we ran many copies of the web server
instances in different containers so that each one was isolated from the rest.
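As a rough, hypothetical sketch of the causal-mapping idea (not the actual Double Guard implementation), the C fragment below keeps a per-container record of the last web request and flags any DB query arriving in a container with no matching request. The structure names, the fixed-size table and the sample requests are illustrative assumptions only.

/* Hypothetical causal-mapping check: every DB query must be attributable
 * to a web request observed in the same container/session. */
#include <stdio.h>
#include <string.h>

#define MAX_SESSIONS 64

struct session {
    int  container_id;          /* one container per user session */
    char web_request[128];      /* last HTTP request seen in this container */
    int  has_web_request;       /* 0 = no request observed yet */
};

static struct session sessions[MAX_SESSIONS];

/* Record an incoming HTTP request for a container. */
void record_web_request(int container_id, const char *request)
{
    struct session *s = &sessions[container_id % MAX_SESSIONS];
    s->container_id = container_id;
    strncpy(s->web_request, request, sizeof s->web_request - 1);
    s->has_web_request = 1;
}

/* A DB query with no matching web request in its container is suspicious
 * (e.g. a direct DB attack that bypassed the web tier). */
int db_query_is_suspicious(int container_id, const char *query)
{
    struct session *s = &sessions[container_id % MAX_SESSIONS];
    if (!s->has_web_request || s->container_id != container_id) {
        printf("ALERT: query \"%s\" has no matching web request\n", query);
        return 1;
    }
    return 0;
}

int main(void)
{
    record_web_request(7, "GET /index.php?id=3");
    db_query_is_suspicious(7, "SELECT * FROM items WHERE id=3");  /* mapped */
    db_query_is_suspicious(9, "SELECT * FROM users");             /* flagged */
    return 0;
}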

2.3.1 ADVANTAGES

Able to identify a wide range of attacks.

Easy to find the Direct DB Attack.

Lightweight virtualization technique.

CHAPTER 3

SYSTEM ARCHITECTURE
3.1 INTRODUCTION
Design is a multi-step process that focuses on data structures, software architecture,
procedural details, algorithms, etc., and the interfaces between modules. The design process
also translates the requirements into a presentation of the software that can be assessed for
quality before coding begins. Computer software design changes continuously as new
methods, better analysis and broader understanding evolve. Software design is at a relatively
early stage in its evolution.
Therefore, software design methodology lacks the depth, flexibility and quantitative
nature that are normally associated with more classical engineering disciplines. However,
techniques for software design do exist, criteria for design quality are available and design
notation can be applied.
3.2 DESIGN STRUCTURE
3.2.1 INPUT DESIGN
The input design is the link between the information system and the user. It comprises
the specifications and procedures for data preparation, and the steps necessary to put
transaction data into a usable form for processing; this can be achieved by having the computer
read data from a written or printed document, or by having people key the data directly into
the system. The design of input focuses on controlling the amount of input required,
controlling errors, avoiding delay, avoiding extra steps and keeping the process simple. The
input is designed in such a way that it provides security and ease of use while retaining
privacy. Input design considered the following things:
What data should be given as input?
How should the data be arranged or coded?
The dialog to guide the operating personnel in providing input.
Methods for preparing input validations and steps to follow when errors occur.

3.2.2 OUTPUT DESIGN


A quality output is one that meets the requirements of the end user and presents the
information clearly. In any system, the results of processing are communicated to the users and
to other systems through outputs. In output design it is determined how the information is to be
displayed for immediate need, as well as the hard copy output. It is the most important and
direct source of information for the user. Efficient and intelligent output design improves the
system's relationship with the user and helps decision-making.
The output form of an information system should accomplish one or more of the following
objectives:
Convey information about past activities, current status or projections of the future.
Signal important events, opportunities, problems, or warnings.
Trigger an action.
Confirm an action.

Fig. Network overview: an attacker workstation and a client exchange data packets over the
Internet; attack packets and legitimate packets pass through a modem and firewall before
reaching the original system.

CHAPTER 4
SYSTEM SPECIFICATION

4.1 HARDWARE SPECIFICATION

System        : Pentium IV 2.4 GHz
Hard Disk     : 40 GB
Floppy Drive  : 1.44 MB
Monitor       : 15" VGA Colour
Mouse         : Logitech
RAM           : 512 MB
Mobile        : Android

4.2 SOFTWARE SPECIFICATION

Operating system : Ubuntu 12.04 LTS
Software         : NS2 2.34

CHAPTER 5
SYSTEM IMPLEMENTATION
5.1 PROJECT MODULES

5.1.1 User Interface
5.1.2 Query Classifier
5.1.3 Access Control Manager
5.1.4 Attack Detection

5.1.1 USER INTERFACE


This module is responsible for accepting user queries and generating HTTP requests. It
is also responsible for displaying the query results to the user after the query has been
executed by the web server.
5.1.2 QUERY CLASSIFIER

The access control manager utilizes anomaly scores sent by the anomaly detection
module to detect attacks against back-end SQL databases by setting privilege levels.
This module is deployed between web-based applications and the back-end database
server.

The SQL queries sent by the applications are captured and sent to the IDS for analysis.
The query classifier module parses each incoming SQL query, classifies it using the
C4.5 algorithm, and then sends it to the fuzzy anomaly detection module, which applies
fuzzy rules to generate a score and a level of anomaly for the query (a simplified sketch
of this scoring step follows this list).

These anomaly scores are sent to the access control manager, which forwards the query
to suitable web servers based on the anomaly scores.
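The following C sketch is only an illustrative stand-in for the classifier and fuzzy scoring stage described above: instead of C4.5 and fuzzy rules, it counts a few suspicious SQL tokens, maps the total to a score between 0 and 1, and derives a coarse anomaly level. The token list and thresholds are assumptions made for the example, not values from the report.

/* Simplified stand-in for query classification + fuzzy anomaly scoring. */
#include <stdio.h>
#include <string.h>

enum level { LEVEL_LOW, LEVEL_MEDIUM, LEVEL_HIGH };

static int contains(const char *query, const char *needle)
{
    return strstr(query, needle) != NULL;
}

/* Crude anomaly score in [0,1] from suspicious tokens. */
double anomaly_score(const char *query)
{
    double score = 0.0;
    if (contains(query, "UNION"))  score += 0.4;
    if (contains(query, "--"))     score += 0.3;   /* SQL comment */
    if (contains(query, "' OR '")) score += 0.4;   /* tautology pattern */
    if (contains(query, "DROP"))   score += 0.5;
    return score > 1.0 ? 1.0 : score;
}

enum level anomaly_level(double score)
{
    if (score >= 0.7) return LEVEL_HIGH;
    if (score >= 0.3) return LEVEL_MEDIUM;
    return LEVEL_LOW;
}

int main(void)
{
    const char *q = "SELECT * FROM users WHERE name='' OR ''='' --";
    double s = anomaly_score(q);
    printf("score=%.2f level=%d\n", s, (int)anomaly_level(s));
    return 0;
}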

5.1.3 ACCESS CONTROL MANAGER

The architecture of this anomaly detection system necessitates the existence of an
access control manager between the query classification component and the web servers.

This manager is utilized when a malicious web request that was let through by the
query classifier reaches the web server; the request can then be checked for its privilege
level using the anomaly score.

The access control manager can choose to update the privilege levels of the web request
to control malicious requests. This process involves characterizing the incoming anomaly
using fuzzy rules, generating update messages, and finally updating the access privilege
levels to reflect the level of anomaly.

Three access levels, namely privileged user level, application programmer level and
naive user level, are used. Queries with privileged user and application programmer levels
are sent to the smart server, whereas queries with naive user levels are sent to the dumb
server (see the routing sketch after this list).
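A minimal sketch of the routing decision described above, assuming the anomaly score has already been computed; the thresholds and server names (smart-server, dumb-server) are placeholders, not values taken from the report.

/* Map an anomaly score to a privilege level and route accordingly. */
#include <stdio.h>

enum privilege { PRIV_USER, APP_PROGRAMMER, NAIVE_USER };

enum privilege privilege_from_score(double anomaly_score)
{
    if (anomaly_score < 0.3) return PRIV_USER;
    if (anomaly_score < 0.7) return APP_PROGRAMMER;
    return NAIVE_USER;
}

const char *route_query(double anomaly_score)
{
    switch (privilege_from_score(anomaly_score)) {
    case PRIV_USER:
    case APP_PROGRAMMER:
        return "smart-server";   /* full database access */
    default:
        return "dumb-server";    /* restricted access */
    }
}

int main(void)
{
    printf("score 0.10 -> %s\n", route_query(0.10));
    printf("score 0.85 -> %s\n", route_query(0.85));
    return 0;
}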

5.1.4 ATTACK DETECTION

Once the model is built, it can be used to detect malicious sessions. For our static
website testing, we used the production website, which has regular visits of around 50-100
sessions per day. We collected regular traffic for this production site, used the listed attack
tools to manually launch attacks against the testing website, and mixed these attack sessions
with the normal traffic obtained during the training phase.
1. Privilege Escalation Attack: For privilege escalation attacks, according to our
previous discussion, the attacker visits the website as a normal user, aiming to
compromise the web server process or exploit vulnerabilities to bypass authentication.
At that point, the attacker issues a set of privileged (e.g., admin-level) DB queries to
retrieve sensitive information.

2. Hijack Future Session Attack (Web Server aimed attack): Out of the four classes
of attacks we discuss, session hijacking is the most common, as there are many
examples that exploit the vulnerabilities of Apache, IIS, PHP, ASP, and cgi.

3. Injection Attack: Here we describe how our approach can detect the SQL injection
attacks. To illustrate with an example, we wrote a simple PHP login page that was
vulnerable to SQL injection attack. As we used a legitimate username and password to
successfully log in, we could include the HTTP request in the second line.

4. Direct DB Attack: If an attacker launches this type of attack, it will easily be
identified by our approach. First of all, according to our mapping model, DB queries
will not have any matching web requests during this type of attack. On the other hand,
as this traffic will not go through any containers, it will be captured because it appears to
differ from the legitimate traffic that goes through the containers. In our experiments,
we generated queries and sent them to the databases without using the web server
containers. Our approach readily captured these queries. Snort and GreenSQL did not
report alerts for this attack.

CHAPTER 6
DATA FLOW AND UML DIAGRAMS
6.1 DATA FLOW DIAGRAM

Fig. Data flow diagram: the client sends a packet request, the firewall classifies the original
and attacked packets, and the server receives the packet.

6.2 CLASS DIAGRAM

Fig. Class Diagram.

6.3 SEQUENCE DIAGRAM

Participants: Client, Firewall, Server.

1: Send packet request
2: Send packet request
3: Send packet
4: Classify original packet and attacked packet
5: Send requested packet to client

Fig. Sequence Diagram.

6.4 COLLABORATION DIAGRAM

The collaboration diagram shows the same numbered messages exchanged among the Client,
Firewall and Server: (1) send packet request, (2) send packet request, (3) send packet,
(4) classify original packet and attacked packet, (5) send requested packet to client.

Fig. Collaboration Diagram.

CHAPTER 7

SOFTWARE ENVIRONMENT
7.1 INTRODUCTION TO C
C is a general-purpose, high-level language that was originally developed by Dennis
M. Ritchie to develop the UNIX operating system at Bell Labs. C was first implemented on
the DEC PDP-11 computer in 1972.
In 1978, Brian Kernighan and Dennis Ritchie produced the first publicly available
description of C, now known as the K&R standard.
The UNIX operating system, the C compiler, and essentially all UNIX application
programs have been written in C. C has now become a widely used professional language for
various reasons:

Easy to learn

Structured language

It produces efficient programs.

It can handle low-level activities.

It can be compiled on a variety of computer platforms.

7.2 FACTS ABOUT C

C was invented to write an operating system called UNIX.

C is a successor of the B language, which was introduced around 1970.

The language was formalized in 1989 by the American National Standards Institute
(ANSI).

The UNIX OS was almost totally written in C by 1973.

Today C is the most widely used and popular system programming language.

Much state-of-the-art software has been implemented using C.

Today's most popular Linux OS and the MySQL RDBMS have been written in C.

Why use C?

C was initially used for system development work, in particular the programs that
make up the operating system. C was adopted as a system development language because it
produces code that runs nearly as fast as code written in assembly language. Some examples
of the use of C are:

Operating Systems

Language Compilers

Assemblers

Text Editors

Print Spoolers

Network Drivers

Modern Programs

Databases

Language Interpreters

Utilities

7.3 HISTORY OF C LANGUAGE


The C language has evolved from three structured languages: ALGOL, BCPL and the B
language. It uses many concepts from these languages and introduced new concepts such as
data types, struct, and pointers.
In 1989, the language was formalised by the American National Standards Institute (ANSI);
this version is known as C89. In 1990, a version of the C language was approved by the
International Organization for Standardization (ISO); that version is commonly referred to as
C90 and is essentially the same language as C89.

Fig. History of C language.


7.4 FEATURES OF C LANGUAGE

It is a robust language with a rich set of built-in functions and operators that can be used to
write any complex program.

The C compiler combines the capabilities of an assembly language with the features of a
high-level language.

Programs written in C are efficient and fast. This is due to its variety of data types and
powerful operators.

It is many times faster than BASIC.

C is highly portable; this means that programs, once written, can be run on other machines
with little or no modification.

Another important feature of a C program is its ability to extend itself.

A C program is basically a collection of functions that are supported by the C library. We can
also create our own functions and add them to the C library.

C language is the most widely used language in operating systems and embedded system
development today.

Fig. Features of C Language


A function is a block of code that performs a particular task. There are times when we
need to write a particular block of code more than once in our program. This may lead to
bugs and irritation for the programmer. C language provides an approach in which you can
declare and define a group of statements once and then call and use them whenever required.
This saves both time and space.
C functions can be classified into two categories, as the short example after this list illustrates:

Library functions

User-defined functions
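A short example showing both categories in one program; the function name square and the values used are arbitrary choices for illustration.

#include <stdio.h>

/* User-defined function: computes the square of an integer. */
int square(int n)
{
    return n * n;
}

int main(void)
{
    int x = 7;
    /* printf is a library function declared in <stdio.h>. */
    printf("The square of %d is %d\n", x, square(x));
    return 0;
}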

CHAPTER 8
TESTING

8.1 TESTING PROCESS


The purpose of testing is to discover errors. Testing is the process of trying to
discover every conceivable fault or weakness in a work product. It provides a way to
check the functionality of components, sub-assemblies, assemblies and/or a finished
product. It is the process of exercising software with the intent of ensuring that the
software system meets its requirements and user expectations and does not fail in an
unacceptable manner. There are various types of test; each test type addresses a
specific testing requirement.

8.2 TYPES OF TESTS


8.2.1 Unit Testing
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly, and that program inputs produce valid outputs.
All decision branches and internal code flow should be validated. It is the testing of
individual software units of the application; it is done after the completion of an
individual unit and before integration. This is structural testing that relies on knowledge
of the unit's construction and is invasive. Unit tests perform basic tests at component level
and test a specific business process, application, and/or system configuration. Unit tests
ensure that each unique path of a business process performs accurately to the
documented specifications and contains clearly defined inputs and expected results.
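As a small illustration of unit testing a single unit in isolation, the sketch below exercises one hypothetical function with the standard assert macro; the function under test and its inputs are example choices, not part of the project code.

#include <assert.h>
#include <stdio.h>

/* Unit under test: a single, isolated function. */
static int square(int n)
{
    return n * n;
}

/* One assertion per input class, exercising each decision path. */
static void test_square(void)
{
    assert(square(0) == 0);    /* boundary value */
    assert(square(3) == 9);    /* positive input */
    assert(square(-4) == 16);  /* negative input */
}

int main(void)
{
    test_square();
    printf("All unit tests passed.\n");
    return 0;
}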


Fig. 8.1 Unit Testing

8.2.2 Integration Testing


Integration tests are designed to test integrated software components to
determine if they actually run as one program. Testing is event driven and is more
concerned with the basic outcome of screens or fields. Integration tests demonstrate
that although the components were individually satisfactory, as shown by successful
unit testing, the combination of components is correct and consistent. Integration
testing is specifically aimed at exposing the problems that arise from the combination
of components.

8.2.3 Functional Testing


Functional tests provide systematic demonstrations that the functions tested are
available as specified by the business and technical requirements, system
documentation and user manuals.
Functional testing is centered on the following items:
Valid Input: identified classes of valid input must be accepted.
Invalid Input: identified classes of invalid input must be rejected.
Functions: identified functions must be exercised.
Output: identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements,
key functions, or special test cases. In addition, systematic coverage pertaining to
identified business process flows, data fields, predefined processes, and successive
processes must be considered for testing. Before functional testing is complete,
additional tests are identified and the effective value of current tests is determined.

8.2.4 System Testing


System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test. System
testing is based on process descriptions and flows, emphasizing pre-driven process
links and integration points.

8.2.5 White Box Testing


White box testing is testing in which the software tester has knowledge of the
inner workings, structure and language of the software, or at least its purpose. It is used
to test areas that cannot be reached from a black box level.

Fig. 8.2 White box Testing

8.2.6 Black Box Testing


Black box testing is testing the software without any knowledge of the inner
workings, structure or language of the module being tested. Black box tests, like most
other kinds of tests, must be written from a definitive source document, such as a
specification or requirements document. It is testing in which the software under test is
treated as a black box: you cannot see into it. The test provides inputs and responds to
outputs without considering how the software works.

Fig. 8.3 Black box Testing

8.3 TEST STRATEGY AND APPROACH


Field testing will be performed manually and functional tests will be written in
detail.

8.3.1 Test Objectives


All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.

Features to be tested:
Verify that the entries are of the correct format.
No duplicate entries should be allowed.
All links should take the user to the correct page.


8.3.2 Integration Testing


Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by
interface defects.
The task of the integration test is to check that components or software
applications interact without error.

8.3.3 Acceptance Testing


User Acceptance Testing is a critical phase of any project and requires
significant participation by the end user. It also ensures that the system meets the
functional requirements.

8.4 ALPHA TESTING


In software development, an alpha test is a test among the development teams to confirm
that the product works. Originally, the term alpha test meant the first phase of testing
in a software development process. The first phase includes unit testing, component
testing, and system testing. It also enables us to test the product on the lowest common
denominator machines to make sure download times are acceptable and preloaders
work.

8.5 BETA TESTING


In software development, a beta test is the second phase of software testing in
which a sampling of the intended audience tries the product out. Beta testing can be
considered "pre-release testing." Beta test versions of software are now distributed to
curriculum specialists and teachers to give the program a "real-world" test.

CHAPTER 9
SYSTEM OUTPUT
9.1 SOURCE CODE
9.2 SCREENSHOTS

CHAPTER 10
CONCLUSION AND FUTURE WORK
10.1 CONCLUSION
Expertise and practical knowledge play an important role in triage analysis: the task of
classifying a network eventas a threat or not and the connections between these small decisions
and the overall attack decisions based on a sequence of network events.A security analyst needs
situated and domain knowledge to benefit from all available data sources and visualizations.
Furthermore,situated knowledge should be considered in an analysts trainingprocess. In addition
to theoretical knowledge and practical experience, analysts should also be trained to quickly
learn and adapt tonovel and dynamic environments. An analyst should constantlyupdate and
expand her situated knowledge regarding the operational environment.
Such information regarding the importanceand function of servers in the network is rarely
systematically collected into a repository and even when

collected, it is static

andbecomesoutdated ratherquickly as the network constantly changeswith new equipment being


added and the existing equipment beingmodified, upgraded, or retired. Such situated knowledge
is a prerequisite for more comprehensive and mission-oriented situationawareness. Finally,
considering the increasing number of personalnetworks that end-users deploy by themselves
(e.g., home network), the growing number and variety of devices connected tothese networks
(e.g., computers, smartphones, tablets, mediasmart-TV, etc.) and their complexity, intrusion
detection canbecome a concern of many end-users without extensive domainknowledge in
information and network security
10.2 FUTURE ENHANCEMENT
Some open issues remain to be explored in our future work. First, the proposed
mechanisms are limited to static or quasi-static wireless ad hoc networks. Frequent changes in
topology and link characteristics have not been considered. Extension to highly mobile
environments will be studied in our future work. In addition, in this paper we have assumed
that the source and destination are truthful in following the established protocol, because
delivering packets end-to-end is in their interest. Misbehaving sources and destinations will be
pursued in our future research. Moreover, in this paper, as a proof of concept, we mainly
focused on showing the feasibility of the proposed crypto-primitives and how second-order
statistics of packet loss can be utilized to improve detection accuracy. As a first step in this
direction, our analysis mainly emphasizes the fundamental features of the problem, such as the
untruthful nature of the attackers, the public verifiability of proofs, the privacy-preserving
requirement for the auditing process, and the randomness of wireless channels and packet
losses, but ignores the particular behavior of various protocols that may be used at different
layers of the protocol stack. The implementation and optimization of the proposed mechanism
under various particular protocols will be considered in our future studies.
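As a hedged illustration of what a second-order statistic of packet loss might look like, the C sketch below estimates the lag-k autocorrelation of a 0/1 loss sequence (1 = lost, 0 = delivered). The sample data and the choice of statistic are assumptions made for the example, not the report's actual detection mechanism.

/* Sample lag-k autocorrelation of a packet-loss indicator sequence. */
#include <stdio.h>

double loss_autocorrelation(const int *loss, int n, int k)
{
    double mean = 0.0, var = 0.0, cov = 0.0;
    for (int i = 0; i < n; i++) mean += loss[i];
    mean /= n;
    for (int i = 0; i < n; i++) var += (loss[i] - mean) * (loss[i] - mean);
    for (int i = 0; i + k < n; i++) cov += (loss[i] - mean) * (loss[i + k] - mean);
    return var > 0.0 ? cov / var : 0.0;   /* normalized by total variance */
}

int main(void)
{
    int loss[] = {0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1};  /* example trace */
    int n = sizeof loss / sizeof loss[0];
    printf("lag-1 autocorrelation: %.3f\n", loss_autocorrelation(loss, n, 1));
    return 0;
}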

CHAPTER 11
REFERENCES
1. Asgharpour, F., Liu, D., & Camp, L. J. (2007). Mental models of computer security risks. In Proceedings of the 6th annual workshop on the economics of information security (WEIS 2007).
2. Ben-Asher, N., Meyer, J., Möller, S., & Englert, R. (2009). An experimental system for studying the tradeoff between usability and security. In Proceedings of the international conference on availability, reliability and security: ARES '09 (pp. 882-887). Los Alamitos, CA: IEEE.
3. Botta, D., Werlinger, R., Gagné, A., Beznosov, K., Iverson, L., Fels, S., et al. (2007). Towards understanding IT security professionals and their tools. In Proceedings of the third symposium on usable privacy and security (pp. 100-111). New York, NY: ACM. http://dx.doi.org/10.1145/1280680.1280693.
4. Gonzalez, C., Ben-Asher, N., Oltramari, A., & Lebiere, C. (2014). Cognition and technology. In A. Kott, C. Wang, & R. Erbacher (Eds.), Cyber defense and situation awareness (pp. 93-117).
5. Gonzalez, C., Vanyukov, P., & Martin, M. K. (2005). The use of microworlds to study dynamic decision making. Computers in Human Behavior, 21(2), 273-286. http://dx.doi.org/10.1016/j.chb.2004.02.014.
6. Goodall, J. R., Lutters, W. G., & Komlodi, A. (2009). Developing expertise for network intrusion detection. Information Technology & People, 22(2), 92-108. http://dx.doi.org/10.1108/09593840910962186.
7. Goodall, J. R., Lutters, W. G., & Komlodi, A. (2004). I know my network: Collaboration and expertise in intrusion detection. In J. Herbsleb & G. Olson (Eds.), Proceedings of the 2004 ACM conference on computer supported cooperative work (pp. 342-345). New York, NY: ACM. http://dx.doi.org/10.1145/1031607.1031663.
8. M. Burkhart, P. von Rickenbach, R. Wattenhofer, and A. Zollinger, "Does topology control reduce interference?," in Proc. ACM 5th Int. Symp. Mobile Ad Hoc Netw. Comput., 2004, pp. 9-19.
9. D. M. Blough, M. Leoncini, G. Resta, and P. Santi, "The k-neighbors approach to interference bounded and symmetric topology control in ad hoc networks," IEEE Trans. Mobile Comput., vol. 5, no. 9, pp. 1267-1282, Sep. 2006.
10. J. Kim and Y. Kwon, "Interference-aware topology control for low rate wireless personal area networks," IEEE Trans. Consum. Electron., vol. 55, no. 1, pp. 97-104, Feb. 2009.
11. A. Muqattash and M. M. Krunz, "A distributed transmission power control protocol for mobile ad hoc networks," IEEE Trans. Mobile Comput., vol. 3, no. 2, pp. 113-128, Apr.-Jun. 2004.
12. S. C. Wang, D. S. Wei, and S.-Y. Kuo, "An SPT-based topology control algorithm for wireless ad hoc networks," Comput. Commun., vol. 29, no. 16, pp. 3092-3103, 2007.
13. M. Kadivar, M. E. Shiri, and M. Dehghan, "Distributed topology control algorithm based on one- and two-hop neighbors' information for ad hoc networks," Comput. Commun., vol. 32, no. 2, pp. 368-375, 2009.
14. D. Y. Xue and E. Ekici, "Delay-guaranteed cross-layer scheduling in multihop wireless networks," IEEE/ACM Trans. Netw., vol. 21, no. 6, pp. 1696-1707, Dec. 2013.
15. W. Galuba, P. Papadimitratos, M. Poturalski, K. Aberer, Z. Despotovic, and W. Kellerer, "Castor: Scalable secure routing for ad hoc networks," in Proc. IEEE INFOCOM, Mar. 2010, pp. 1-9.
16. T. Hayajneh, P. Krishnamurthy, D. Tipper, and T. Kim, "Detecting malicious packet dropping in the presence of collisions and channel errors in wireless ad hoc networks," in Proc. IEEE Int. Conf. Commun., 2009, pp. 1062-1067.
17. Q. He, D. Wu, and P. Khosla, "SORI: A secure and objective reputation-based incentive scheme for ad hoc networks," in Proc. IEEE Wireless Commun. Netw. Conf., 2004, pp. 825-830.
18. D. B. Johnson, D. A. Maltz, and J. Broch, "DSR: The dynamic source routing protocol for multi-hop wireless ad hoc networks," in Ad Hoc Networking. Reading, MA, USA: Addison-Wesley, 2001, ch. 5, pp. 139-172.
19. W. Kozma Jr. and L. Lazos, "Dealing with liars: Misbehavior identification via Rényi-Ulam games," presented at the Int. ICST Conf. Security Privacy in Commun. Networks, Athens, Greece, 2009.
20. W. Kozma Jr. and L. Lazos, "REAct: Resource-efficient accountability for node misbehavior in ad hoc networks based on random audits," in Proc. ACM Conf. Wireless Netw. Secur., 2009, pp. 103-110.
21. K. Balakrishnan, J. Deng, and P. K. Varshney, "TWOACK: Preventing selfishness in mobile ad hoc networks," in Proc. IEEE Wireless Commun. Netw. Conf., 2005, pp. 2137-2142.
22. D. Boneh, B. Lynn, and H. Shacham, "Short signatures from the Weil pairing," J. Cryptol., vol. 17, no. 4, pp. 297-319, Sep. 2004.
23. S. Buchegger and J. Y. L. Boudec, "Performance analysis of the CONFIDANT protocol (Cooperation of nodes: Fairness in dynamic ad-hoc networks)," in Proc. 3rd ACM Int. Symp. Mobile Ad Hoc Netw. Comput. Conf., 2002, pp. 226-236.
24. L. Buttyan and J. P. Hubaux, "Stimulating cooperation in self-organizing mobile ad hoc networks," ACM/Kluwer Mobile Netw. Appl., vol. 8, no. 5, pp. 579-592, Oct. 2003.
