
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 67-74

The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)


Responding to Identity Crime on the Internet
Eric Holm
The Business School, University of Ballarat
PhD Candidate at Bond University
P.O. Box 663, Ballarat VIC 3353
e.holm@ballarat.edu.au


ABSTRACT

This paper discusses the unique challenges
of responding to identity crime. Identity
crime involves the use of personal
identification information to perpetrate
crimes; that is, it involves using personal
and private information for illegal
purposes. In this article, the two
significant issues that obstruct responses to
this crime are considered: first, the
reporting of the crime, and second, the
issue of jurisdiction. The paper also presents
an exploration of some responses to identity
crime.

KEYWORDS

Identity crime, regulation, online fraud,
jurisdiction, personal information.

1 INTRODUCTION

Certain information is worth money
whereas other information is worthless
when it comes to crimes involving
identity [1]. The information that is
valuable to the identity criminal is that
which can be converted into gain,
typically by way of fraudulent activity
[2]. Certain information, particularly
personal identification information,
provides opportunities for identity
criminals either to obtain credit under
false pretenses or to impersonate another
for similar purposes [2]. Personal
identification particulars include social
security details, driver's license details
and passport details, as well as other
information [2]. The
theft of identity particulars may be the
catalyst for a number of crimes that
follow. The offenses that may follow can
include fraud, money laundering,
organized crime and even acts of
terrorism [2].

There are two variations of identity crime
committed by an identity criminal. The
first is the assumption of parts of
another's identity to perpetrate the crime
[3]. This involves the criminal using
parts of the victim's identity to obtain
goods or services, for instance [3]. The
second is the assumption of an identity
wholly, which involves the criminal
essentially becoming the victim [4].
This involves, for example, establishing
lines of credit while impersonating the victim.
Each type of identity crime has costly
implications for an individual [5].

Identity crime is reliant upon information
[6]. Much of the information used for
identity crimes is obtained through
various means on the Internet. A study
conducted in the United States on
identity crime found that the most
common method used for obtaining
information was to purchase the
information on the Internet [7]. However,
information is also obtained by other
means such as through committing
computer crimes including spam, scams
and phishing [8] as well as other crimes.
Importantly it is the availability of
personal information that is the enabler
for identity crime [2]. Sometimes this
information can simply be acquired
through the interpersonal exchanges that
take place on the Internet, such as,
through social networking [9].

The misuse of information for identity
crime occurs typically when information
is used for gain [4]. However, not all
identity crime leads directly to a financial
gain and there may be other motivations
for committing such crime, like avoiding
criminal sanctions [10]. Therefore, the
impetus for such crime is dependent
upon the motivation of the offender [10].

There is debate as to whether identity
crime is more prominent on the Internet
[11] or elsewhere. Interestingly,
sometimes components of this crime may
take place both online and offline [12].
However, an important reason why so
much identity crime takes place on the
Internet is that a significant amount of
personal identification information is
stored on the Internet as well as there
being ample targets [13].

The exposure to risk of an individual
online is dependent on many things.
Information is exchanged on the Internet
not only by individuals, but by
governments and corporations [14].
While it is argued that the decision to
interact on the Internet is associated with
exposing oneself to greater risk [15],
ultimately a latent risk subsists for all
information on the Internet [16]. Indeed,
it seems that the greater the amount of
personal information on the Internet, the
greater the risk a person has of becoming
a victim of identity crime.

Information is used in a variety of ways
to perpetrate identity crime. According
to the Social Security website, a
common example is the misuse of social
security numbers in the United States
[17]. This personal identifier is a key
identification detail that can be used in
conjunction with a person's name to
establish identity. This information is
used by the identity criminal to assume or
establish an identity for crime [17]. Other
notable personal identification
information includes passports, birth
dates and bank details, but is not limited
to these [18].

2 THE ISSUE OF RELIABLE DATA

The losses attributable to identity crime
can be measured by monetary losses [19]
but a number of additional offenses can
be committed once personal information
is stolen. In Australia it has been
suggested that identity crime is one of
the more prominent emerging types of
fraud [20]. However, one of the
challenges of recording this crime is
that identity crimes are at times
subsumed into the recorded incidence of
other crime such as fraud [21]. The
misreporting of this crime tends to distort
the reliability of data that pertain to the
measurement of identity crime [22].
Importantly, different ways of reporting
the crime result in different responses to
such crimes [2].

In 2012, the Australian Bureau of
Statistics (ABS) estimated that
approximately three per cent of the
Australian population had become
victims of identity crime [23]. The most
significant implication of this crime was
financial [24]. In 2006, the losses arising
from identity crime to the United
Kingdom economy were $1.7 billion [25].
The United Kingdom figure took into
account the cost of preventative
measures as well as the costs associated
with the prosecution of cases. In many
statistics relating to identity crime, the
wider losses attributable to identity crime
are not considered despite being
significant.

Conservative estimates place the costs
associated with identity crime at
tens of billions of
dollars worldwide [26]. However, it is
difficult to gather an accurate view of the
total cost attributable to this crime
because instances of identity crime are
not always reported. For instance, the
ABS suggests that only 43 per cent of
victims of crimes involving credit and
bank cards in 2007 were prepared to
report this crime to police [27]. This
suggests a significant proportion of
identity crime relating to credit and debit
cards is not reported [28]. This distorts
the statistics on the true incidence of
identity crime.

The direct monetary losses arising from
identity crime are more easily
quantifiable but the indirect losses
remain more difficult to measure. A cost
rarely considered is the indirect cost
related to a victim psychologically [29].
Likewise, there are losses attributable to
lost trust that can also be difficult to
measure [30]. In addition, there is a
hidden cost associated with reputational
damage that is similarly difficult to
reflect in monetary terms partly due to
the intangible nature of this loss [31].
These indirect costs are also rarely
considered in the statistics that pertain to
identity crime.

There are costs associated with the
preventative measures [32] taken to reduce
identity crime which are not contemplated
when measuring the impact of this crime.
Indeed, there are numerous preventative
steps that can be taken to overcome the
threats of cyber-crime. For instance, there
may be preventative measures taken
through technological means [33] as well
as physical security measures [34]. These
have a cost associated with them and this
cost is seldom incorporated into the
overall costs associated with crime [35].

There are broader implications of
identity crime on national economies that
have scarcely been researched [36]. What
remains difficult to ascertain is how
extensive the impact of this crime is
globally [2]. Where losses are sustained,
these are not recorded on any global
register of losses but rather are recorded
domestically [37]. Further, there is no
central repository of data pertaining to
identity crime; the data gathered are both
varied and dispersed [38]. This makes
the reporting of accurate global statistics
on this crime problematic. A central
repository of information that pertains to
victimization arising from identity crime
would be most useful for law
enforcement efforts [39].

3 THE ISSUE OF JURISDICTION

There is no central body that controls
information dissemination on the
Internet. The Internet itself is dispersed
and thereby transcends all jurisdictional
boundaries. This presents difficulties in
responding to identity crime in terms of
the coordination of investigation and
enforcement efforts [40]. Furthermore,
the regulatory responses to identity crime
also vary depending on the particular
emphasis that is placed upon these
responses domestically [41]. There are variations in
the way in which identity crime is dealt
with. As most responses to identity crime
are dealt with through domestic criminal
sanctions, these differences reflect the
domestic priorities placed on the
responses to this crime.

Contrasts can be made in the regulatory
responses to this crime. For example, in
the United States, the penalties
applicable under federal law are fifteen
years imprisonment and a fine [42].
Comparatively, Australian offenses
under Commonwealth Law have
penalties with a maximum of five years
imprisonment [43]. Likewise, differences
also exist in regard to the restorative
functions of these laws. The variations
in penalties, as well as other functions,
point to the differing importance placed
on this crime.

Similar variations in regulatory responses
exist within the states and territories of
Australia. While one state may react to
the crime of dealing with and possession
of identification material with
imprisonment for five years [44] another
may prescribe a penalty of seven years
[45]. Furthermore, other jurisdictions,
such as the Northern Territory, do not
have offenses that recognize identity
crime as the core offense and instead
they deal with this through other offenses
[46]. There are also varied responses to
restorative justice.

The issue of jurisdiction stems from the
ability of the state to bring an action
against the identity criminal. Historically,
the effects doctrine has been adopted as a
way to justify a state taking action
against the individual [47]. This doctrine
applies where the harm is linked to the
state [47]. This approach has been
utilized as a justification for taking
action to apply criminal sanctions
[48]. This doctrine provides for a
state to exercise jurisdiction outside its
physical location [49]. For identity
crime, this could enable a state to bring
an action against an offender in another
state, provided it could be ascertained
that an effect of the actions of such an
offender caused a crime to be committed
within the domestic territory [50].

Another challenge in regulating identity
crime is that the responses to this crime
are dealt with by domestic laws, and
therefore the responsibility for
investigation and enforcement belongs to
the state concerned [51]. This brings into
question the domestic authority's
capacity to deal with such crime, which
may be influenced by the scarcity of
resources available to law enforcement
[52]. A consequence of this is that
important technical, social and legal
information pertaining to that crime is
often not shared [53]. However,
regulatory responses are not the only
way in which this crime can be dealt
with and these will be further explored
in the outline of responses to identity
crime that follow.

4 THE RESPONSES TO IDENTITY
CRIME

4.1 Regulatory responses

A number of developments
internationally will positively influence
the regulatory response to identity crime.
An important recent development is the
Council of Europe Convention on
Cybercrime which is an international
agreement supporting and enhancing the
investigation and enforcement of
domestic law relating to cyber-crime
internationally [54]. The importance of
this convention for identity crime resides
in the enhancements that can be made in
the facilitation and cooperation of law
enforcement efforts toward cyber-crimes
on the Internet [54]. Signatories to such
Conventions typically improve their
interrelations with other countries
specifically in terms of investigation and
cooperation efforts [55]. While this
Convention does not specifically mention
identity crime it nonetheless will impact
on this crime through the enhancements
in cooperation of law enforcement efforts
around cyber-crimes [55].

Jurisdictional boundaries are
problematic when applied to the
Internet [56]. However, the Convention
on Cybercrime has received attention
because it prompts cooperation and
reliance on domestic laws in dealing
with jurisdictional issues around
cybercrime [57]. This has a positive
influence on the way cyber-crimes are
dealt with domestically [58]. Australia is
working toward accepting this
convention [59].

4.2 Technological responses

This paper has not sought to provide an
exhaustive coverage of any specific
responses to identity crime but rather it
traverses the key responses that have
been identified in the literature. In
relation to technological responses,
authentication provides an important way
of identifying an individual [33] with
whom one conducts transactions on
the Internet [60]. Another technological
response that is helpful in preventing the
unauthorized interception of data is
encryption [61]. However these
technological responses remain
susceptible to the more sophisticated
forms of attack [61]. Another weakness
of such responses lies with the human
beings involved in operating such
measures [62].

Authentication is an important response
to identity crime because this crime
involves the assumption of another
identity and authentication aims to
prevent such actions [63]. Therefore, this
technological response facilitates the
security around the ascertainment of
identity [33]. This is an important
response in dealing with identity crime
because it has a focus on preventing the
assumption of identity which is a key
aspect of this crime.
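To make the idea concrete, the identity-verification step that authentication performs can be sketched in a few lines of Python. This is an illustrative sketch only, not a protocol drawn from the sources cited here; the function names are invented, and it uses standard-library salted password hashing as one common authentication factor.

```python
import hashlib
import hmac
import os

def enroll(password: str) -> tuple[bytes, bytes]:
    """Store a random salt and a PBKDF2 hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = enroll("correct horse battery staple")
assert authenticate("correct horse battery staple", salt, digest)
assert not authenticate("wrong guess", salt, digest)
```

An impostor who does not hold the secret fails the check, which is the sense in which authentication prevents the assumption of another identity.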

Encryption is a technological solution
that protects data transfer when
information is exchanged on the Internet
[61]. Encryption provides a protective
measure in relation to data that are
transferred between connected
computers [64]. Therefore this response
plays an important role in the prevention
of identity crime through enhancing data
security [65].
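The protective role encryption plays during data transfer can be shown with a deliberately minimal sketch. The XOR one-time pad below is a teaching example, not a cipher anyone should deploy; it simply illustrates that an interceptor who sees only the ciphertext learns nothing without the key.

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the key; encryption and decryption are the same op."""
    return bytes(b ^ k for b, k in zip(data, key))

message = b"card number 4111-1111"
key = os.urandom(len(message))   # one-time pad: random key as long as the message
ciphertext = xor_cipher(message, key)

assert ciphertext != message                    # the wire carries only ciphertext
assert xor_cipher(ciphertext, key) == message   # the key holder recovers the data
```

Real systems use vetted ciphers such as AES within TLS, but the principle, that intercepted data are useless without the key, is the same.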

4.3 Education as a response

There seems to be a lack of appreciation
of the vulnerabilities arising from
identity crime. Individuals have become
the focus of this crime because they are
the easier target [66]. Furthermore,
individuals are becoming the more
common target due to their lack of
knowledge regarding identity crime [67].
It has been suggested that a key
weakness in cyber security is the human
and computer interface [68]. Indeed,
there are behavioral factors that influence
the way in which individuals exchange
information across the Internet [69].
Therefore it is important to understand
this relationship and to work on
enhancing knowledge with respect to the
vulnerabilities arising from this crime
[67]. However, the educative process
cannot be focused on the organization or
institution alone; rather, it needs to
focus also on the individual [70].

The computer-human interface is
important in understanding cyber-crimes.
While the computer can have robust
methods of security, the human has
become the weak link in the overall
security in place to prevent cybercrime
[71]. The human aspect of this interface
means that humans are now the target
due to their vulnerabilities [68]. It is for
this reason that educative responses to
identity crime need to be expansive.

The discussion of responses to identity
crime aims to identify the more
prominent responses to this crime and is
far from exhaustive. There are additional
responses such as governmental and
organizational responses that have not
been discussed in this paper [72].
However, this presents opportunities for
further research.

5 CONCLUSION

Reflecting on the title of this
paper, it is clear that there are challenges
in responding to identity crime. The
responses outlined cannot be effective
in isolation. All responses to this
crime rely on data relating to it. Then
there are the issues pertaining to
jurisdiction. Interestingly, the issues of
data and jurisdiction remain closely
intertwined. The lack of data relating to
identity crime has a stifling effect on the
response to this crime. The catalyst for
change in relation to responding to this
crime will need to come from
improvements in the reporting of the
crime, which will then prompt more
work in resolving the jurisdictional
issues. In the absence of this, the true
incidence of identity crime will remain
concealed and jurisdictional boundaries
will continue to present barriers in
responding to this crime.

6 REFERENCES

1. Forester, T., Morrison, P.: Computer Ethics:
Cautionary Tales and Ethical Dilemmas in
Computing. MIT Press, Boston MA (1994).
2. Saunders, K., Zucker, B.: Counteracting
Identity Fraud in the Information Age: The
Identity Theft and Assumption Deterrence
Act. Cornell Journal of Law and Public
Policy 8, 661-666 (1999).
3. Office of the Australian Information
Commissioner,
http://www.oaic.gov.au/publications/reports/
audits/document_verification_service_audit
_report.html.
4. Australian Federal Police,
http://www.afp.gov.au/policing/fraud/identit
y-crime.aspx.
5. Public Interest Advocacy Centre,
http://www.travel-
net.com/~piacca/IDTHEFT.pdf.
6. Department of Justice,
http://www.cops.usdoj.gov/files/ric/Publicati
ons/e05042360.txt.
7. Anderson, K.: Who are the victims of
identity theft? The effect of demographics.
Journal of Public Policy and Marketing 25,
160-171 (2006).
8. Australian Competition and Consumer
Commission,
http://www.accc.gov.au/content/item.phtml?
itemId=816453&nodeId=ef518e04976145ff
ed4b13dd0ecda1a6&fn=Little%20Black%2
0Book%20of%20Scams.pdf.
9. Wei, R.: Lifestyles and New Media:
Adoption and Use of Wireless
Communication Technologies in China.
New Media & Society 8, 991-1008 (2006).
10. State of New Jersey Commission of
Investigation and Attorney-General of New
Jersey,
http://csrc.nist.gov/publications/secpubs/co
mputer.pdf.
11. Public Interest Advocacy Centre,
http://www.travel-
net.com/~piacca/IDTHEFT.pdf.
12. Organisation for Economic Co-operation and Development,
http://www.oecd.org/dataoecd/35/24/4064.
13. Quirk, P., Forder, J: Electronic Commerce
and the Law. John Wiley & Sons Australia,
Ltd, Milton, Qld (2003).
14. Office of the Australian
Information Commissioner,
http://www.privacy.gov.au/faq/smallbusines
s/q2.
15. Bossler, A., Holt, T.: The effect of self-
control on victimization in the Cyberworld.
Journal of Criminal Justice 38, 227-236
(2010).
16. PCWorld,
http://www.docstoc.com/docs/51221743/PC
-World-September-2010.
17. Social Security Administration,
http://www.ssa.gov/pubs/10064.html/.
18. Australian Government,
http://www.cybersmart.gov.au/Schools/Com
mon%20cybersafety%20issues/Protecting%
20personal%20information.aspx.
19. State of New Jersey Commission of
Investigation and the Attorney General of
New Jersey,
http://csrc.nist.gov/publications/secpubs/co
mputer.pdf.
20. Queensland Police Fraud Investigative
Group,
http://www.police.qld.gov.au/Resources/Inte
rnet/services/reportsPublications/documents/
page27.pdf.
21. Australian Institute of Criminology,
http://www.aic.gov.au/publications/current
%20series/tandi/381-400/tandi382.aspx.
22. Grabosky, P., Smith, R., Dempsey, G.:
Electronic theft: unlawful acquisition in
cyberspace. Cambridge: Cambridge
University Press, United Kingdom (2001).
23. Australian Bureau of Statistics,
http://www.abs.gov.au/ausstats/abs@.nsf/Lo
okup/65767D57E11FC149CA2579E400120
57F?opendocument.
24. Lynch, J.: Identity Theft in Cyberspace:
Crime Control Methods and Their
Effectiveness in Combating Phishing
Attacks. Berkeley Technology Law Journal
20, 266-67 (2005).
25. Home Office Identity Fraud Steering
Committee,
http://www.identitytheft.org.uk/faqs.asp.
26. Willox, N., Regan, T.: Identity fraud:
Providing a solution. Journal of Economic
Crime Management 1, 1-15 (2002).
27. Australian Bureau of Statistics,
http://www.ausstats.abs.gov.au/Ausstats/sub
scriber.nsf/0/866E0EF22EFC4608CA25747
40015D234/$File/45280_2007.pdf.
28. National Consumer Council,
http://www.talkingcure.co.uk/articles/ncc_m
odels_self_regulation.pdf.
29. Black, P.: Phish to fry: responding to the
phishing problem. Journal of Law and
Information Science 73, 73-91 (2005).
30. Jarvenpaa, S., Tractinsky, N., Vitale, M.:
Consumer Trust in an Internet Store:
A Cross-Cultural Validation. Journal of
Computer Mediated Communication 5, 45-
71 (1999).
31. Parliamentary Joint Committee on the
Australian Crime Commission,
http://www.parliament.wa.gov.au/intranet/li
bpages.nsf/WebFiles/Hot+topics+-
+organised+crime+cttee+rept/$FILE/hot+to
pics+-
+Aust+Crime+Commiss+cttee+rept.pdf.
32. Sullivan, R.: Payments Fraud and Identity
Theft? Economic Review 3, 36-37 (2008).
33. Morrison, R.: Commentary: Multi-Factor
Identification and Authentication.
Information Systems Management 24, 331-
332 (2007).
34. Baker, R.: An Analysis of Fraud on the
Internet. Internet Research: Electronic
Networking Applications and Policy 9, 348-
360 (1999).
35. Felson, M.: Crime and Everyday Life,
Insight and Implications for Society. Sage,
Thousand Oaks, CA (1994).
36. Organisation for Economic Co-operation
and Development,
http://www.oecd.org/dataoecd/49/39/408791
36.pdf.
37. Australian Institute of Criminology,
http://www.aic.gov.au/publications/current
%20series/tandi/381-400/tandi382.aspx.
38. United States Department of Justice,
http://www.ncjrs.gov/pdffiles1/nij/grants/21
0459.pdf.
39. Organisation for Economic Cooperation and
Development,
http://www.oecd.org/dataoecd/49/39/408791
36.pdf.
40. Smith, R.: Examining Legislative and
Regulatory Controls on Identity Fraud in
Australia. In: Proc. 2002 Marcus Evans
Conferences, pp.7-12, Sydney (2002).
41. Towell, E., Westphal, H.: Investigating the
future of Internet regulation 8, 26-31 (1998).
42. 18 U.S.C. 1028A (2004).
43. Commonwealth Criminal Code 1995 (Cth)
Div 372 (1)(b).
44. Queensland Criminal Code 1899 (Qld) s
408D(7).
45. Criminal Code Compilation Act 1913 (WA)
s 490(1)(a).
46. Criminal Code Act 2009 (NT) s 276(1)(a).
47. Coppel, J.: A Hard Look at the Effects
Doctrine of Jurisdiction in Public
International Law. Leiden Journal of
International Law 6, 73-90 (1993).
48. United States v. Aluminum Co. of America,
148 F.2d 416, 444 (2d Cir. 1945).
49. Hartford Fire Insurance Co. v. California,
113 S. Ct. 2891 (1993)
50. Gencor Ltd v. Commission [1999] ECR II-
753 at paras. 89-92.
51. Svantesson, S.: Jurisdictional Issues in
Cyberspace: At the Crossroads The
Proposed Hague Convention and the Future
of Internet Defamation. Computer Law &
Security Report 18, 191 - 196 (2002).
52. Bolton, R., Hand, D.: Statistical Fraud
Detection: A Review. Statistical Science 17,
235-255 (2002).
53. United Nations Office on Drugs and Crime,
http://www.unodc.org/documents/data-and-
analysis/tocta/TOCTA_Report_2010_low_r
es.pdf.
54. European Convention on Cyber Crime,
opened for Signature 23 November 2001,
CETS No. 185, art 185 (Entered into force 1
July 2004).
55. Attorney-General for Australia,
http://conventions.coe.int/Treaty/EN/Treatie
s/html/185.htm.
56. Fitzgerald, B., Fitzgerald, A., Beale, T.,
Lim, Y., Middleton, G.: Internet and E-
Commerce Law: Technology Law and
Policy. Law Book Co, Pyrmont, NSW
(2007).
57. Parliament of Australia,
http://www.aph.gov.au/Parliamentary_Busin
ess/Bills_Legislation/Bills_Search_Results/
Result?bId=r4575.
58. Australian Government Information
Management Office,
http://www.finance.gov.au/publications/futu
re-challenges-for-
egovernment/docs/AGIMO-FC-no14.pdf>.
59. Australian Government,
http://www.ema.gov.au/www/agd/rwpattach
.nsf/VAP/(8AB0BDE05570AAD0EF9C283
AA8F533E3)~TSLB+-+LSD+-
+FINAL+APPROVED+public+consultation
+paper+-+cybercrime+convention+-
+15+February+2011.pdf/$file/TSLB+-
+LSD+-
+FINAL+APPROVED+public+consultation
+paper+-+cybercrime+convention+-
+15+February+2011.pdf.
60. O'Farrell, N., Ouellet, E.: Hack
Proofing Your Wireless Network. Syngress
Publishing, Rockland, MA (2002).
61. Broadhurst, R., Grabosky, P.: Computer-
related Crime in Asia: Emergent Issues. In:
Broadhurst, R., Grabosky, P. (eds) Cyber-
Crime: The Challenge in Asia, Hong Kong
University Press, pp.1-26. (2005).
62. Sullivan, R.: Can Smart Cards Reduce
Payments Fraud and Identity Fraud?
Economic Review 3 (2008).
63. Model Criminal Code Officers Committee
of the Standing Committee of Attorneys-
General,
http://www.scag.gov.au/lawlink/SCAG/ll_sc
ag.nsf/vwFiles/MCLOC_MCC_Chapter_3_I
dentity_Crime_-_Final_Report_-
_PDF.pdf/$file/MCLOC_MCC_Chapter_3_
Identity_Crime_-_Final_Report_-_PDF.pdf.
64. Broadhurst, R., Grabosky, P.: Computer-
related Crime in Asia: Emergent Issues. In:
Broadhurst, R., Grabosky, P. (eds.) Cyber-
Crime: The Challenge in Asia, pp. 15-17.
Hong Kong University Press (2005).
65. Ferguson, N., Schneier, B.: Practical
Cryptography. Wiley, New York, NY
(2003).
66. Australian Institute of Criminology,
http://www.aic.gov.au/documents/9/3/6/%7
B936C8901-37B3-4175-B3EE-
97EF27103D69%7Drpp78.pdf.
67. Community for Information Technology
Leaders,
http://www.cioupdate.com/technology-
trends/cios-cybercrime-and-wetware.html.
68. Symantec,
http://www.symantec.com/specprog/threatre
port/ent-
whitepaper_symantec_internet_security_thre
at_report_x_09_2006.en-us.pdf.
69. Stajano, F.: Understanding Scam Victims:
Seven Principles for Systems Security.
Communications of the ACM 44, 70 (2011).
70. Bard Prison Initiative,
http://www.stcloudstate.edu/continuingstudi
es/distance/documents/EducationasCrimePre
ventionTheCaseForReinstatingthePellGrantf
orOffendersKarpowitzandKenner.pdf.
71. Federal Reserve Bank of Kansas City,
http://www.kansascityfed.org/PUBLICAT/e
conrev/pdf/3q08sullivan.pdf.
72. Benson, M.: Offenders or Opportunities:
Approaches to Controlling Identity Theft.
Criminology & Public Policy 8, 231-236
(2009).
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 75-81
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)


Authenticating Devices in Ubiquitous Computing
Environment


Kamarularifin Abd Jalil¹, Qatrunnada Binti Abdul Rahman²
Faculty of Computer and Mathematical Sciences
Universiti Teknologi MARA,
40450 Shah Alam,
Selangor, Malaysia.
kamarul@tmsk.uitm.edu.my¹, qatrunnada.abd.rahman@gmail.com²
Abstract: The lack of a good authentication protocol in a
ubiquitous application environment has made it a good target for
adversaries. As a result, all the devices participating in
such an environment are exposed to attacks such as
identity impersonation, man-in-the-middle attacks and
unauthorized access. This has created skepticism among
users and has led them to keep their distance from such
applications. For this reason, in this paper, we propose a
new authentication protocol to be used in such an environment.
Unlike other authentication protocols which could be adopted
for such an environment, our proposed protocol avoids a
single point of failure, implements trust levels in granting access
and promotes decentralization. It is hoped that the proposed
authentication protocol can reduce or eliminate the problems
mentioned.
Keywords: Authentication protocol, Ubiquitous Computing,
application security, decentralization.
I. INTRODUCTION
Ubiquitous computing can be said to be the latest
paradigm in the world of computers today. It allows devices
and systems to be integrated and embedded together with
computing and communication systems through wireless
transmission [1]. In a related work, Weiser [2] defined
ubiquitous computing as a model of computing in which
computation is everywhere and computer functions are
integrated into everything. It will be built into basic objects
(smart devices), environments and the activities of our
everyday lives in such a way that no one will notice its
presence.
In a ubiquitous system, information can be processed
and delivered seamlessly among the participating devices
without the users even noticing it. This is in contrast with what
is being practiced in a non-ubiquitous computing environment
whereby the users themselves have to make certain
adjustments (to the devices) in order to suit the current
computing environment they are in. These capabilities might
sound a bit futuristic, but in reality, the technology is already
here.
Basically, any device that can be connected to a
network via a wired or wireless link can be included in a
ubiquitous computing environment. However, nowadays, such
devices are typically smart devices which are portable
and connected to each other via wireless technologies such as
Bluetooth, Wi-Fi, 3G and 4G. Some of these devices
might be used to browse the Internet and some are partially
autonomous and have the capability to sense their
environment, as discussed in [3]. With these capabilities,
information dissemination is just at anyone's fingertips.

Figure 1. Ubiquitous Computing
Unfortunately, in this time and age, information can
be easily misused or manipulated if not protected. The
information that flows in the environment could fall into the
wrong hands and could be manipulated maliciously. Such
information can be said to be exposed to attacks such as
unauthorized manipulation, illicit access, and also disruption
of computing data and services. There have been many works
to solve these problems, and using an authentication protocol is
one of them. An authentication protocol can ensure that users'
information and privacy are safeguarded. In section III, some
authentication protocols will be explained. These protocols
can be seen as the potential candidates to be used in the
ubiquitous computing applications. However, as mentioned in
section III, it was found that none of these candidates can satisfy
the needs of a ubiquitous computing application and that is
why we are proposing a multi devices authentication protocol.
II. COMPUTER SECURITY COMPONENTS
Computer security, as defined by NIST [4], comprises the
defenses employed by information systems to maintain three
elements: confidentiality, integrity and availability of their
computing resources. These three elements are essential to
information systems security, as elaborated in [5], and are
often referred to as the CIA triad. In order to fulfill these
security objectives, information system developers and
organization security managers follow the security
architecture for OSI featured in ITU-T Recommendation
X.800 [6], a standard for providing security. It covers
Security Attacks, Security Mechanisms and Security Services.
Our research utilizes some of these Security Mechanisms and
Security Services to avert Security Attacks prominent in
ubiquitous computing applications.
A Security Service, according to X.800 [6], is a service
offered by a layer of communicating open systems that
ensures sufficient protection of the systems or of data
transfers. There are five types of services, namely
Non-repudiation, Authentication, Data Integrity, Data
Confidentiality and Access Control. Since this paper deals
with authenticating devices in a ubiquitous computing
environment, the focus is on the Authentication service.
Authentication is about making sure that interacting entities
are who they claim to be. The X.800 standard divides the
Authentication service into two particular services:
Data-Origin authentication and Peer Entity authentication.
The purpose of this paper is to provide the Peer Entity type of
authentication service, which grants assurance and trust
among interacting entities.
A Security Mechanism, on the other hand, is a method
to avoid, detect or recover from security attacks. Mechanisms
are divided into two categories: Specific Security mechanisms,
which may be deployed in particular protocol layers or
Security Services, and Pervasive Security mechanisms, which
are not particular to any protocol layer or Security Service.
There are many different types of Security Mechanisms,
elaborated further in [6]. In this paper, only three mechanisms
are utilized in the development of the new authentication
protocol, all of which fall under the Specific Security
mechanism category: Authentication Exchange, Digital
Signature, and Encipherment. The purpose of Authentication
Exchange is to identify an entity through the exchange of
information; a Digital Signature provides integrity to the
information so that its origin is not in doubt; and Encipherment
alters the information, making it unreadable during
transmission.
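As an illustration of how these three mechanisms interlock, the following sketch (not from the paper; the names and the toy XOR cipher are our own, and an HMAC tag stands in for a true digital signature) shows a fresh challenge nonce (Authentication Exchange), a keyed tag that binds a message to its origin (Digital Signature stand-in), and a reversible transformation that hides a message in transit (Encipherment):

```python
import hmac
import hashlib
import os

SHARED_KEY = b"pre-shared-secret"  # assumption: both parties hold this key

def make_challenge() -> bytes:
    # Authentication Exchange: one side sends a fresh random nonce
    return os.urandom(16)

def sign(message: bytes, key: bytes = SHARED_KEY) -> bytes:
    # Stand-in for a Digital Signature: an HMAC tag binding the
    # message to the key, so its origin is not in doubt
    return hmac.new(key, message, hashlib.sha256).digest()

def encipher(message: bytes, key: bytes = SHARED_KEY) -> bytes:
    # Toy Encipherment: XOR with a keystream derived from the key.
    # Illustrative only -- not a real cipher.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(message))

# A responder proves knowledge of the key by tagging the challenge nonce
nonce = make_challenge()
tag = sign(nonce)
assert hmac.compare_digest(tag, sign(nonce))  # verifier's check

# This XOR encipherment is its own inverse
ct = encipher(b"hello")
assert encipher(ct) == b"hello"
```

A real deployment would replace the HMAC with an asymmetric signature and the XOR stream with an authenticated cipher; the sketch only shows how the three X.800 mechanism roles divide the work.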
In order to create the new authentication protocol, the
basic tasks of a security service need to be established.
Stallings [7] has identified four significant tasks that fit any
security service. The first is to design an algorithm for
performing the security-related transformation. The second is
to generate the secret information to be used with the
algorithm. Since this secret needs to be conveyed, the third
task is to develop a method for distributing it. The last task is
to specify a protocol that uses the secret information and the
algorithm to deliver the particular security service. Figure 2
depicts a basic form of network security for two or more
interacting entities, into which the security services and
security mechanisms discussed earlier can be fitted in order to
secure particular network services.

Figure 2. Basic form of network security (source: [7])

All in all, based on the discussion above, there are many
security services that implement certain security mechanisms
in order to prevent security attacks. Among all the security
services mentioned, this paper concentrates only on designing
a new authentication protocol, which can be categorized as a
Peer Entity Authentication security service. The proposed
protocol utilizes the Authentication Exchange, Digital
Signature and Encipherment security mechanisms.
Furthermore, Figure 2 also provides the basic form for the
new authentication protocol design, though it will be altered
to achieve the objective of assurance in the identity of
communicating entities. More information about the proposed
protocol can be found in section V of this paper.
III. AUTHENTICATION PROTOCOLS
Authentication is important in order to maintain the
integrity of an entity; integrity, in turn, is essential in
establishing that an entity really is who it claims to be.
Moreover, authentication can be used to ensure that an entity
has full authority over, and accountability for, its data. To
maintain an entity's integrity, many authentication protocols
have been introduced; Kerberos, SSO and OpenID are some
widely used examples. Most of these protocols require
dedicated access to a server, either for the validation process
or to acquire digital certificates, tickets or tokens. Users of
OpenID, for instance, need to register an OpenID identifier
with an identity provider in order to sign in to the websites
that employ OpenID authentication.
Some of these authentication protocols are not
suitable for a ubiquitous computing environment. Kerberos,
for example, is a computer network authentication protocol
built around a centralized Key Distribution Centre (KDC),
which actually comprises two logically separate parts, an
Authentication Server and a Ticket Granting Server, as
mentioned in [8]. Although centralization is convenient for
managing multiple users at one time, it has a disadvantage:
if the KDC server is compromised or its service is down,
users may not be able to authenticate themselves.
Accordingly, the KDC can be a single point of failure, which
is the major drawback of the Kerberos protocol, as argued
in [9].

Another widely used authentication protocol is Single
Sign-On (SSO). According to [10], it enables a user to gain
access to several systems or applications with a single login;
a user does not have to repeat the login process for every
application that he or she is trying to access. SSO is thus a
centralized authentication system with access control over the
multiple applications unified under it: once the user has
logged in, he or she can access the other applications too.
This makes the authentication system highly critical. If its
availability is disrupted, users can face denial of access to all
of the applications that employ it. This is a major drawback
of the SSO authentication system, as shown in [11].

Nonetheless, there are authentication protocols that
implement a decentralized system. OpenID enables users to
choose their preferred identity providers in order to create
accounts, and users are able to sign in with those accounts to
any application that acknowledges the authentication.
Nevertheless, that is also its downside: an OpenID account
can only be used to sign in to websites that acknowledge it.
Although OpenID is already widely implemented, there are
many more websites that do not support it, so relying on it for
integrity confirmation is not convenient. Moreover, it is also
susceptible to phishing attacks. In such an attack, a user is
swindled into believing that he or she is entering credentials
into the real identity provider's authentication page when it is
actually a fake one. Once the user submits his or her
credentials to this fake site, the malicious person controlling
it can use those credentials to access the user's account and
then log into any application associated with that particular
user's OpenID, as mentioned in [12].

Recently, a different approach to authentication has
emerged, which specializes in securing communications
between devices by using knowledge of their radio
environment as a proof of physical proximity. This new
authentication protocol is called Amigo. According to
Varshavsky et al. [13], Amigo is a technique that extends the
Diffie-Hellman key exchange with verification of device
co-location. The protocol can ensure that the key is exchanged
with the right device: a device's location, or more specifically
its radio environment, is verified to determine whether it is in
the same proximity or not. This technique is interesting as it
involves comparing the proximity of the devices. Its only
downside is that the interacting devices learn only each
other's proximity, not their exact identities, which is not
enough if a user wants to establish trust in communications.
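Since Amigo builds on the Diffie-Hellman key exchange, a minimal sketch of that underlying exchange may help (toy parameters only; real deployments use standardized groups of 2048 bits or more, and Amigo's radio-environment co-location check is not shown here):

```python
import secrets

# Toy Diffie-Hellman parameters, for illustration only
p = 4294967291   # a small prime modulus (2**32 - 5)
g = 2            # generator

# Each device picks a private exponent and publishes g^x mod p
a_priv = secrets.randbelow(p - 2) + 1
b_priv = secrets.randbelow(p - 2) + 1
a_pub = pow(g, a_priv, p)
b_pub = pow(g, b_priv, p)

# Both sides derive the same shared secret without ever transmitting it
k_a = pow(b_pub, a_priv, p)
k_b = pow(a_pub, b_priv, p)
assert k_a == k_b
```

What the plain exchange cannot guarantee is *which* device is on the other end; that is exactly the gap Amigo fills by verifying co-location, and the gap the trust requirement of Section IV addresses.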

Based on the features of the related authentication
protocols discussed above, there are many attributes that need
to be improved in order to suit the volatile and decentralized
environment of ubiquitous computing.
IV. JUSTIFICATIONS AND REQUIREMENTS FOR
THE PROPOSED PROTOCOL
In Section III, we presented the current protocols that
could be used in ubiquitous computing applications. From the
discussion, it can be deduced that there are three issues with
these protocols that need to be addressed by the proposed
protocol: first, centralization; second, accessibility (the
protocols require Internet access); and third, trust.
According to Coulouris [14], ubiquitous computing needs
special authentication and authorization protocols because its
environment is volatile compared to existing computing
environments. In a volatile environment, heterogeneous
devices may come into contact spontaneously, start interacting
with each other, and suddenly leave the established network
connections [15]. The volatility and dynamism of mobile
devices in a ubiquitous computing environment contribute to
a fluctuating usage environment in which the user's location,
the device's context and the user's activity vary randomly. As
a result, current rigid, centralized authentication protocols
that rely on certification authorities to confirm the identity of
the entities involved will not be sufficient for a volatile setting
such as a smart environment, as demonstrated by Nixon [16].
In this paper, we have identified three requirements for the
proposed authentication protocol (see Figure 3). These
requirements are seen as vital in order for the proposed
protocol to be accepted by users.
A. Decentralization
The decentralization of an authentication protocol refers
to the distribution of the authentication process to the
respective devices. This is the opposite of current practice,
which provides a centralized authentication protocol relying
on hierarchies of certification authorities that issue certificates
and confirm the respective owners using a dedicated server.
In the proposed protocol, decentralization of the
authentication process is achieved through multiple trusted
agreements among the devices involved.

Figure 3. Requirements for the New Authentication Protocol
Using multiple trusted devices to verify the identity of an
entity eliminates the need for constant access to an online
dedicated server during the authentication process. This is
useful in the case of an interruption in network access,
whereby the respective devices cannot reach the Internet. In
the proposed protocol, the only network connection needed
for the authentication process to take place is the connection
between the communicating devices. As pointed out in [17],
processors are now being embedded into common everyday
objects and surrounding infrastructures, so it is not efficient
to provide an authentication process that requires constant
online access to a dedicated server and certificate authority.
Besides that, there are many questions regarding the
practicality of public key infrastructure; Creese et al. [18]
questioned the practicality of Certification Authorities, which
need constant online access.
Decentralization of the authentication process can also
eliminate the single-point-of-failure problem. A centralized
authentication protocol has a high chance of exhibiting a
single point of failure due to its heavy dependence on
dedicated servers for validation. If this risk can be minimized
or avoided, then the usability and availability of an
authentication system can be improved.
B. Trust
Trust is the second requirement for the new authentication
protocol. Coulouris et al. [14] state that the trust demanded
between devices needs to be lowered in order for them to
interact spontaneously. In this situation, the devices will be
short of knowledge of each other, and a trusted third party
would normally be needed to confirm the identity of one
another. In addition, Varshavsky et al. [13] note that mobile
devices with wireless capabilities may spontaneously interact
with one another whenever they come into close proximity;
such communications are risky, as trust among the devices
has not been established beforehand. This lack of trust may
give a malicious attacker the opportunity to connect to any
device in presence. Hence, an approach to solve this problem
is proposed by adopting a trust level mechanism, in which
users can set their devices' trust levels for the authentication
process accordingly.
In a normal ubiquitous computing scenario, some users
may already know each other beforehand and some may not,
so each user may want to set a different trust level for
different situations or people. As suggested by Westin [19],
there are three types of respondents: privacy fundamentalists,
privacy pragmatists, and the privacy unconcerned. Based on
that argument, users should be given the choice of their own
privacy settings.
C. Seamless
The third requirement for the new authentication protocol
is that the interaction in the authentication process be
seamless to the users. One of the characteristics of ubiquitous
computing emphasized by Weiser [2] is that the technology
should blend into the surroundings to the extent that people
are not aware of it and do not need to know how it works.
This concurs with Stringer et al. [20] and also Bardram and
Friday [15], who acknowledge that ubiquitous computing is
about disappearing computing applications that blend into
objects and surroundings. As stated by Langheinrich [21],
processors and sensors are being embedded into almost
everything. Because of that, the interaction of ubiquitous
applications and devices goes beyond traditional computing
interaction: it is done via sensors that implicitly sense an
entity's presence, sound or gesture. Consequently, it is
appropriate to design an authentication protocol that suits this
characteristic of ubiquitous computing, authenticating entities
unobtrusively and without requiring the entity to intervene in
the process.
V. THE PROPOSED AUTHENTICATION
PROTOCOL

Since the current authentication protocols are better suited
to a rigid, centralized computing infrastructure that requires
constant access to a dedicated server, the proposed
authentication protocol is designed to suit the volatile
environment of ubiquitous computing. Figure 4 illustrates the
multiple trusted devices authentication protocol for
ubiquitous computing applications.
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 75-81
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)

79

In order to understand the proposed authentication protocol, a
scenario is used. In this scenario there are three persons,
A (P_A), B (P_B) and C (P_C), each of whom has a
smartphone: device A (D_A), device B (D_B) and device C
(D_C), respectively. P_A and P_B have just met, but P_C is a
mutual friend of P_A and P_B. P_B has a collection of
interesting pictures that he took while visiting an art gallery
in Paris. P_A really wants those pictures, so he decides to
copy them from P_B. P_B does not mind sharing them; all
P_A has to do is access P_B's device. In order to do so, he
must have the authority to access device D_B. As P_A and
P_B have just met, P_A must first register with device D_B.
Since P_C is a mutual friend of both P_A and P_B, it is
assumed that he has already registered with both of his
friends' devices and will therefore have no problem accessing
them. Hence, Figure 4 depicts how device D_A is granted
access to device D_B.

Figure 4. The proposed authentication protocol
First of all, in step 1, device D_A must request permission
to access device D_B. In doing so, D_A must send its ID to
D_B so that D_B can check whether D_A is already in its
registry. This ID is actually a random value that D_A can
generate and renew whenever needed. The ID is not
permanent; each time it is renewed, the old ID held in other
devices' registries becomes invalid. As a result, a device must
go through the registration process again each time its ID is
renewed.
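A minimal sketch of this renewable random ID, under our own illustrative names (the paper does not specify the ID's length or encoding), might look like:

```python
import secrets

class Device:
    """Sketch of the renewable random ID from step 1.

    Class and method names are illustrative, not from the paper.
    """

    def __init__(self):
        self.id = self._new_id()

    @staticmethod
    def _new_id() -> str:
        # The ID is just a random value; it carries no permanent identity
        return secrets.token_hex(8)

    def renew_id(self) -> str:
        # Renewing invalidates the old ID held in other devices' registries,
        # so the device must re-register with them afterwards
        old = self.id
        self.id = self._new_id()
        return old

d_a = Device()
old = d_a.renew_id()
assert d_a.id != old  # a renewed ID does not match the old one
```

The random, renewable ID gives the device a degree of unlinkability between sessions, at the cost of having to repeat registration after every renewal.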
In step 2, D_B checks, using the ID just received, whether
D_A's ID is recorded in its registry. This results in one of two
conditions: D_A's ID is either found or not found. If it is
found, the protocol proceeds to step 3a; if not, it proceeds to
step 3b. In step 3a, once it is confirmed that D_A is already
registered, D_B requests D_A to authenticate itself by
presenting its Identity Key. This Identity Key is likewise a
sequence of random values that can be generated and renewed
whenever needed. The Identity Key is conveyed only partially
(see Figure 5): only a portion of the whole Identity Key,
together with metadata on its position in the key sequence, is
sent. This is to thwart any malicious device that might be
eavesdropping. Although the Identity Key is only partially
revealed, D_B has no problem verifying it, as it compares the
portion of D_A's Identity Key being sent with the full key
already in its registry. Furthermore, in step 4a, D_B also asks
other devices to participate in validating the Identity Key.
All information in these transmissions is encrypted using
existing cryptographic algorithms.
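The partial-key idea of step 3a can be sketched as follows (the portion size, alphabet and metadata format here are our assumptions; the paper states only that a portion of the key plus sequence metadata is sent):

```python
import secrets

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def make_identity_key(length: int = 13) -> list:
    # The Identity Key is a sequence of random values
    return [secrets.choice(ALPHABET) for _ in range(length)]

def reveal_portion(key: list, positions: list) -> dict:
    # Only part of the key is sent, with metadata naming the positions
    # of the revealed symbols in the key sequence
    return {i: key[i] for i in positions}

def verify_portion(registered_key: list, portion: dict) -> bool:
    # D_B compares the revealed symbols against the full key in its registry
    return all(registered_key[i] == sym for i, sym in portion.items())

key = make_identity_key()
portion = reveal_portion(key, [0, 4, 7])    # an eavesdropper sees only 3 symbols
assert verify_portion(key, portion)

wrong = list(key)
wrong[0] = "!"                               # '!' is outside the alphabet
assert not verify_portion(wrong, portion)
```

An eavesdropper who captures one exchange learns only the revealed positions, while the registry holder, who has the full key, can still verify the claimant.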



Figure 5. Identity Key (a sample key sequence, e.g.
q 1 w 2 e r t y 5 7 z 0, of which only a portion is transmitted)
During step 4a, apart from finding D_A's Identity Key in the
registry and validating it, there are also security level settings
(depicted in Figure 6) that D_B must set for D_A. The
security level setting determines how the validation process
takes place; in this phase, P_B is free to set a different
security level for each device he encounters. There are
currently three levels of security settings in this protocol. If
D_A is set to Level 1, then after D_B has validated its
Identity Key, it can access D_B right away. If it is set to
Level 2, then after validating the Identity Key, D_B proceeds
to ask another device that may be nearby to check D_A's
credentials. If D_A is set to Level 3, its credentials are
validated by more than one nearby device.
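The three security levels can be sketched as a simple validation policy (function names are illustrative; the paper specifies only the number of confirming devices per level):

```python
def required_validators(level: int) -> int:
    """How many nearby devices must confirm D_A's credentials
    at each security level (per the table in Figure 6)."""
    if level == 1:
        return 0   # Level 1: permit access right after key validation
    if level == 2:
        return 1   # Level 2: check with one nearby device
    return 2       # Level 3: check with more than one nearby device

def validate(key_ok: bool, level: int, confirmations: int) -> bool:
    # Access is granted only if the Identity Key checks out AND enough
    # nearby devices have vouched for D_A's credentials
    return key_ok and confirmations >= required_validators(level)

assert validate(True, 1, 0)        # Level 1 needs no extra confirmation
assert not validate(True, 2, 0)    # Level 2 waits for one nearby device
assert validate(True, 3, 2)        # Level 3 needs at least two
```

Raising the level trades authentication latency (waiting for nearby devices to respond) for stronger assurance against an imposter presenting a stolen partial key.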









[Figure 4 shows the message flow between device A (D_A,
owned by P_A) and device B (D_B, owned by P_B), together
with D_B's registry of Device ID / ID Key pairs (e.g. device
A: ID a b c 1 2 3 z, ID Key q 1 w 2 e r 3 t y 5 7 z 0):
(1) D_A requests access permission from D_B and sends
D_A's ID; (2) D_B checks whether D_A's ID exists in its
record; (3a) if it exists, D_B requests D_A to authenticate
itself and D_A sends its Identity Key, (3b) otherwise D_B
requests D_A to register and D_A goes through the
registration process; (4a) D_B requests other devices to
validate D_A's Identity Key, (4b) D_A's credentials are
updated in D_B's registry; (5) D_B allows D_A to access its
application and grants its Identity Key and an authorization
ID to D_A; (6) D_A updates its own registry. The figure also
lists the security level settings: Level 1, permit access right
away; Level 2, check with a nearby device for D_A's
credentials; Level 3, check with more than one nearby device
for D_A's credentials.]






Figure 6. Security level settings
Steps 3a and 4a deal with the situation where D_A is already
registered in D_B's registry. If it is not, the protocol continues
to step 3b, where D_A is prompted to register first in order to
access D_B's application. Here, D_A goes through the
registration process, providing its ID as well as its Identity
Key. Then, in step 4b, D_A's credentials are recorded in
D_B's registry. Next, once steps 3a and 4a (or 3b and 4b)
have been completed, step 5 takes place: D_B gives D_A
permission and authorization to access its application, and
also grants its own Identity Key and an authorization ID to
D_A. Finally, in step 6, after D_A has accepted D_B's
Identity Key and authorization ID, it records them in its own
database/registry and then uses the authorization ID to access
the desired application on device D_B.
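Putting steps 1-6 together, a highly simplified sketch of the registry-based flow (all names, key lengths and data structures are our own assumptions, and the security levels and partial-key reveal are omitted) could read:

```python
import secrets

class Device:
    """Minimal sketch of the step 1-6 message flow."""

    def __init__(self):
        self.id = secrets.token_hex(4)
        self.identity_key = secrets.token_hex(8)
        self.registry = {}          # maps device ID -> Identity Key

    def register(self, other):
        # Steps 3b/4b: record the requesting device's credentials
        self.registry[other.id] = other.identity_key

    def handle_request(self, other):
        # Steps 1-2: check the requester's ID against the registry
        if other.id not in self.registry:
            self.register(other)                        # step 3b
        elif self.registry[other.id] != other.identity_key:
            return None                                 # authentication fails
        # Step 5: grant own Identity Key plus an authorization ID;
        # step 6: the requester records them in its own registry
        auth_id = secrets.token_hex(4)
        other.registry[self.id] = self.identity_key
        return auth_id

d_a, d_b = Device(), Device()
auth = d_b.handle_request(d_a)      # first contact: D_A registers, then gains access
assert auth is not None
assert d_b.id in d_a.registry       # D_A recorded D_B's credentials in step 6
```

On a second request the registry lookup succeeds (step 3a path), and an imposter presenting D_A's ID without its Identity Key would be rejected.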

VI. CONCLUSIONS

In this paper, we have discussed a number of possible
authentication protocols for use in the ubiquitous application
environment. From the discussion, we have shown that no
existing protocol really suits the needs of applications running
in such an environment. Therefore, we have proposed a new
authentication protocol that can satisfy those needs. The
proposed protocol uses multiple trusted devices, decentralizing
the authentication process to suit the volatile and dynamic
environment of ubiquitous computing. It is hoped that in the
near future the proposed protocol can be tested in a test-bed
environment.

REFERENCES

1. R. Want, An introduction to ubiquitous computing,
Ubiquitous Computing Fundamentals, J. Krumm, Ed.
Redmond,Washington, U.S.A:CRC Press, ch.1, pp. 2-27.
2. M. Weiser, The computer for the 21st century, Mobile
Computing and Communications Review, New York,
NY, USA, 3(3), pp. 3-11, (1999).
3. S. Yahya, E. A. Ahmad, K. Abd Jalil, The definition
and characteristics of ubiquitous learning: A discussion,
International Journal of Education and Development
using Information and Communication Technology, pp.
117-127, (2010).
4. B. Guttman and E. A. Roback, "Introduction, An
Introduction to Computer Security: The NIST
Handbook," Gaithersburg, MD: NIST special Publication
800-12, ch.1, pp.5, (1995).
5. Standards for Security Categorization of Federal
Information and Information Systems, Federal
Information processing Standards Publication.
Gaithersburg, MD, p. 2, (2004).
6. Security Architecture for Open Systems Interconnection
for CCITT Applications, Recommendation X.800.
Geneva, p.8-9, (1991).
7. W. Stallings, A Model for Network Security,
Cryptography and Network Security, 5th ed. Prentice
Hall, Upper Saddle River, NJ: ch. 1, pp. 25-26, (2011).
8. J. Garman, Pieces of the puzzle, Kerberos the definitive
guide, Sebastopol, CA: OReilly & Associates, Inc, ch.
2, pp. 17-23, (2010).
9. J. Garman, Security, Kerberos the definitive guide,
Sebastopol, CA: OReilly & Associates, Inc, ch. 6, pp.
100-125, (2010).
10. B. Ballad, T. Ballad, E. K. Banks, Single Sign-on
(SSO), access control, authentication, and public key
infrastructure, Sudbury, MA : Jones & Bartlett
Learnings, ch. 10, pp. 229-231, (2011).
11. J. Pyles, Getting started with Microsoft Office
SharePoint Server, McTs, Microsoft Office Sharepoint
Server 2007 Configuration Study Guide. Indianapolis,
Indiana: Wiley Publishing, Inc., ch. 1, pp. 14, (2008).
12. R. U. Rehman, OpenID Protocol: Miscellaneous Topics,
Get Ready for OpenID, 1st ed. Conformix Technologies
Inc., ch. 8, pp. 205-207, (2008).
13. A. Varshavsky, A. Scannell, A. LaMarca, E. de Lara,
Amigo: Proximity-based Authentication of Mobile
Devices, Proc. UbiComp 2007: The 9th International
Conference on Ubiquitous Computing, Berlin,
Heidelberg, pp. 253-270, (2007).
14. G. Coulouris, Mobile and Ubiquitous Computing,
distributed systems, Concepts and Design. 4th ed.,
Addison-Wesley, Reading, MA : Addison-Wesley, ch.
16, pp. 683-704, (2005).
15. J. Bardram, A. Friday, Ubiquitous Computing Systems,
Ubiquitous Computing Fundamentals, J. Krumm, Ed.
Redmond, Washington, U.S.A: CRC Press, ch. 2, pp. 39-
41, (2010).
16. P. Nixon, W. Wagealla, C. English, and S. Terzis,
Privacy, Security, and Trust Issues in Smart
Environments, In Smart Environments: Technology,
Protocols and Applications. Wiley, London, UK, pp.
220-240. ISBN 978-0-471-54448-7, (2004).
17. Middleware Architecture for Ambient Intelligence in the
Networked Home, Handbook of Ambient Intelligence
and Smart Environments. Springer-Verlag US, p. 1139,
(2010).
18. S. Creese, M. Goldsmith, B. Roscoe, I. Zakiuddin,
Authentication for Pervasive Computing. Security in
pervasive computing, First International Conference,
Boppard, Germany, pp. 117-129, (2003).
[Figure 6 residue: if D_A is authenticated (True), it is given
authorization to access the application from D_B; otherwise
(False), D_A is flagged for future reminder in case it is an
imposter.]
19. A. F. Westin, Privacy and Freedom, New York, NY,
USA: Atheneum, (1967).
20. M. Stringer, et al., Situating Ubiquitous Computing in
Everyday Life: Some Useful Strategies [Online].
Available: http://www.informatics.sussex.ac.uk/research/groups/interact/publications/stringer_ubicomp05.pdf.
21. M. Langheinrich, Privacy by Design - Principles of
Privacy- Aware Ubiquitous Systems, Proc of the 3rd
international conference on Ubiquitous Computing,
London, UK, (2001).
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 82-88
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)

82

An Image Encryption Method: SD-Advanced Image
Encryption Standard: SD-AIES

Somdip Dey
St. Xavier's College
[Autonomous]
Kolkata, India
E-mail:
somdipdey@ieee.org
somdipdey@acm.org






ABSTRACT
The security of digital information is one of the most
important concerns of modern times. For this reason, in this
paper, the author proposes a new standard method of image
encryption. The proposed method consists of four stages:
1) first, a number is generated from the password, each pixel
of the image is converted to its equivalent eight-bit binary
number, and in that eight-bit number a number of bits equal
to the number generated from the password are rotated and
then reversed; 2) in the second stage, the extended Hill cipher
technique is applied using an involutory matrix generated
from the same password, to make the method more secure;
3) in the third stage, a generalized modified Vernam cipher
with a feedback mechanism is applied to the file to create the
next level of encryption; 4) finally, in the fourth stage, the
whole image file is randomized multiple times using the
modified MSA randomization encryption technique, where
the number of randomizations depends on another number
generated from the password provided for encryption.
SD-AIES is an upgraded version of the SD-AEI image
encryption technique. The proposed method, SD-AIES, was
tested on different image files, and the results were more than
satisfactory.
KEYWORDS
SD-EI, SD-AEI, image encryption, bit reversal, bit
manipulation, bit rotation, hill cipher, vernam cipher,
randomization.
1. INTRODUCTION
In today's world, keeping digital information safe from
misuse is one of the most important criteria. This issue gave
rise to a new branch of computer science named Information
Security. Although new methods are introduced every day to
keep data secure, computer hackers and unauthorized persons
are always trying to break those cryptographic methods or
protocols to fetch sensitive, beneficial information from the
data. For this reason, computer scientists and cryptographers
are trying very hard to come up with permanent solutions to
this problem.
Cryptography can be broadly classified into two types:
1) Symmetric Key Cryptography
2) Public Key Cryptography
In Symmetric Key Cryptography [16], only one key is used
for encryption, and the same key is used for decryption as
well, whereas in Public Key Cryptography [16], one key is
used for encryption and another, publicly generated key is
used for decryption. The symmetric-key process is simpler
because only one key is needed for both encryption and
decryption. Although public key cryptography such as RSA
[14] or Elliptic Curve Cryptography [15] is more popular
today because of its high security, these methods are also
susceptible to attacks such as brute-force key search [16]. The
proposed method, SD-AIES, is a symmetric key cryptographic
method that is itself a combination of four different encryption
modules.
The SD-AIES method was devised by Somdip Dey [5] [6] [9]
[10] [11] [12] [13], and it is a successor and upgraded version
of the SD-AEI [6] image encryption technique. The four
encryption modules that make up the SD-AIES cryptographic
method are as follows:
1) Modified Bits Rotation and Reversal Technique for
Image Encryption
2) Extended Hill Cipher Technique for Image
Encryption
3) Generalized Modified Vernam Cipher for File
Encryption
4) Modified MSA Randomization for File Encryption
These methods are discussed in the next section, The
Methods in SD-AIES. All the cryptographic modules used in
the SD-AIES method use the same password (key) for both
encryption and decryption, as in symmetric key cryptography.
Although there is an ongoing debate over the relative security
of symmetric key and public key cryptography, SD-AIES is a
very strong cryptographic method because of its use of the
modified Vernam cipher with a feedback mechanism. It has
already been proved that the one-time-pad Vernam cipher is
unbreakable if and only if the key chosen for encryption is
truly random. The combination of bit and byte manipulation
together with the modified Vernam cipher makes the SD-AIES
method truly unique and strong.
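The one-time-pad property appealed to here can be seen in a plain Vernam (XOR) sketch; note that SD-AIES uses a *modified* Vernam cipher with a feedback mechanism, which is not shown in this illustration:

```python
import secrets

def vernam(data: bytes, pad: bytes) -> bytes:
    # XOR each plaintext byte with the corresponding pad byte;
    # applying the same pad again recovers the plaintext
    assert len(pad) >= len(data)
    return bytes(b ^ pad[i] for i, b in enumerate(data))

message = b"secret image data"
pad = secrets.token_bytes(len(message))   # truly random, used only once
ciphertext = vernam(message, pad)
assert vernam(ciphertext, pad) == message
```

The unbreakability guarantee holds only when the pad is truly random, at least as long as the message, and never reused; practical schemes like SD-AIES derive their keystream from a password, so they do not inherit the full one-time-pad guarantee.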
The difference between the SD-AEI [6] and SD-AIES
methods is that the latter contains one extra encryption
module, the modified Vernam cipher with feedback
mechanism, and its Bits Rotation and Reversal technique is
modified to provide better security.
2. THE METHODS IN SD-AIES
Before we discuss the four methods, which make the SD-
AIES Encryption Technique, we need to generate a number
from the password, which will be used to randomize the file
structure using the modified MSA Randomization module.
2.1 Generation of a Number from the Key
In this step, we generate a number from the password (symmetric key) and use it later in the randomization method that encrypts the image file. The number generated from the password is case sensitive, depends on each byte (character) of the password, and changes if there is even the slightest change in the password.
Let [P1 P2 P3 P4 … Plen] be the password, where len is the length of the password and can be anything. We first multiply the ASCII value of the byte (character) at each position i of the password by 2^i, and repeat this for all the characters in the password. We then add all the values generated in this step and denote the sum as N.
Now, if N = [n1 n2 … nj], then we add all the digits of that number, i.e. n1 + n2 + … + nj, to get the unique number that is essential for the randomization step of the encryption method. We denote this unique number as Code.
For example, if the password is "AbCd", then:
P1 = A; P2 = b; P3 = C; P4 = d
N = 65*2^1 + 98*2^2 + 67*2^3 + 100*2^4 = 2658
Code = 2 + 6 + 5 + 8 = 21
2.2 Modified Bits Rotation and Reversal
Technique
In this method, a password is given along with the input image. The value of each pixel of the input image is converted into its equivalent eight-bit binary number. We then add the ASCII values of all the bytes (characters) of the password to generate a number, which drives the Bits Rotation and Reversal technique: the number of bits to be rotated left and reversed is decided by this number. The generated number is then taken modulo 7 to produce the effective number (NR), according to which the bits are rotated and reversed. Let N be the number generated from the password and NR (the effective number) be the number of bits to be rotated left and reversed. The relation between N and NR is represented by equation (1):
NR = N mod 7 ------ eq. (1)
, where 7 is the number of iterations required to reverse the entire input byte and N = n1 + n2 + n3 + … + nj, where n1, n2, …, nj are the ASCII values of the bytes of the password.
For example, let Pin(i,j) be the value of a pixel of the input image and [B1 B2 B3 B4 B5 B6 B7 B8] its equivalent eight-bit binary representation, i.e.:
Pin(i,j) → [B1 B2 B3 B4 B5 B6 B7 B8]
If NR = 5, five bits of the input byte are rotated left to generate the resultant byte [B6 B7 B8 B1 B2 B3 B4 B5]. After rotation, the rotated five bits, i.e. B1 B2 B3 B4 B5, get reversed to B5 B4 B3 B2 B1, and hence we get the resultant byte [B6 B7 B8 B5 B4 B3 B2 B1]. This resultant byte is converted to the equivalent decimal number Pout(i,j):
[B6 B7 B8 B5 B4 B3 B2 B1] → Pout(i,j)
, where Pout(i,j) is the value of the output pixel of the resultant image.
Since the weight of each pixel is responsible for its colour, the change in the weight of each pixel of the input image due to modified Bits Rotation & Reversal generates the encrypted image. Figure 1 (a, b) shows the input and encrypted images respectively. For the encryption process the given password is "SD13", whose NR = 6.
Note: If N = 7 or a multiple of 7, then NR = 0. In this condition, the whole byte of the pixel gets reversed.

Figure 1: (a) Input Image. (b) Encrypted Image for password "SD13"
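The per-pixel transformation of this subsection can be sketched as follows (our own illustration for a single 8-bit pixel value, including the whole-byte-reversal special case from the note above):

```python
def effective_number(password: str) -> int:
    """N_R = (sum of the ASCII values of the password bytes) mod 7."""
    return sum(ord(ch) for ch in password) % 7

def rotate_and_reverse(pixel: int, nr: int) -> int:
    """Rotate the 8-bit value left by nr bits, then reverse the nr
    rotated bits (now at the end of the byte). If nr == 0, the whole
    byte is reversed, per the note above."""
    bits = format(pixel, "08b")
    if nr == 0:
        return int(bits[::-1], 2)
    rotated = bits[nr:] + bits[:nr]   # left rotation by nr bits
    keep = 8 - nr                     # bits that stay in rotated order
    return int(rotated[:keep] + rotated[keep:][::-1], 2)

# "SD13": 83 + 68 + 49 + 51 = 251, and 251 mod 7 = 6
```

Note that "DeY" and "DYe" contain the same characters, so they share the same ASCII sum and hence the same N_R; this is the collision discussed in section 3.2.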
2.3 Extended Hill Cipher Technique
This is a new method for encryption of images proposed in this paper. The basic idea of this method is derived from the work presented by Saroj Kumar Panigrahy et al. [2] and Bibhudendra Acharya et al. [3]. In this work, the involutory matrix is generated using the algorithm presented in [3].

Algorithm of Extended Hill Cipher technique:
Step 1: An involutory matrix of dimensions m×m is constructed using the input password.
Step 2: The index value of each row of the input image is converted into an x-bit binary number, where x is the number of bits in the binary equivalent of the index value of the last row of the input image. The resultant x-bit binary number is rearranged in reverse order, and this reversed x-bit binary number is converted into its equivalent decimal number. The weight of each row index therefore changes, and hence the positions of all rows of the input image change, i.e. the positions of all the rows of the input image are rearranged in Bits-Reversed-Order. Similarly, positions of
all columns of input image are also rearranged in Bits-
Reversed-Order.
Step 3: Hill Cipher technique is applied onto the Positional
Manipulated image generated from Step 2 to obtain final
encrypted image.
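Step 2's positional manipulation can be sketched as below (an illustration only; the Hill Cipher multiplication of Step 3 is omitted, and for simplicity we assume the image dimension is a power of two so that the mapping is a permutation):

```python
def bit_reversed_order(num_indices: int) -> list:
    """Map each row (or column) index to its bits-reversed counterpart,
    using x bits, where x is the bit length of the last index."""
    x = max(1, (num_indices - 1).bit_length())
    return [int(format(i, "0{}b".format(x))[::-1], 2) for i in range(num_indices)]

# With 8 rows (indices 0..7, x = 3): index 1 ('001') maps to 4 ('100'),
# and index 3 ('011') maps to 6 ('110').
```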
2.4 Generalized Modified Vernam Cipher
The module of modified Vernam Cipher used in this method is a concept proposed by Nath et al. [4][7]. In their cryptographic method TTJSA [7], Nath et al. proposed an advanced form of the generalized modified Vernam Cipher with feedback mechanism. For this reason, even if the input data is changed only slightly, the encrypted output is very different from the previous output.
The TTJSA method is a combination of 3 distinct cryptographic methods, namely (i) the Generalized Modified Vernam Cipher method, (ii) the MSA method and (iii) the NJJSA method. To begin the method, the user enters a text-key, which may be at most 16 characters in length. From the text-key, the randomization number and the encryption number are calculated using a method proposed by Nath et al.; a minor change in the text-key changes both numbers considerably. The method has also been tested on various types of known text files, and it was found that even if there is repetition in the input file, the encrypted file contains no repetition of patterns.
In the SD-AIES image encryption method we use only the modified Vernam Cipher module of TTJSA by Nath et al. Here, Code represents the randomization number and N represents the encryption number. All the data in the file are converted to their equivalent 16-bit binary format and broken down into blocks.
Algorithm for Modified Vernam Cipher with feedback
mechanism is as follows:
2.4.1 Algorithm of vernamenc(f1,f2):
Step 1: Start vernamenc() function
Step 2: The matrix mat[16][16] is initialized with numbers 0-
255 in row major wise order
Step 3: call function randomization() to
randomize the contents of mat[16][16].
Step 4: Copy the elements of random matrix
mat[16][16] into key[256] (row major wise)
Step 5: pass=1, times3=1, ch1=0
Step 6: Read a block from the input file f1, where the number of characters in the block ≤ 256
Step 7: If block size < 256 then goto Step 15
Step 8: copy all the characters of the block into an array
str[256]
Step 9: call function encryption where str[] is passed as
parameter along with the size of the current block
Step 10: if pass=1 then
times=(times+times3*11)%64
pass=pass+1
else if pass=2 then
times=(times+times3*3)%64
pass=pass+1
else if pass=3 then
times=(times+times3*7)%64
pass=pass+1
else if pass=4 then
times=(times+times3*13)%64
pass=pass+1
else if pass=5 then
times=(times+times3*times3)%64
pass=pass+1
else if pass=6 then
times=(times+times3*times3*times3)%64
pass=1
Step 11: call function randomization() with
current value of times
Step 12: copy the elements of mat[16][16] into
key[256]
Step 13: read the next block
Step 14: goto Step 7
Step 15: copy the last block (residual character if any) into
str[]
Step 16: call function encryption() using str[] and the no. of
residual characters
Step 17: Return
2.4.2 Algorithm of function encryption(str[],n):
Step 1: Start encryption() function
Step2: ch1=0
Step 3: calculate ch=(str[0]+key[0]+ch1)%256
Step 4: write ch into output file
Step 5: ch1=ch
Step 6: i=1
Step 7: if i ≥ n then goto Step 13
Step 8: ch=(str[i]+key[i]+ch1)%256
Step 9: write ch into the output file
Step 10: ch1=ch
Step 11: i=i+1
Step 12: goto Step 7
Step 13: Return
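The feedback step of encryption() can be sketched as follows, together with its inverse (the decryption routine is our own reconstruction; the paper only gives the encryption side):

```python
def vernam_feedback_encrypt(block: bytes, key: bytes) -> bytes:
    """Steps 2-12 of encryption(): ch = (str[i] + key[i] + ch1) % 256,
    where ch1 is the previous ciphertext byte (the feedback)."""
    out, ch1 = [], 0
    for i, b in enumerate(block):
        ch = (b + key[i] + ch1) % 256
        out.append(ch)
        ch1 = ch
    return bytes(out)

def vernam_feedback_decrypt(cipher: bytes, key: bytes) -> bytes:
    """Inverse: subtract the key byte and the previous ciphertext byte."""
    out, ch1 = [], 0
    for i, c in enumerate(cipher):
        out.append((c - key[i] - ch1) % 256)
        ch1 = c
    return bytes(out)
```

Because each ciphertext byte feeds into the next, changing one input byte changes every subsequent output byte, which is the avalanche behaviour demonstrated in section 3.1.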
2.4.3 Algorithm of function randomization():
The randomization of key matrix is done using the
following function calls:
Step-1: call Function cycling()
Step-2: call Function upshift()
Step-3: call Function downshift()
Step-4: call Function leftshift()
Step-5: call Function rightshift()
Note: Cycling, upshift, downshift, leftshift and rightshift are matrix operations performed (applied) on the matrix formed from the key. The aforementioned operations are the steps followed in the MSA algorithm [4] proposed by Nath et al.
After the execution of modified Vernam Cipher, each block is
written down into the file and further processed by next steps
of the cipher method.
2.5 Modified MSA Randomization
Nath et al. [4][7] proposed a symmetric key method in which a random key generator generates the initial key, and that key is used for encrypting the given source file. The MSA method [4] is basically a substitution method: we take 2 characters from the input file, search for the corresponding characters in the random key matrix, and store the encrypted data in another file. The MSA method provides multiple encryption and multiple decryption. The key matrix (16x16) is formed from all characters (ASCII codes 0 to 255) in a random order.
The randomization of key matrix is done using the
following function calls:
Step-1: Function cycling()
Step-2: Function upshift()
Step-3: Function rightshift()
Step-4:Function downshift()
Step-5:Function leftshift()
N.B.: Cycling, upshift, downshift, leftshift and rightshift are matrix operations performed (applied) on the matrix formed from the key. The detailed description of the above operations is given in the MSA [4] algorithm.
The above randomization process is applied n1 times, and each time the sequence of operations is changed to make the system more random. Once the randomization is complete, one complete block is written to the output key file.
In our methods SD-AEI [6] and SD-AIES, we use the same concept of randomization, but instead of randomizing the key matrix, we apply the randomization technique to the whole file, picking up each block from the image file. Basically, the whole file is broken up into a number of blocks of data, the randomization technique is applied to each block of data of the image file, and after the randomization is complete each block is written to the output file to form the final encrypted image file. The modified randomization algorithm followed in the SD-AIES method is:
Step-1: Function cycling()
Step-2: Function upshift()
Step-3: Function rightshift()
Step-4: Function left_diagonal_randomization()
Step-5: Function cycling() for Code number of times
Step-6: Function downshift()
Step-7: Function leftshift()
Step-8: Function right_diagonal_randomization()
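A sketch of how such a randomization schedule can be applied to one 256-byte block is shown below. The individual operations here are stand-ins (the paper defines cycling, upshift, etc. in the MSA algorithm [4]); each is modeled as a simple invertible permutation of a 16x16 block purely for illustration:

```python
def upshift(m):
    """Stand-in: rotate the rows of the 16x16 block upward by one."""
    return m[1:] + m[:1]

def leftshift(m):
    """Stand-in: rotate each row of the block left by one."""
    return [row[1:] + row[:1] for row in m]

def randomize_block(block: bytes) -> bytes:
    """Apply a (stand-in) schedule of matrix operations to one block."""
    assert len(block) == 256
    m = [list(block[i * 16:(i + 1) * 16]) for i in range(16)]
    for operation in (upshift, leftshift, upshift):  # placeholder schedule
        m = operation(m)
    return bytes(b for row in m for b in row)
```

Because every operation is a permutation, the block's bytes are rearranged but never lost, which is what makes the randomization reversible at decryption time.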
3. IMPORTANT FEATURES
In this section we discuss a few special features of the SD-AIES method, as follows:
3.1 Effectiveness of Generalized Modified
Vernam Cipher
The use of the modified Vernam Cipher is an important element of this method. The feedback mechanism in the modified Vernam Cipher is the game-changing component, and it makes the entire cipher system very secure. Even if there is only a slight change in the original file, the entire content of the final encrypted file will be totally different from the previously encrypted file.
For example, we chose two test cases to show the effectiveness of the modified Vernam Cipher by analyzing the frequency of the characters of the encrypted files, i.e. by studying the spectral analysis of the encrypted files. The following table shows the test cases:
TABLE 1: Test Cases for Modified Vernam Cipher

Serial No.  Test Case
1           File containing 2048 bytes of "A" (AAAA…A)
2           File containing 2047 bytes of "A" and 1 byte of "B" (AAAA…AB)

Fig 2.1 shows the spectral analysis of the test case 1 and Fig
2.2 shows the spectral analysis of the encrypted file of test
case 2.

Fig 2.1: Spectral Analysis of Test Case 1

Fig 2.2: Spectral Analysis of Test Case 2
Thus, from the spectral analysis it is evident that there is no pattern match between the two test cases and the peaks are totally different. This proves that even if there is a slight change in the original file, the final encrypted file will be totally different.
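The spectral analysis above amounts to a byte-frequency histogram of each encrypted file; a minimal way to reproduce such a comparison (our own illustration) is:

```python
from collections import Counter

def byte_spectrum(data: bytes) -> Counter:
    """Frequency of each byte value, i.e. the data behind Fig 2.1/2.2."""
    return Counter(data)

def differing_values(a: bytes, b: bytes) -> int:
    """Number of byte values whose frequencies differ between two files."""
    sa, sb = byte_spectrum(a), byte_spectrum(b)
    return sum(1 for v in range(256) if sa[v] != sb[v])
```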
3.2 The Difference between Bits Rotation
and Reversal Method Vs Modified Bits
Rotation and Reversal Method
The Bits Rotation and Reversal Method used in the SD-EI and SD-AEI image encryption techniques was dependent on the length of the password: the bits were rotated and reversed according to the effective length of the password. For example, if the password is "Somdip", then LR (the effective length of the password) = 6 according to the Bits Rotation and Reversal technique, and thus 6 bits of every pixel are rotated and then reversed. However, many passwords can be of the same length, and the result of this method will be the same for all such passwords. For example, if the password is "123456" or "DeySYS", the effective length (LR) is still 6, and the result of the Bits Rotation and Reversal technique will be the same for both passwords.
So, to make the method more effective and secure, we add the ASCII values of all the bytes in the password to generate N and use the effective number (NR) instead of the effective length (LR) for the Bits Rotation and Reversal technique. For example, if the passwords are "Somdip", "123456" and "DeySYS", then the effective numbers are 4, 1 and 6 respectively, so the result of the Bits Rotation and Reversal technique will be different for all three passwords. Still, even in the modified Bits Rotation and Reversal technique, two passwords are likely to produce the same effective number (NR), because the effective number lies in the range 0-6 (since NR = N mod 7) and the summation of ASCII values may also lead to the same sum. For example, the passwords "DeY" and "DYe" generate the same effective number (NR). This is a drawback of the system, but the method is still better than the normal Bits Rotation and Reversal Method.
4. BLOCK DIAGRAM OF SD-AIES
METHOD
In this section, we provide the block diagram of SD-AIES
method.

Fig 3: Block Diagram of SD-AIES Method
5. RESULTS AND DISCUSSIONS
We provide a few results of the proposed SD-AIES method in the following table.

TABLE 2: Results of SD-AIES

Original File        Encrypted File
(image)              No Preview
(image)              No Preview
(image)              No Preview
From the results section it is not possible to judge the effectiveness of the SD-AIES method visually, because the end result of both SD-AEI and SD-AIES is the same: there is no preview of the encrypted file, since the internal structure of the file is already disrupted by the encryption methods.
6. CONCLUSION AND FUTURE SCOPE
In this paper, the author proposes a standard method of image encryption, which first tampers with the image and then disrupts the file structure of the image file. The SD-AIES method is very successful at encrypting the image to maintain its security and authentication. The inclusion of the modified Bits Rotation and Reversal technique, and of the modified Vernam Cipher with feedback mechanism, makes the system even stronger than it used to be. In future, the security of the method can be further enhanced by adding more secure bit- and byte-manipulation techniques to the system, and the author has already started to work on that.
7. ACKNOWLEDGMENTS
Somdip Dey would like to thank his fellow students and his professors for their constant enthusiasm and support. He would also like to thank Dr. Asoke Nath, founder of the Department of Computer Science, St. Xavier's College [Autonomous], Kolkata, India, for providing his feedback on the method and helping with the preparation of the project. Somdip Dey would also like to thank his parents, Sudip Dey and Soma Dey, for their blessings and constant support, without which the completion of the project would not have been possible.
8. REFERENCES
[1]. Mitra et al., "A New Image Encryption Approach using Combinational Permutation Techniques", IJCS, 2006, vol. 1, no. 2, pp. 127-131.
[2]. Saroj Kumar Panigrahy, Bibhudendra Acharya, Debasish Jena, "Image Encryption Using Self-Invertible Key Matrix of Hill Cipher Algorithm", 1st International Conference on Advances in Computing, Chikhli, India, 21-22 February 2008.
[3]. Bibhudendra Acharya, Saroj Kumar Panigrahy, Sarat Kumar Patra, and Ganapati Panda, "Image Encryption Using Advanced Hill Cipher Algorithm", International Journal of Recent Trends in Engineering, vol. 1, no. 1, May 2009, pp. 663-667.
[4]. Asoke Nath, Saima Ghosh, Meheboob Alam Mallik, "Symmetric Key Cryptography using Random Key generator", Proceedings of the International Conference on Security and Management (SAM'10, held at Las Vegas, USA, July 12-15, 2010), P-Vol-2, pp. 239-244 (2010).
[5]. Somdip Dey, "SD-EI: A Cryptographic Technique To Encrypt Images", Proceedings of the International Conference on Cyber Security, CyberWarfare and Digital Forensic (CyberSec 2012), held at Kuala Lumpur, Malaysia, 2012, pp. 28-32.
[6]. Somdip Dey, "SD-AEI: An advanced encryption technique for images", 2012 IEEE Second International Conference on Digital Information Processing and Communications (ICDIPC), pp. 69-74.
[7]. Asoke Nath, Trisha Chatterjee, Tamodeep Das, Joyshree Nath, Shayan Dey, "Symmetric key cryptosystem using combined cryptographic algorithms - Generalized modified Vernam Cipher method, MSA method and NJJSAA method: TTJSA algorithm", Proceedings of WICT 2011, held at Mumbai, 11th-14th Dec 2011, pp. 1175-1180.
[8]. Somdip Dey, "SD-REE: A Cryptographic Method To Exclude Repetition From a Message", Proceedings of the International Conference on Informatics & Applications (ICIA 2012), Malaysia, pp. 182-189.
[9]. Somdip Dey, "SD-AREE: A New Modified Caesar Cipher Cryptographic Method Along with Bit-Manipulation to Exclude Repetition from a Message to be Encrypted", Computing Research Repository - CoRR, vol. abs/1205.4279, 2012.
[10]. Somdip Dey, Joyshree Nath and Asoke Nath, "An Advanced Combined Symmetric Key Cryptographic Method using Bit Manipulation, Bit Reversal, Modified Caesar Cipher (SD-REE), DJSA method, TTJSA method: SJA-I Algorithm", International Journal of Computer Applications 46(20): 46-53, May 2012. Published by Foundation of Computer Science, New York, USA.
[11]. Somdip Dey, Joyshree Nath, Asoke Nath, "An Integrated Symmetric Key Cryptographic Method - Amalgamation of TTJSA Algorithm, Advanced Caesar Cipher Algorithm, Bit Rotation and Reversal Method: SJA Algorithm", IJMECS, vol. 4, no. 5, pp. 1-9, 2012.
[12]. Somdip Dey, Kalyan Mondal, Joyshree Nath, Asoke Nath, "Advanced Steganography Algorithm Using Randomized Intermediate QR Host Embedded With Any Encrypted Secret Message: ASA_QR Algorithm", IJMECS, vol. 4, no. 6, pp. 59-67, 2012.
[13]. Somdip Dey, Joyshree Nath, Asoke Nath, "Modified Caesar Cipher method applied on Generalized Modified Vernam Cipher method with feedback, MSA method and NJJSA method: STJA Algorithm", Proceedings of FCS'12, Las Vegas, USA.
[14]. http://en.wikipedia.org/wiki/RSA_(algorithm) [ONLINE]
[15]. http://en.wikipedia.org/wiki/Elliptic_curve_cryptography [ONLINE]
[16]. Behrouz A. Forouzan, "Cryptography & Network Security", Tata McGraw Hill Book Company.



Measuring Security of Web Services in Requirement Engineering Phase

Davoud Mougouei¹, Wan Nurhayati Wan Ab. Rahman², Mohammad Moein Almasi³
Faculty of Computer Science and Information Technology
Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia
dmougouei@gmail.com¹, wannur@fsktm.upm.edu.my², moein.almasi@outlook.com³

ABSTRACT

Addressing security in the early stages of web service development has always been a major engineering trend. However, to assure the security of web services, security evaluation must be performed in a rigorous and tangible manner. The results of such an evaluation, if performed in the early stages of the development process, can be used to improve the quality of the target web service. On the other hand, it is impossible to remove all security faults during the security analysis of web services. As a result, absolute security can never be achieved, and a security failure may occur during the execution of the web service. To avoid security failures, a measurable level of fault tolerance must be achieved through partial satisfaction of security goals; thus any proposed measurement technique must account for this partiality. Even though there are some approaches to assessing the security of web services, there is still no precise model for evaluating security goal satisfaction, specifically during the requirement engineering phase. This paper introduces a Security Measurement Model (SMM) for evaluating the Degree of Security (DS) in the security requirements of web services, taking into consideration partial satisfaction of security goals. The proposed model evaluates the overall security of the target service by measuring the security in the Security Requirement Model (SRM) of the service. The proposed SMM also takes into account cost, technical ability, impact and flexibility as the key features of security evaluation.

KEYWORDS

Vulnerability; Web Service; Threat; Security Fault; Web
Service Security
1 INTRODUCTION

Security has always been a vital concern in the development of web services. However, current software development methods largely neglect the engineering of security into system analysis, and particularly into the requirement elicitation process [1]. Even though some researchers have attempted to integrate security analysis into the requirement phase, it is not yet clearly specified how to accomplish this during the requirements engineering process [2]. On one hand, it is not always possible to fully mitigate the vulnerabilities or threats within the service; on the other hand, the existence of faults in the service may ultimately lead to a security failure. Avoiding security failure of the target web service requires being flexible and tolerant in the presence of security faults [3]. To facilitate this, fault tolerance must be considered in the security requirements of the target web service. In [4], we presented a goal-based approach to address fault tolerance in the security requirements of security-critical systems. The method contributes a flexible model for the requirements of security-critical systems, and based on this model we have constructed a security requirement model for web services. Our intention in the current work is to help security analyzers assess the Overall Degree of Security (ODS) of the target service by explicitly factoring in security attributes such as impact, technical ability, cost and flexibility of the security countermeasures introduced by the security requirement model of the target web service. For this reason, we divide the applied security mitigations into four categories as
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 89-98
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)

described in [4], to support evaluation of the degree of security of security goals with respect to the cost, flexibility, technical ability and impact of those goals as countermeasures to security threats. Hence, an SMM has been introduced to address the assessment of security in the security requirements of web services. Its integration into the SRM makes the proposed models amenable to analysis and alteration at requirement engineering time. In our previous work [4] we introduced some mitigation techniques to mitigate security faults and ultimately produce a flexible model for a given system specification. In this paper we also measure the partial satisfaction of security goals proposed in [4] to address fault tolerance in the security specification of the system. This paper has two main contributions. Firstly, it presents a model for evaluating the degree of security in the security requirements of web services. Secondly, it introduces a method for calculating the degree of security for all of the security goals, and consequently for the SRM of the web service, by explicitly factoring the security goal attributes and the characteristics of the logical model of the SRM [4] into the evaluation process.
The validity of our approach is demonstrated by applying it to the SRM of a typical online money transfer service (MTS), a service that offers money transfer to beneficiary accounts. The remainder of the paper is organized as follows: Section 2 discusses related work; Section 3 presents our measurement model and introduces MTS as our running application; Section 4 describes the DS attributes; Section 5 gives the details of evaluating security for MTS; finally, Section 6 concludes the paper and discusses future work.

2 RELATED WORKS

With the development and utilization of web services, many researchers have become concerned about web service security, which has led to different evaluation models and frameworks from different perspectives. In [5], Zhang proposed an integrated security framework based on authentication, authorization, integrity and confidentiality, integrating these mechanisms to obtain more secure web services. Some researchers focus on improving web service technologies; for instance, paper [6] focuses on enhancing the security of the web service WSDL file and proposes a model for encrypting the WSDL document to handle its security problems. Moreover, Li Jiang et al. in their work [7] state that research in the area of web services is mainly concerned with the security of the web service rather than the evaluation of its secureness; they proposed an evaluation model based on the STRIDE model that determines whether or not a web service is secure. Gonzalez et al., in paper [8], offered sets of metrics to assess e-commerce website requirements in terms of security and usability by means of human-computer interaction; their proposed evaluation model is based on the GQM approach. Furthermore, in [9], the author proposed a security measurement model that introduces different categories of security measurements and their corresponding factors in order to detect potential security defects. Wei Fu et al., in their work [10], developed web service security analysis tools that look through the source code, generate the dependency graph, and through it identify unsafe methods and their spread, which helps make these methods invisible to outside users after the web service is published.
The authors of [11] proposed a client-transparent fault tolerance model for web servers which recognizes server errors and redirects requests to a reserved backup server in order to reduce service failures. Santos et al. [12] proposed a fault tolerance infrastructure that adds an extra layer acting as a proxy between client requests and service provider responses to ensure client-transparent fault tolerance. In paper [13], the author also cared for uncertainty factors in the environment through partial satisfaction of goals in self-adaptive systems. Web services are required to operate with a high level of security and dependability, and several studies have proposed web service strategies to address this issue. Merideth et al. [14] introduced Thema, a Byzantine fault-tolerance middleware system, to execute Byzantine fault tolerance by capturing all requests and responses.


Figure 1. OBS conceptual model in terms of use cases and misuse cases.
3 THE PROPOSED MEASUREMENT MODEL

3.1 Running Application

To illustrate the validity of our approach, we applied
it to a case study provided in [15] describing an
Online Banking System (OBS) as a security critical
system (SCS). We have focused on the Money
Transfer Service (MTS) in OBS.
OBS provides some standard banking services
including money transfer service over the internet.
The bank accounts are a tempting target for
hackers. For this reason, MTS transactions must be
protected to keep financial losses to a minimum.
The availability of MTS is as important as the
confidentiality and integrity. The MTS also has a
server which should be protected from any possible
misuse. In addition to that, an attacker may exploit
the MTS internal communication network to
threaten the transactions.
In addition, MTS should prevent unauthorized online access to the service. Thus, it supports user authentication by checking the user name and password. An attacker can still guess either the user name or the password, but this is supposed to be difficult. MTS must offer reasonable assurance that its customers' accounts are secure. The main threat that concerns MTS is that an attacker will transfer money out of customers' accounts.
MTS as a web service relies on security concepts to work properly. Therefore, 1) maintaining integrity, 2) achieving a high level of confidentiality and 3) keeping OBS available to its users are, as the key features of security [3], extremely important.

3.2 Methodology

Our proposed approach contains several steps. For a given security requirement model, the security goals and requirements are first categorized in terms of the mitigation technique by which they are refined. Afterward, the DS is calculated for each security goal (requirement) based on its corresponding category attributes and formula. Note that all goals and requirements are elicited from the SRM of the target web service. The SRM is formally described with respect to existing service requirement artifacts such as attack trees [16] or use case and misuse case diagrams [17]. The SRM is a tree-like model with AND-OR relations among security goals. Therefore, after calculating the degree of security for all of the security requirements (the so-called leaves), the calculation is propagated to the higher levels of the SRM based on the logical relations among security goals, also considering the mitigation technique through which each goal has been refined. In the last step, the overall degree of security of the SRM is calculated for the target web service.
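The bottom-up propagation described above can be sketched generically as follows. This is an illustration only: the paper's equations (1) and (2) define the actual combination rules per mitigation category, so min and max below are merely stand-in combinators for AND and OR relations:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Goal:
    name: str
    relation: str = "AND"        # relation among children: "AND" or "OR"
    ds: Optional[float] = None   # set directly for leaf requirements
    children: List["Goal"] = field(default_factory=list)

def propagate_ds(goal: Goal) -> float:
    """Propagate DS values from the leaf requirements up to the SRM root."""
    if not goal.children:
        return goal.ds
    child_ds = [propagate_ds(c) for c in goal.children]
    return min(child_ds) if goal.relation == "AND" else max(child_ds)
```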

3.3 Model Description

The SRM is supposed to reflect the security goals of
the web service based on the use case model of the
system illustrated in Figure 1. Every security goal in
the SRM is refined through the application of one of
the four mitigation techniques mentioned in [18].
Depending on the mitigation technique used to refine
the goal, the calculation of DS and the attributes
considered for this calculation may differ. In other
words, the attributes taken into consideration to
assess a goal, either individually or as a part of the
SRM, depend on the category of mitigation it
belongs to. These attributes include technical ability,
impact, cost of implementation, and flexibility of the
goal in the presence of security faults. Table 1
describes the proposed SMM in terms of these
categories and attributes.

Table 1. Categories and Attributes in Proposed SMM

Mitigation Technique           Attributes
Add low level sub goals (ALG)  Cost of implementation (C);
                               Technical ability of goal (T);
                               Impact of goal (I);
                               Flexibility of goal (F)
Relaxation (RLX)               Sum of DSs of descendants (S);
                               Product of DSs of descendants (P);
                               Flexibility of goal (F)
Add High Level Goal (AHG)      Sum of DSs of descendants (S);
                               Product of DSs of descendants (P);
                               Flexibility of goal (F)
No refinement (NF)             -

For each goal in the SRM the DS values will be
calculated based on equation (1). Finally, the
Overall Degree of Security (ODS) for the SRM of the
target web service will be calculated based on
equation (2).

DS_i: degree of security of goal g_i
T_i: technical ability of goal g_i
I_i: impact of goal g_i
C_i: cost of implementation of goal g_i
F_i: flexibility of goal g_i

DS_i = 0.5 (0.01 T_i I_i / C_i + F_i)   if g_i ∈ ALG,
DS_i = 0.5 (S_i + F_i)                  if g_i is an OR node in AHG or RLX,
DS_i = 0.5 (P_i + F_i)                  if g_i is an AND node in AHG or RLX,

with 0 < T_i ≤ 1, 0 < I_i ≤ 100, 0 < C_i ≤ 100, and 0 ≤ F_i ≤ 1.   (1)
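As an illustration, the ALG case of equation (1) can be checked against the values of Table 3 in a few lines of Python. This is only a sketch: the closed form DS = 0.5 (0.01 T I / C + F) is inferred from the table values, and the function name is ours.

```python
# Minimal sketch of the ALG case of equation (1); names are illustrative.
def ds_alg(c, t, i, f):
    """DS = 0.5 * (0.01 * T * I / C + F) for a goal refined by
    adding low-level sub-goals (ALG)."""
    assert 0 < t <= 1 and 0 < i <= 100 and 0 < c <= 100
    return 0.5 * (0.01 * t * i / c + f)

# Reproduces, e.g., the Table 3 row R1.1.3.2.2.1.1 (C=5, T=0.9, I=80, F=0.1):
print(round(ds_alg(5, 0.9, 80, 0.1), 5))  # 0.122
```

Running the same function over the other rows of Table 3 reproduces the published DS values to five decimal places.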

4 SECURITY ATTRIBUTES

In this section, the attributes taken into account in
the calculation of the DS of each goal, and of the ODS,
are discussed.

4.1 Technical ability (T)

Technical ability, as one of the attributes for the
calculation of DS, reflects how easily the goal can be
implemented in the subsequent stages of development,
in terms of the complexity of the goal and the
expertise available in the development team.
Technical ability can be calculated using equation
(3). The technical complexity of the implementation in
equation (3) can be determined by any accepted
method for calculating program complexity.
However, since our proposed measurement model
requires the complexity at the requirements
engineering stage, techniques such as Albrecht's
function points [18], which can estimate complexity in
the early stages of development, are advised.
Nonetheless, any method or technique capable of
ODS: overall degree of security
DS_i: degree of security of goal i
SEV_i: severity of the threat mitigated by goal i

ODS = ( Σ_{i=1..n} DS_i SEV_i ) / ( Σ_{i=1..n} SEV_i ),  0 < SEV_i ≤ 100   (2)


performing this calculation based on the SRM is
applicable. Technical ability, as given in equation
(3), is a number between zero and one.

T_i: technical ability of goal i
TC_i: technical complexity of the implementation of goal i

T_i = 1 / TC_i,  1 < TC_i ≤ 100   (3)

4.2 Impact (I)

Impact is another attribute for calculating the DS of
security goals in the requirement model of the web
service. This attribute reflects the efficiency of the
mitigation constructed by the security goal; in
other words, it describes to what extent the security
goal is able to mitigate the corresponding security
threat. This parameter takes a value between zero
and one hundred, specified by a security expert,
who can either be a member of the development
team or an external security expert.

4.3 Cost of Implementation (C)

Cost is one of the main factors in evaluating
security requirements. Sometimes a security
requirement can make a great contribution to the
security of the service, but its cost of
implementation does not allow the development
team to implement it. On the one hand, development
cost is one of the key features of the web service
market: a lower development cost contributes to more
profit and to keeping abreast of the technology
changes in the web market. On the other hand, the
extent to which security is critical for a web
service determines the budget that can be
spent on security enhancements. The value for cost
is specified by the development team and is then
used in the calculation of the DSs.

4.4 Flexibility (F)

Since it is not always possible to completely satisfy
the goals, we sometimes need to accept partial
goal satisfaction [12]. We address this partiality in
terms of the relaxed attributes in RELAX
statements. Accordingly, we use fuzzy temporal
logic as the semantics of our applied syntax to take
the security faults into account during the RE
process [18]. This way we can integrate fault
tolerance into the target system's SRM. If partial
satisfaction of a security goal is acceptable, we
RELAX the goal. We apply this technique when
threats can be partially mitigated. In this case, we
add flexibility by explicitly factoring the security
faults into the SRM. This contributes to a fault-
tolerant model of the target system that can resist
the presence of unavoidable security faults.
According to the proposed model, we calculate the
flexibility of each goal based on the category it
belongs to; basically, the flexibility of a goal depends
on the mitigation technique through which it has been
derived. The calculations of flexibility for all of the
categories are given in equation (4). As depicted in
equation (4), measuring the DS in the proposed
SMM takes the fuzziness of RELAX statements
into account by incorporating the membership
function of the corresponding fuzzy set into the
calculation of the flexibility of the goal. This is
applied only to goals belonging to the RLX set.

F_i: flexibility of goal i
g_i: goal i

F_i = 0.2                       if g_i ∈ AHG,
F_i = μ_S(Δ(g_i) − v_opt)       if g_i ∈ RLX,
F_i = 0.1                       if g_i ∈ ALG,

where RLX = {g | g is RELAXed}, Δ(g_i) is the degree of
satisfaction of goal g_i measured in the presence of security
faults, v_opt is the optimum value of the relaxed attribute, and
S = {(x, μ_S(x)) | μ_S(x) ∈ [0, 1]} is a fuzzy set whose membership
function satisfies μ_S(0) = 1 and decreases continuously around
zero.   (4)
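The flexibility rule of equation (4), together with the linear membership function used by the running example in Section 5 (equation (5)), can be sketched as follows. The clipping to [0, 1] and the function names are our assumptions, added only to keep the sketch self-contained.

```python
# Sketch of equation (4); the membership function follows equation (5)
# of the running example (clipping to [0, 1] is our assumption).
def membership(delta, opt=50.0):
    return max(0.0, min(1.0, 1.0 - abs(delta - opt) / 100.0))

def flexibility(category, delta=None, opt=50.0):
    if category == "AHG":      # goal added as a high-level goal
        return 0.2
    if category == "RLX":      # RELAXed goal: use the fuzzy membership
        return membership(delta, opt)
    return 0.1                 # ALG: low-level sub-goal

print(flexibility("RLX", delta=35))  # 0.85, as for R1.1.3.2
```

With Δ = 35 and an optimum of 50, the RELAXed goal R1.1.3.2 obtains the flexibility 0.85 listed in Table 4.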



Figure 2. SRM for MTS. (Junction points represent AND-relations; their absence means an OR-relation)
A fuzzy set is a set whose elements have degrees of
membership. Fuzzy set theory permits the gradual
assessment of the membership of elements in a set,
described by a membership function μ_S(x) taking
values in the real interval [0, 1]. In other words, a
fuzzy set is a pair (S, μ_S), where S is a set and
μ_S(x) ∈ [0, 1] captures the degree of membership of
each element x of S.

5 APPLYING THE PROPOSED SMM TO MTS

In this section we apply the proposed SMM to the
MTS through the following steps.

5.1 Step 1: Categorization of All Goals

Step 1 is to categorize the security goals in the SRM
based on the mitigation technique by which they have
been derived. As discussed before, there are four
different mitigation techniques. By the end of the
categorization process, no requirement falls under
the NF category, because no requirement is derived
by the NF mitigation. An excerpt of the SRM for
MTS is given in Figure 2. The top-level security goal
is to protect the MTS against possible attacks (i.e.,
Protect [MTS]). The SRM is developed in several
steps. First, we initiate the SRM by refining the
top-level goal to protect the service. As a web
service, MTS should also be reliable and available to
users. From the identified assets we can specify the
system's security goals at the highest level of the
SRM to protect those assets. The SRM may include
other security requirements too, but in this paper we
concentrate on only one of these goals (R1) when
applying the proposed SMM. At level two of the
SRM we have likewise reduced the goals to only
R1.1. This means, for instance, that maintaining the
security of bank accounts (R1) in the SRM includes
other security goals, which we have eliminated to
simplify the model for applying our proposed SMM.
To categorize the goals in the SRM we look into the
formal specification of the SRM to determine the
mitigation technique by which each goal was
introduced. Otherwise it might be difficult and subjective to
categorize some of the goals as high-level or low-
level goals. Normally, high-level goals are goals
whose addition to the model leads to radical
changes in the behavior of the target service.
Consider the situation in which the ID and password
are guessed by the attacker and the MTS cannot
tolerate this security violation. In this case, we have
to add redundant behavior in terms of high-level
security goal(s) to tolerate the threat. As depicted
in Figure 2, we may add supplementary
authentication mechanisms, such as challenge-
response, as high-level security goals to avoid
unauthorized access to accounts in case of a
violation of R1.1.3.2. However, these new goals
represent new behavior, and the closer to the
top-level goal they are, the greater the cost of
implementation. The new goal is OR-ed with the
other high-level goals. As shown, the definition of
high or low level is comparative: we call a goal
high-level when adding it to the system's SRM
causes radical changes in the specification of the
original security requirement model. We have listed
the categorized security goals of the SRM in Table 2.
As depicted in Table 2, there is only one RELAXed
[19] requirement (R1.1.3.2) for the target web
service.

Table 2. Categorized Security Goals of MTS

Category                       Goal / Requirement
Add low level sub goals (ALG)  R1.1.3.2.1.1.1, R1.1.3.2.1.1.2,
                               R1.1.3.2.1.2.1, R1.1.3.2.2.1.1,
                               R1.1.3.2.2.2.1
Relaxation (RLX)               R1.1.3.2
Add High Level Goal (AHG)      R1, R1.1, R1.1.3, R1.1.4,
                               R1.1.3.2.1, R1.1.3.2.2,
                               R1.1.3.2.1.1, R1.1.3.2.1.2,
                               R1.1.3.2.2.1, R1.1.3.2.2.2
No refinement (NF)             -

5.2 Step 2: Calculation of DS for Category ALG

In this step we calculate the DS for the low-level
requirements (leaves) of the SRM. The calculations
are performed based on equation (1) and listed in
Table 3. For example, the degree of security of the
low-level goal R1.1.3.2.2.1.1, which limits the
number of password trials, is 0.122, the highest
among the low-level goals in the SRM. Although
enforcing encryption contributes an acceptable level
of mitigation, its comparatively low technical ability
and high cost yield a DS of only 0.0535, the lowest
among all DSs in Table 3.

Table 3. Calculation of DS for Category ALG

Requirement C T I F DS
R1.1.3.2.1.1.1 30 0.7 90 0.1 0.06050
R1.1.3.2.1.1.2 50 0.5 70 0.1 0.05350
R1.1.3.2.1.2.1 5 0.9 30 0.1 0.07700
R1.1.3.2.2.1.1 5 0.9 80 0.1 0.12200
R1.1.3.2.2.2.1 5 0.9 60 0.1 0.10400
R1.1.4 20 0.9 90 0.1 0.07025

5.3 Step 3: Calculation of DS for Categories AHG
and RLX

In this step we calculate the DS for the high-level
requirements of the SRM. The calculations are
performed based on equation (1) and listed in
Table 4. In order to calculate the DS for AHG
goals, we first need the DSs of the ALG goals
computed in Step 2. Then we propagate the
calculated values to the higher levels of the SRM
and recalculate the DS of each higher-level goal by
factoring the flexibility into the calculation. The
flexibility, as described in section 4.4, is calculated
based on equation (4). Concomitantly with the
calculation of DS for high-level goals, we calculate
the DS for RELAXed goals. As discussed before,
and based on equation (4), measuring the DS for
RELAXed goals in the proposed SMM takes the
fuzziness of RELAX statements into account by
incorporating the membership function of the
corresponding fuzzy set into the calculation of the
flexibility of the goal. This is applied only to goals
belonging to the RLX category. How the calculated
DS is propagated to higher levels of the model
depends on the relations
among nodes in the logical model of the SRM. If a
node (goal) in the SRM is an OR node, then its DS
is calculated from the sum of the DSs of its
descendant nodes; if it is an AND node, the DS is
calculated from the product of the DSs of its
descendant nodes.
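This bottom-up propagation can be sketched as a small recursion over the SRM tree. The node encoding is our assumption; the combination rule (product for AND, sum for OR, then DS = 0.5 (aggregate + F)) follows equation (1) as used in Table 4.

```python
# Sketch of DS propagation: OR nodes sum their children's DSs, AND nodes
# take the product, then DS = 0.5 * (aggregate + F) as in equation (1).
from math import prod

def propagate(node):
    """A leaf is {'ds': value}; an inner node is
    {'op': 'AND' or 'OR', 'f': flexibility, 'children': [...]}."""
    if "ds" in node:
        return node["ds"]
    child_ds = [propagate(c) for c in node["children"]]
    agg = prod(child_ds) if node["op"] == "AND" else sum(child_ds)
    return 0.5 * (agg + node["f"])

# R1.1.3.2.2.1 from Table 4: a single child with DS 0.122 and F = 0.2
node = {"op": "OR", "f": 0.2, "children": [{"ds": 0.122}]}
print(round(propagate(node), 3))  # 0.161
```

The same recursion applied to an AND node with children 0.0605 and 0.0535 reproduces the product 0.00324 reported for R1.1.3.2.1.1 in Table 4.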

Table 4. Calculation of DS for Categories AHG and RLX

Category Requirement S P F DS

AHG

R1 0.28087 - 0.2 0.24043
R1.1 0.36174 - 0.2 0.28089
R1.1.3 0.52347 - 0.2 0.36174
R1.1.3.2.1 0.24019 - 0.2 0.22006
R1.1.3.2.2 0.31300 - 0.2 0.25650
R1.1.3.2.1.1 - 0.00324 0.2 0.10162
R1.1.3.2.1.2 0.07700 - 0.2 0.13850
R1.1.3.2.2.1 0.12200 - 0.2 0.16100
R1.1.3.2.2.2 0.10400 - 0.2 0.15200
RLX R1.1.3.2 - 0.05645 0.85 0.45322

We have RELAXed [19] the goal R1.1.3.2 by
assigning the RELAX statement AS CLOSE AS
POSSIBLE TO to the relaxed attribute of the
requirement R1.1.3.2. So, R1.1.3.2 is described as
follows:

R1.1.3.2: OBS shall generally avoid [ID and
Password to Guess] AS CLOSE AS POSSIBLE TO
hardToGuess

The value hardToGuess is a constant representing
the optimum difficulty of guessing the password and
ID. hardToGuess is the optimum value, not
necessarily the maximum value; in other words, the
difficulty of guessing the ID and password might be
less than the maximum value while still being
optimal. This is expressed in terms of the fuzzy
nature of the RELAX semantics:

AG ((Δ(avoid [ID and Password to Guess]) − hardToGuess) ∈ S)

where S is a fuzzy set whose membership function
has value 1 at zero (μ_S(0) = 1) and decreases
continuously around zero, and Δ(avoid [ID and
Password to Guess]) represents the difficulty of
guessing the ID and password, which is compared
to hardToGuess. This means that although we
cannot accurately measure the difficulty of guessing
the ID and password for OBS, the system model
should use the capabilities of the security resources
to provide a best effort at protecting the ID and password
from an attacker. In order to calculate the DS for the
RELAXed goal R1.1.3.2, we need both to calculate
the DS for its descendants, and consequently the S
or P parameters, and to calculate the flexibility of
the goal. To calculate the flexibility of R1.1.3.2 we
need the membership value
μ_S(Δ(R1.1.3.2) − hardToGuess) based on
equation (4). We consider hardToGuess = 50 for
R1.1.3.2, which means the optimum difficulty of
guessing the ID and password is 50. By checking the
MTS model against the goal R1.1.3.2 of MTS
captured by the SRM, in the presence of security
faults, we can determine Δ(R1.1.3.2) over a specific
number of model-checker runs. In our running
example we take Δ(R1.1.3.2) = 35, so we need to
calculate μ_S(−15) from the membership function.
We define the membership function for the
satisfaction of goal R1.1.3.2 in equation (5) as
follows:

μ_S(Δ(R1.1.3.2)) = 1 − |Δ(R1.1.3.2) − 50| / 100   (5)
From equation (5) we have μ_S(35) = 0.85, so the
value for the flexibility of R1.1.3.2 is 0.85
according to equation (4). Consequently, we can
calculate the DS for R1.1.3.2 after propagating the
previously calculated DS values of its descendants.
The results are listed in Table 4. As can be seen in
Table 4, for AND nodes in the SRM we propagate
the product of the descendants, so the S attribute
(the sum of the DSs of the descendant nodes) is left
blank in the table. For OR nodes the P attribute is
left blank, because we propagate the sum of the DSs
of the descendants. If a node has only one child in
the logical model of the SRM, we can treat it either
as an OR node or as an AND node; in our running
example we treated such nodes as OR nodes. An
example of this kind of node in the SRM of MTS is
R1.1.3.


5.4 Step 4: Calculation of ODS for the MTS

In this step we calculate the overall degree of
security for the target web service, MTS. The
calculation is performed based on equation (2). As
can be seen in equation (2), in order to calculate the
ODS for the SRM, we need to identify the severity
of the faults that the security goals in the SRM
mitigate. In our running example (MTS) we assume
the severities of faults listed in Table 5. The severity
of a fault is assumed to be specified by security
experts and ranges from zero to one hundred. Based
on the results in Table 5 we can calculate the ODS
as follows:

ODS = ( Σ_{i=1..n} DS_i SEV_i ) / ( Σ_{i=1..n} SEV_i )
    = 195.11327 / 1035 ≈ 0.189
The total degree of security for the MTS is
approximately 0.189, which means that if we
develop the target web service for MTS based on
the specification given by the SRM and the current
model of the system, the MTS will be able to
tolerate security threats to the extent of 0.189. The
higher the ODS, the more tolerant the target web
service is in the presence of security faults.

Table 5. Calculation of ODS

Category Requirement DS SEV DS·SEV

AHG

R1 0.24043 100 24.04340
R1.1 0.28089 80 22.46945
R1.1.3 0.36174 75 27.13022
R1.1.3.2.1 0.22006 65 14.30385
R1.1.3.2.2 0.25650 65 16.67250
R1.1.3.2.1.1 0.10162 65 6.60520
R1.1.3.2.1.2 0.13850 40 5.54000
R1.1.3.2.2.1 0.16100 65 10.46500
R1.1.3.2.2.2 0.15200 40 6.08000
RLX R1.1.3.2 0.45322 70 31.72558
ALG
R1.1.3.2.1.1.1 0.06050 65 3.93250
R1.1.3.2.1.1.2 0.05350 65 3.47750
R1.1.3.2.1.2.1 0.07700 40 3.08000
R1.1.3.2.2.1.1 0.12200 65 7.93000
R1.1.3.2.2.2.1 0.10400 65 6.76000
R1.1.4 0.07025 70 4.91750
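Equation (2) applied to the DS and SEV columns of Table 5 can be checked with a short script; the values below are copied from the table, and the small difference from the printed numerator (195.11327) comes from rounding in the tabulated DSs.

```python
# Sketch: ODS per equation (2), computed from the Table 5 values.
ds_sev = [
    (0.24043, 100), (0.28089, 80), (0.36174, 75),
    (0.22006, 65), (0.25650, 65), (0.10162, 65),
    (0.13850, 40), (0.16100, 65), (0.15200, 40),
    (0.45322, 70), (0.06050, 65), (0.05350, 65),
    (0.07700, 40), (0.12200, 65), (0.10400, 65),
    (0.07025, 70),
]
ods = sum(ds * sev for ds, sev in ds_sev) / sum(sev for _, sev in ds_sev)
print(round(ods, 3))  # 0.189, matching the paper's result
```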


Figure 3 presents the whole process, from the
categorization of all goals to the calculation of the
overall degree of security for the money transfer
system.



Figure 3. Steps required for calculation of ODS
6 CONCLUSION AND FUTURE WORK

In this work we proposed a measurement model for
evaluating the security of the security requirement
model of a web service. Our proposed approach
takes the security requirement model of the system
as input and measures the degree of security of the
security requirements based on the mitigation
techniques through which they are refined. The
proposed model also takes into consideration
attributes such as cost, technical ability, impact,
and flexibility of the security countermeasures to
measure the security of the target service.
Consequently, the overall degree of security can be
calculated, and the evaluation results can be used to
improve the security of the web service. To
demonstrate the validity of our model, we have
applied it to a typical money transfer service as our
running application.




REFERENCES

1. Haley, C.B., Moffett, J.D., Laney, R., Nuseibeh, B.: A
framework for security requirements engineering. In:
Proceedings of the 2006 International Workshop on
Software Engineering for Secure Systems, Shanghai,
pp. 35--42. (2006)
2. Mead, N.R., Hough, E.D.: Security Requirements
Engineering for Software Systems: Case Studies in
Support of Software Engineering Education. In: Software
Engineering Education and Training, 2006. Proceedings.
19th Conference on, pp. 149--158. (2006)
3. Avizienis, A., Laprie, J.C., Randell, B., Landwehr, C.:
Basic concepts and taxonomy of dependable and secure
computing. In: Dependable and Secure Computing, IEEE
Transactions on, vol. 1, no. 1, pp. 1--33. (2004)
4. Mougouei, D., Moghtadaei, M., Moradmand, S.: A Goal-
Based Modeling Approach to Develop Security
Requirements of Fault Tolerant Security-Critical Systems.
In: Proceedings of the 4th International Conference on
Computer and Communication Engineering, Malaysia,
pp. 200--205. (2012)
5. Zhang, W.: Integrated Security Framework for Secure
Web Services. In: Intelligent Information Technology and
Security Informatics (IITSI), Third International
Symposium on, pp. 17--183. (2010)
6. Mirtalebi, A., Khayyambashi, M.R.: Enhancing Security
of Web Services against WSDL Threats. In: Emergency
Management and Management Sciences (ICEMMS), 2nd
IEEE International Conference on, pp. 920--923. (2011)
7. Jiang, L., Chen, H., Deng, F.: A Security Evaluation
Method Based on STRIDE Model for Web Service. In:
Intelligent Systems and Applications (ISA), 2010 2nd
International Workshop on, pp. 1--5. (2010)
8. Gonzalez, R.M., Martin, M.V., Munoz-Arteaga, J.,
Alvarez-Rodriguez, F., Garcia-Ruiz, M.A.: A
measurement model for secure and usable e-commerce
websites. In: Electrical and Computer Engineering, 2009.
CCECE '09. Canadian Conference on, pp. 77--82. (2009)
9. Lai, S.T.: An Interface Design Secure Measurement
Model for Improving Web App Security. In: Broadband
and Wireless Computing, Communication and
Applications (BWCCA), 2011 International Conference
on, pp. 422--427. (2011)
10. Fu, W., Zhang, Y., Zhu, X., Qian, J.: WSSecTool: A Web
Service Security Analysis Tool Based on Program
Slicing. In: Services (SERVICES), IEEE Eighth World
Congress on, pp. 179--183. (2012)
11. Aghdaie, N., Tamir, Y.: Client-transparent fault-tolerant
Web service. In: Performance, Computing, and
Communications, 2001. IEEE International Conference
on, pp. 209--216. (2001)
12. Santos, G.T., Lung, L.C., Montez, C.: FTWeb: a fault
tolerant infrastructure for Web services. In: EDOC
Enterprise Computing Conference, 2005 Ninth IEEE
International, pp. 95--105. (2005)
13. Cheng, B., Sawyer, P., Bencomo, N., Whittle, J.: A Goal-
Based Modeling Approach to Develop Requirements of
an Adaptive System with Environmental Uncertainty. In:
Model Driven Engineering Languages and Systems, vol.
5795, A. Schürr and B. Selic, Eds. Springer Berlin /
Heidelberg, pp. 468--483. (2009)
14. Merideth, M.G., Iyengar, A., Mikalsen, T., Tai, S.,
Rouvellou, I., Narasimhan, P.: Thema: Byzantine-fault-
tolerant middleware for Web-service applications. In:
Reliable Distributed Systems, 2005. SRDS 2005. 24th
IEEE Symposium on, pp. 131--140. (2005)
15. Edge, K.S.: A framework for analyzing and mitigating
the vulnerabilities of complex systems via attack and
protection trees. Air Force Institute of Technology,
Wright-Patterson AFB, OH, USA. (2007)
16. Edge, K.S., Dalton, G.C., Raines, R.A., Mills, R.F.: Using
Attack and Protection Trees to Analyze Threats and
Defenses to Homeland Security. In: Military
Communications Conference, 2006. MILCOM 2006.
IEEE, pp. 1--7. (2006)
17. Sindre, G., Opdahl, A.L.: Eliciting security requirements
by misuse cases. In: Technology of Object-Oriented
Languages and Systems, 2000. TOOLS-Pacific 2000.
Proceedings. 37th International Conference on, pp.
120--131. (2000)
18. Cheng, B., Sawyer, P., Bencomo, N., Whittle, J.: A goal-
based modeling approach to develop requirements of an
adaptive system with environmental uncertainty. In:
Model Driven Engineering Languages and Systems, A.
Schürr and B. Selic, Eds. Springer Berlin / Heidelberg,
pp. 468--483. (2009)
19. Whittle, J., Sawyer, P., Bencomo, N., Cheng, B., Bruel,
J.M.: RELAX: a language to address uncertainty in self-
adaptive systems requirements. In: Requirements
Engineering, vol. 15, no. 2, pp. 177--196. (2010)

Power Amount Analysis: An Efficient Means to Reveal the Secrets in
Cryptosystems
Qizhi Tian and Sorin A. Huss
Integrated Circuits and Systems Lab (ICS)
TU Darmstadt, Germany
Email: {tian, huss}@iss.tu-darmstadt.de


ABSTRACT

In this paper we propose a novel approach to
reveal the information leakage of cryptosystems
by means of a side-channel analysis of
their power consumption. We first introduce a
novel power trace model based on communication
theory to better understand and efficiently exploit
power traces in side-channel attacks. Then, we
discuss a dedicated attack method denoted as
Power Amount Analysis, which takes more
time points into consideration than many other
attack methods. We use the well-known
Correlation Power Analysis method as the
reference in order to demonstrate the figures of
merit of the advocated analysis method. We then
compare these analysis methods under identical
attack conditions in terms of run time, trace
usage, misalignment tolerance, and internal clock
frequency effects. The resulting advantages of the
novel analysis method are demonstrated by
mounting both attack methods on an FPGA-based
AES-128 encryption module.


KEYWORDS

AES-128 Block Cipher; Power Model;
Trace Model; Correlation Power Analysis;
Power Amount Analysis.


1 Introduction

In 1999, Kocher et al. introduced
Differential Power Analysis (DPA) [1]
as a novel analysis method for revealing
the secret key of a cryptosystem. DPA
then became the premier approach for
exploiting the temporal power consumption
in practical side-channel attacks on
cryptosystems. In the past decade, many
researchers have addressed the side-channel
properties of cryptosystems and contributed
their efforts to this area, resulting
both in new and powerful side-channel
analysis methods next to DPA and in
related countermeasures. Thus, research
on the side-channel properties of
cryptosystem implementations may be
classified into two opposite domains:
one is aimed at the development of
efficient analysis methods to eventually
attack the system, whereas the other is
dedicated to the invention or creation of
countermeasures to harden the system
and thus to reduce or even prevent the
success of such attacks. In other words, a
still-open competition between attack
and defense of cryptosystems has meanwhile been
established.
With regard to attack methods,
Chari et al. published a paper on
the so-called template attack in 2002 [2]. In 2004,
Brier et al. proposed the Correlation
Power Analysis (CPA) method [3]. Later,
in 2005, the stochastic analysis approach
was introduced by Schindler et al. [4]. In
2012, Tian et al. [17] proposed an attack
method called Power Amount Analysis
(PAA), aimed at attacking the cryptosystem
by exploiting a large set of time points,
which may contribute to information
leakage. Compared to the CPA attack,
the PAA attack as outlined in [17] shows
clear advantages in terms of run time,
trace usage, misalignment tolerance,
and internal Clock Frequency Effects
(CFE). In the area of the defense of
cryptosystems, on the other hand, a large
number of countermeasures aimed at
reducing the exploitable information
leakage, i.e., at hardening a cryptosystem,
have been suggested, as detailed
in, e.g., [7], [8].
The fundamental idea of the attack
methods mentioned above is that adversaries
mimic the variation of the power
consumption behavior of the cryptosystem
at hand in the time domain by constructing
a key-dependent power model and by
exploiting some mathematical functions.
Then, various statistical methods, such as
correlation coefficients, least squares, or
maximum likelihood, are applied to analyze
the relation between the power model and
the measured power traces, aimed at
eventually unveiling the secret of
the cryptosystem.
As discussed in [17], the key-dependent
power model is usually based on
some states produced by the cryptographic
operations and then stored in
registers of the cryptosystem hardware.
Although these states seem to change
instantaneously, in reality it takes time to
calculate and to store them. For instance,
if this process requires 0.1 ms and is
being monitored by means of an oscilloscope
operated at a sample rate of 1 MHz,
i.e., a sample interval of 10^-6 s, the
resulting 100 discrete points, which carry
part of the information leakage, will be
used to depict the results of this process
in the monitored power curve in the time
domain. In other words, all these points
should be used to reveal the secret key of
the cryptographic system for the sake of
efficiency.
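The count of leakage-carrying points in this example is simply the operation duration divided by the sample interval:

```python
# Sketch: leakage-carrying sample points for a 0.1 ms operation at 1 MHz.
duration_s = 0.1e-3       # time needed to compute and store the state
sample_rate_hz = 1.0e6    # oscilloscope sample rate (interval 1e-6 s)
points = round(duration_s * sample_rate_hz)
print(points)  # 100
```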
In the CPA attack, the secret of the
cryptographic system is indicated by the
highest correlation peak, i.e., the maximum
similarity between the power traces
and the key-dependent power model.
The highest correlation coefficient value
comes from one certain fixed time point
of the captured traces. This means that
the other time points of the recorded
power traces do not explicitly contribute
to the information leakage exploited by
this analysis method; they are only used
for reference purposes. In other words, in
the CPA attack just one time point is
being exploited, while the other time
points, which clearly do contain parts of
the total information leakage, are
discarded.
Compared to the CPA attack, in both
the template attack and the stochastic
approach several time points are in fact
used to identify the information
leakage in order to reveal the secret key
[4]. But their calculation complexity is
considerably larger than in, e.g., CPA:
the more time points are used, the
more computation time and memory
space are needed. Note that in the profiling
phase of the template attack and of the
stochastic approach a certain amount of
traces has to be captured using an identical
training device [5], which takes even
more execution time.
The PAA, however, exploits a set of
time points contributing to the information
leakage without significantly
increasing the computation effort. This
property stems from a new power trace
model. Such a method is able to exploit
hundreds or even thousands of time
points for revealing the secret key. In
[17] the authors show that the related
computation time is considerably less
than that needed for the CPA attack.
Therefore, in this paper we discuss
the PAA attack's properties in more
depth, together with the resulting advantages
when mounting practical attacks. The paper
is structured as follows. In Section 2 we
first detail how CPA works and what its
trace model looks like. In Section 3, we
define a new trace model based on
communication theory, introduce the
extraction of information leakage from this
model as well as a related attack procedure,
and highlight the resulting advantages.
Section 4 presents a comparison
of measured results obtained by executing
both CPA and the new analysis method
PAA under identical conditions on raw,
artificially misaligned, and clock-frequency-
distorted traces produced by
an FPGA-based cryptosystem running
AES-128 encryption. Finally, we conclude
with a summary of the advantages
and benefits of this new analysis method.

2 Correlation Power Analysis

In this section, some basic definitions
are given first, which will be used here
and in the upcoming sections.

2.1 Basic Definitions

Input: a set of plaintexts d with size B,
where d_i represents the i-th plaintext and
i ∈ [1, B].

Output: a set of ciphertexts c with size
B, where c_i represents the i-th ciphertext,
which corresponds to the plaintext d_i.

Subkey: all possible subkey values
form a set h with size K. For instance,
a subkey byte has 2^8 possible key
values, i.e., K = 256, where k_i denotes
the i-th subkey value.

Power Trace Matrix T: it is constructed
from B power traces, captured by a
sampling oscilloscope, while the cryptosystem
is processing all inputs d. Each
trace has N sample points. T_{i,1:N} holds
the i-th measured power trace related to
input d_i.
Analysis Region: Because there are a large number of time points in the captured traces, we do not need to analyze every one of these points: a small portion of time points containing the information leakage is our analysis target. Therefore, we introduce an area of interest in the captured traces, called the analysis region, which contains the information leakage of both the selected power model and the part of the consumption the adversary focuses on. For instance, for an AES-128 power trace, if the power model is constructed on the basis of the last-round operation, then the analysis region is the area where the last-round peak exists, as depicted in Figure 1.
The following abbreviations are applied where appropriate: Expectation (E), Variance (Var), Standard Deviation (Dev), and Correlation Coefficient (CorrCoef).

2.2 Model of Power Traces

Figure 1: Visualization of Analysis Region

The power traces are captured and recorded by a sampling oscilloscope while either encryption or decryption is running. As a matter of fact, the existence of noise in the recorded power traces is inevitable in practice. The total consumption of the cryptosystem may then be determined as follows, according to [7, p. 62]:

P_total = P_op + P_data + P_el.noise + P_const    (2)

At each point in time of the recorded trace the total power may thus be modeled by (2), where P_op is the operation-dependent power consumption, P_data defines the data-dependent power consumption, P_el.noise denotes the power resulting from the electronic noise in the hardware, which follows a normal distribution, i.e., P_el.noise ~ N(0, σ²) holds, and P_const represents, depending on the technical implementation, some constant power consumption. All these parameters are additive, independent, and functions of time. But the power model as exploited in CPA is restricted to analyzing just a single point in time rather than the complete power function in the time domain.
CPA aims at a traversal of all the cap-
tured traces at a certain point in time to
find the biggest information leakage
point, i.e., the same time point, but in
different traces. Therefore, the precondi-
tion of CPA to mount an attack success-
fully is that the power consumption val-
ues at each time point are yielded by the
same operation in the cryptographic al-
gorithm. In other words, the power trac-
es must be correctly aligned in time as
pointed out in, e.g., [7, p.120].

2.3 Power Model

Power models are in general based on both the algorithm running in the hardware and its architecture. Considering, e.g., the last round of the AES-128 algorithm, the Hamming Distance (HD) model of the output register before and after the S-Box, respectively, as discussed in, e.g., [7, p. 132], is given by (3):

H_D = HammingWeight(d_i ⊕ c_i)    (3)

where d_i denotes a certain byte, e.g., the second byte of the register stored in the last round before the S-Box, which is the counterpart of c_i. In contrast, the Hamming Weight (HW) model of the output register is given by:

H_W = HammingWeight(c_i)    (4)

Another possible classification of the
power model is proposed in the follow-
ing.
Instantaneous Model: A power model
based on the state at some time point of
a certain register, e.g., HW power model.
Process Model: A power model based
on the two states changing within a time
interval, e.g., HD power model.

2.4 CPA Attack Phase

The attack procedure may be summarized as follows:
Step 1: Plaintext d or ciphertext c and the subkeys h are mapped by the power model, for example exploiting (3) or (4), to form a matrix, which is named the hypothesis matrix H of size D × K.
Step 2: Analysis of the power trace matrix T and the hypothesis matrix H is performed by calculating the correlation coefficient during StatAnalysis as shown in (1), which yields the result matrix R of size K × N:

[R_{1,1} … R_{1,N}; …; R_{K,1} … R_{K,N}] = StatAnalysis(T_{1:D,1:N}, H_{1:D,1:K})    (1)

The elements of R are calculated from:

R_{i,j} = CorrCoef(T_{1:D,j}, H_{1:D,i})    (5)

where i ∈ [1, K] and j ∈ [1, N] hold. Then, the unique time point featuring the maximum value of R is determined next, which indicates the correct key value.

3 Power Amount Analysis

In this section, we introduce a trace
model to address the power consumption
in a quite different way, which relies on
principles adopted from communication
theory. Then, based on this model, a new
attacking method, i.e., PAA is proposed,
which is characterized by an exploitation
of a larger set of time points compared to
CPA. In PAA we exploit in general more
than one hundred points to efficiently
extract the information leakages and to
attack the cryptosystem successfully as
detailed in the sequel.

3.1 Hardware Model

Communication theory has been de-
veloped for more than one hundred years.
Many models were proposed and are
currently used to evaluate and simulate
the communication channel. Among the-
se models, there exists a simple and easy
one, which is named Additive White
Gaussian Noise (AWGN) channel, as
detailed in, e.g., [10, p. 167], [11]. A
discrete time AWGN channel is given as
follows:

O[i] = S[i] + N[i]    (6)

where S[i] is the input signal of the channel at the discrete time point i, O[i] denotes the output of the channel, and N[i] represents the additive white Gaussian noise added while the input signal passes through the channel. As generally assumed in the communications field, the noise satisfies N ~ N(0, σ²), see [10, pp. 29-30].



Consequently, we model the power trace of a cryptosystem based on the communication model in (6). As shown in Figure 2, the power consumption of the core chip is taken as the input to the channel and noise is added while it propagates. The time-discrete trace of the power consumption function, captured by the oscilloscope, now consists of two parts, as visualized in Figure 3: The first one is the power consumption function of the cryptographic chip while encryption or decryption runs and contains the information leakage of the cryptosystem; the second part contains the noise produced by the hardware, which can be modeled as in the AWGN channel.

Figure 2: Abstract Signal and Noise Model

Figure 3: Visualization of the Power Traces

Assume that the power consumption of the core chip is pure, i.e.,
without noise, and its temporal value is
being transferred via the electric circuit
network to the oscilloscope. Meanwhile,
the AWGN adds the noise to it. Conse-
quently, for each measurement time
point, the power traces are modeled as
follows:

P_o[i] = P_core[i] + N[i]    (7)

where P_o[i] represents the output power consumption, which is captured by a sampling oscilloscope at time index i, P_core[i] is the power consumption generated by the cryptographic core chip while running encryption or decryption, and N[i] is taken from the AWGN, i.e., for any measurement in the time domain, the noise features N ~ N(0, σ²). Please note as an important property that P_core and N are independent and uncorrelated in the time domain.
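A minimal simulation of the trace model (7), with an assumed signal shape and noise level, illustrates the independence of P_core and N:

```python
import numpy as np

rng = np.random.default_rng(1)

N_POINTS, SIGMA = 1000, 0.3
# Hypothetical noise-free core consumption (any deterministic shape works here).
p_core = 1.0 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, N_POINTS))
noise = rng.normal(0.0, SIGMA, size=N_POINTS)  # N ~ N(0, sigma^2)
p_o = p_core + noise                           # eq. (7): P_o[i] = P_core[i] + N[i]

# P_core and N are independent, so their sample correlation is close to zero.
corr = float(np.corrcoef(p_core, noise)[0, 1])
print(round(corr, 3))
```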
Let us assume that an attack using a process model, e.g., the HD model, takes place from time point m_1 to m_2. Now, we intend to calculate the power variation of the cryptographic chip from m_1 to m_2, which contains the information leakage the adversary is looking for, i.e., the power variation of the analyzed register's state changing, which matches the HD model very well. Consequently, by calculating Var(P_core) in the time interval [m_1, m_2] for each trace and then comparing the similarity between the variation of the power consumption of the core chip and the key-dependent hypothesis matrix, one can eventually retrieve the secret key of the cryptographic system.
However, we cannot measure the P_core values directly. Each time point of the measurement contains a mixture of power consumption P_core and noise N as defined by (7), so that one cannot separate them easily. Therefore, a straightforward calculation of Var(P_core) is difficult, but we will show in the sequel how to derive it indirectly.

3.2 Power Consumption of the Hardware Module

In reality, when a hardware device is working, its power consumption is a continuous function in the time domain. But when a sampling oscilloscope is used to monitor this power function, only discrete points will be captured. These discrete points constitute the curve P_o, where P_o[i] is the instantaneous power at time index i.
The average power consumption P̄_o in the time index interval [m_1, m_2] can be approximately calculated by

P̄_o = (1 / (m_2 − m_1 + 1)) · (P_o[m_1] + … + P_o[m_2])    (8)

Equation (8) denotes that the average power consumption is just the mean of the power values within [m_1, m_2]. If one increases the sample rate, i.e., takes more sample points in the interval [m_1, m_2], then P̄_o becomes an increasingly precise estimator of E(P_o). Because E(N) = 0 holds, (8) can be rewritten as follows:

E(P_o) = E(P_core) + E(N) = E(P_core)    (9)

Here E(P_o) = E(P_core), i.e., the mean value of the captured power traces denotes the average power consumption of the core chip in the time index interval [m_1, m_2]. We can assume that such an average value contains the constant average power of the hardware circuits itself and the average power variation from the state changing in the time index interval [m_1, m_2]. We prefer to identify the power variation of the states changing in the register rather than the constant average power of the hardware itself. However, the constant power of the hardware circuits is difficult to determine. Therefore, filtering the variation information out of E(P_o) seems to be impossible.
Now let us look at the calculation of Var(P_core):

Var(P_core) = E[P_core − E(P_core)]²    (10)

which denotes the average power variation around the average power E(P_core) for the register states changing in the time interval [m_1, m_2] the adversary looks for. It contains two steps: In the first step, the information is compressed by calculating the mean power consumption E(P_core). However, such a compression is not sufficient for key revealing. Therefore, each sample is compared to E(P_core), resulting in some differences. After that, the average of these squared differences is calculated to form Var(P_core), which is the information carrier the adversary is looking for. This is called the information extraction step. Indeed, Var(P_core) is very important for the key revealing in the cryptosystem, but it cannot be measured or calculated directly.
Nevertheless, Var(N) = σ² holds, and P_core and N are independent as well as uncorrelated in the time domain. From (7) we can easily get Var(P_o) as:

Var(P_o) = Var(P_core) + Var(N) = Var(P_core) + σ²    (11)

Equation (11) consists of two parts: The first part is Var(P_core); the second part is σ², i.e., the noise in a trace matrix has the same variance σ² for each single trace, which is the fundamental property of the new trace model as mentioned before. The value σ² is a constant, thus the terms Var(P_o) and Var(P_core) are in a linear relation. Now, instead of calculating Var(P_core), one just compares the similarity between Var(P_o) and H, yielding the same results.

Dev(P_o) = √(Var(P_core) + σ²)    (12)

√x = 1 + (1/2)(x − 1) − (1/8)(x − 1)² + …    (13)

Dev(P_o) is a non-linear function because of the square-root relation given in (12). Therefore, we cannot use it directly as a substitute for Var(P_core) to attack the system. Nevertheless, the square root can be expanded into a Taylor series as given in (13), in which the first two terms of the series result in a linear relation. Thus, Var(P_core) and Dev(P_o) can approximately be taken as being in a linear relation. So, we can exploit this approximation to further analyze the power consumption of the system.
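The linear relation in (11) can be checked numerically; the signal shape and the chosen variances below are assumptions for this synthetic check:

```python
import numpy as np

rng = np.random.default_rng(2)
SIGMA = 0.2
n = 200_000  # many samples, so the sample variance is a tight estimate

var_core = 1.5                                  # assumed core-power variance
p_core = rng.normal(5.0, np.sqrt(var_core), n)  # hypothetical core consumption
p_o = p_core + rng.normal(0.0, SIGMA, n)        # eq. (7): add AWGN noise

# Eq. (11): Var(P_o) = Var(P_core) + sigma^2 (P_core and N are independent).
lhs = float(p_o.var())
rhs = float(p_core.var()) + SIGMA**2
print(round(lhs, 2), round(rhs, 2))
```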

3.2 Attack Phase

An important property of PAA is that using more time points in the analysis region results in fewer traces needed for a successful attack. There are two ways to achieve this goal:
1) Use more time points in the analysis region for the attack.
2) Increase the sample rate of the monitoring device, i.e., use an oscilloscope of good quality, e.g., with a higher sampling rate.
Theoretically, the proposed PAA takes the time interval into consideration, thus the process model fits such an attack very well. On the contrary, the instantaneous model, e.g., the HW model, focuses on a single time point only. When using such a model, the attack results of PAA will not be as good as one would expect.

3.2.1 Attacking Procedure

The attacking procedure of the PAA is given as follows:
Step 1: Plaintext d or ciphertext c and the subkeys h are mapped by the power model, e.g., by equation (3), thus generating the hypothesis matrix H of size D × K.
Step 2: Calculate the variance or standard deviation of each row of the trace matrix T, i.e., V_{i,1} = Var(T_{i,1:N}), where i ∈ [1, D] holds, as given in (14).
Step 3: Calculate the result matrix R of size 1 × K by statistically analyzing the vector V derived from (14) and the hypothesis matrix H according to (15), whereas the correlation coefficient is used as the distinguisher:

R_{1,i} = CorrCoef(V_{1:D,1}, H_{1:D,i})    (16)

where i ∈ [1, K] holds. Subsequently, the maximum correlation value is determined to find the correct key value.
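Steps 1 to 3 can be sketched as follows. The synthetic leakage, which spreads over many points of the analysis region so that the per-trace variance carries the information, and all concrete parameters are assumptions made for this toy example:

```python
import numpy as np

rng = np.random.default_rng(3)
HW = np.array([bin(x).count("1") for x in range(256)])  # Hamming weights

SECRET_KEY, D, N, K = 0xA7, 300, 120, 256
plaintexts = rng.integers(0, 256, size=D)
leak = HW[plaintexts ^ SECRET_KEY].astype(float)

# Synthetic traces: the leakage modulates the amplitude of many points in the
# analysis region, so each trace's variance depends on the leakage value.
T = rng.normal(0.0, 0.5, size=(D, N))
T += leak[:, None] * np.sin(np.linspace(0, 8 * np.pi, N))[None, :]

# Step 1: hypothesis matrix H (D x K) from the power model.
H = HW[plaintexts[:, None] ^ np.arange(K)[None, :]].astype(float)

# Step 2: eq. (14) -- compress each trace to its variance.
V = T.var(axis=1)

# Step 3: eqs. (15)/(16) -- correlate V against each hypothesis column.
Vc = V - V.mean()
Hc = H - H.mean(axis=0)
R = (Vc @ Hc) / (np.linalg.norm(Vc) * np.linalg.norm(Hc, axis=0))
print(hex(int(np.abs(R).argmax())))
```

Note that the whole trace matrix collapses to the vector V before the statistical analysis, which is where the run-time advantage discussed below comes from.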

3.3 Advantages

Usually, from the viewpoint of an attacker, some factors must be taken into consideration in a practical attack. For example, the key should be revealed with limited time and trace usage. If the targeted algorithm is hardened by a countermeasure, e.g., the power traces captured from the oscilloscope are not aligned because of random clock or dummy wait state insertion, then some preprocessing should be done before mounting a CPA attack. Compared to the CPA attack, the proposed PAA method can deal with these requirements easily, which will be discussed in the upcoming sections.

3.3.1 Execution Time

The required execution or run time is a very important metric, which indicates the efficiency of the algorithm in a real attack. In PAA a large number of time points is taken into the variance or deviation calculation. Therefore, the trace matrix T is mapped to V, see (14), which is then used to calculate the correlation coefficient with the hypothesis matrix H as shown in (15). On the contrary, in the CPA attack, the correlation coefficient matrix is directly calculated from the trace matrix T and the hypothesis matrix H, see (1). Therefore, under the same calculation conditions, i.e., the variance in the PAA attack is calculated over the same number of time points that CPA needs to traverse, the PAA attack is faster than the CPA attack. One can also find that in CPA the result is a matrix R of size K × N, whereas PAA yields a vector R of size 1 × K. Therefore, the calculation complexity is decreased, because it is easier to identify the correct key by means of a result vector than of a matrix. We will focus on the run time consumed by both attack methods by means of experimental results in the sequel.

Var(T) = [Var(T_{1,1:N}), …, Var(T_{D,1:N})]ᵀ = [V_{1,1}, …, V_{D,1}]ᵀ    (14)

[R_{1,1}, …, R_{1,K}] = StatAnalysis(H_{1:D,1:K}, V_{1:D,1})    (15)

3.3.2 Traces Usage

Traces usage is an important parameter in the evaluation of cryptosystem security. From an attacker's point of view, the fewer traces used, the more efficient the attack method is. For a system designer, in contrast, it denotes to some extent the degree of safety of the cryptographic algorithms. Therefore, reducing traces usage is crucial to come quickly to an assessment of the related SCA resistance level of a cryptosystem.
In general, the PAA attack calculates the variance or standard deviation of the captured power traces, whereby the information leakage at each single time point is compressed and extracted, i.e., more information leakage sources are considered. By means of such a method, one can achieve higher distinguisher values when the number of power traces is limited. In contrast, CPA exploits just the one time point which contributes the maximum information leakage. Therefore, such an attack method requires a relatively large number of power traces to achieve the same attack results as PAA. In the last section, we will demonstrate this important feature.

3.3.3 Misalignment Tolerance

T = [ T_{1,3}  T_{1,4}  T_{1,5}  T_{1,6}  T_{1,7}  T_{1,8}
      T_{2,3}  T_{2,4}  T_{2,5}  T_{2,6}  T_{2,7}  T_{2,8}
      T_{3,3}  T_{3,4}  T_{3,5}  T_{3,6}  T_{3,7}  T_{3,8} ]    (17)

T' = [ T_{1,3}  T_{1,4}  T_{1,5}  T_{1,6}  T_{1,7}  T_{1,8}
       T_{2,4}  T_{2,5}  T_{2,6}  T_{2,7}  T_{2,8}  T_{2,9}
       T_{3,2}  T_{3,3}  T_{3,4}  T_{3,5}  T_{3,6}  T_{3,7} ]    (18)

As mentioned before, the model of the power traces in the CPA attack concentrates on a common time point in different power traces. For example, (17) denotes a matrix constructed from aligned power traces. The third column contains the maximum information leakage point, i.e., the elements T_{1,5}, T_{2,5}, T_{3,5} are the best combination for the information contribution. If there are some misalignments in the constitution of such power traces, as indicated in (18), the third column's combination is broken. Then the new combination T_{1,5}, T_{2,6}, T_{3,4} in the third column cannot provide the maximum leakage for the CPA attack. Therefore, the attack results become worse. In other words, the prerequisite for mounting a CPA attack successfully is that the power traces must be aligned. However, in the PAA attack a large number of time points are taken into the variance calculation. Therefore, for each misaligned power trace compared to the original power trace, only a few time points are missing. Thus, the difference between the second rows of T and T', respectively, is that T_{2,3} is substituted by T_{2,9}. If there are enough time points in the interval, then such a substitution cannot greatly impact the variance values and hence the overall attack results. Consequently, PAA features a considerably stronger misalignment tolerance during a real attack. In other words, a small misalignment does not affect the final results to a large extent. Therefore, such a property of the analysis method can be exploited to improve attacks on power traces featuring a misalignment injection as a hardening countermeasure.

Figure 4: Power Traces at different Base Clock Frequencies: a) 2 MHz, b) 24 MHz
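The tolerance argument can be illustrated directly: shifting the analysis window by a few points barely changes its variance, whereas the value at one fixed index, which is what CPA relies on, can change substantially. The toy trace and all parameters below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
trace = rng.normal(0.0, 1.0, 500)                               # baseline noise
trace[200:300] += 3.0 * np.sin(np.linspace(0, 6 * np.pi, 100))  # leakage burst

window = trace[195:305]   # analysis region around the leakage
shifted = trace[200:310]  # the same region, misaligned by 5 points

# The window variance barely changes under the small shift (PAA's view) ...
var_delta = abs(float(window.var()) - float(shifted.var()))
# ... while a single fixed index (CPA's view) can see a large value change.
point_delta = abs(float(trace[250]) - float(trace[255]))
print(round(var_delta, 3), round(point_delta, 3))
```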

3.3.4 Clock Frequency Effects

Misalignment may be used as a countermeasure to impede CPA attacks in practice, see [13]. In the presence of misaligned power traces, preprocessing is required to improve the CPA attack results in order to cope with traces manipulated by changes in the clock frequency of the cryptosystem, as shown in Figure 4 a), where the targeted peaks are shifted in the time domain. The authors of [18] proposed a method to align misaligned power traces by exploiting dynamic time warping. The authors of [15] presented a horizontal alignment method to align the power traces in the time domain both partially and dynamically. Later, in [16], these authors reported on a phenomenon called the clock frequency effect, which occurs in random-clock-featured cryptosystems when the base clock runs at a higher clock frequency. Then the power peaks in the captured traces not only shift in the time domain, but also change their power values in the amplitude domain. One easily finds that the power value change in Figure 4 b) is considerably larger than that in Figure 4 a), where the base clock frequencies are 24 MHz and 2 MHz, respectively. In order to cope with such effects, these authors proposed a vertical matching after the horizontal alignment, where each horizontally aligned power trace is moved up and down in the amplitude domain in order to find the minimal distance between the moved trace and an arbitrarily chosen template. Finally, these vertically matched power traces are attacked. The experimental results in [16] show that by exploiting vertical matching as a preprocessing step, the efficiency of the CPA attack can be greatly improved.
This is because in the CPA attack the focus is on finding a certain time point in different power traces only. Let us take T_{2,3:9} as an example and shift its element values in the amplitude domain, i.e., add the same positive or negative value a to each element, an operation which does not change the variance. In other words, regardless of a possible shift in the amplitude domain, the attack results will always be the same, as visible from (20).

T_{2,3:9} + a = [T_{2,3} + a, …, T_{2,9} + a]    (19)

Var(T_{2,3:9} + a) = Var(T_{2,3:9})    (20)

108
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 99-114
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)
Therefore, for attacking misaligned power traces, only the horizontal alignment is required; the vertical matching step can be completely omitted in the PAA attack, in contrast to CPA. This property saves a lot of trace processing time without affecting the quality of the attack results.
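Property (20) is straightforward to verify numerically; the segment length and offset below are arbitrary toy values:

```python
import numpy as np

rng = np.random.default_rng(5)
segment = rng.normal(0.0, 1.0, 7)   # stands in for the sub-trace T_{2,3:9}
a = 0.42                            # an arbitrary constant amplitude offset

# Eq. (20): adding the same offset to every point leaves the variance unchanged.
print(np.allclose(segment.var(), (segment + a).var()))  # prints True
```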

4 Application Examples

In this section several attacks on an AES-128 cryptosystem featuring the TBL S-Box [12] are presented and discussed. The HD model from (3) is taken as the power model. In order to evaluate and compare the four main properties taken as a metric, the results are produced by mounting the attack exploiting both CPA and PAA, respectively. The experiments are organized into three parts as follows:

1) Attack the captured power traces directly with CPA and PAA, respectively.
2) The captured power traces are misaligned artificially to some extent and then attacked by mounting CPA and PAA, respectively, in order to assess the misalignment tolerance and thus the robustness of both attack methods.
3) In order to verify the internal clock frequency effects for the PAA attack, each captured power trace is shifted in the amplitude domain by some random offset, i.e., a high clock frequency effect injection takes place. Finally, the attack results for both CPA and PAA are compared.

Here, the run time and the success rate for each byte and for the global key are exploited as the metrics to evaluate both the CPA and PAA attack results. The run time is a relative value, which depends on the processor, memory, configuration, etc. of the computer performing the calculation. The success rate is detailed in [14]; it defines the rate at which all the key bytes are successfully recovered under the constraint of a limited number of experiments. Therefore, we ran 30 different attack experiments, each experiment with 1000 power traces. Then the success rate is calculated accordingly.
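The global success rate over 30 experiments can be computed along the following lines; `run_attack` is a hypothetical stand-in for one CPA or PAA run, not the paper's measured attacks:

```python
import numpy as np

TRUE_KEY = list(range(16))  # hypothetical 16-byte key (stand-in values)

def run_attack(n_traces, experiment_seed):
    """Hypothetical stand-in for one CPA/PAA run: returns 16 recovered bytes.

    Toy behaviour: each byte is recovered with a probability that grows
    with the number of traces used (not the paper's measured behaviour)."""
    r = np.random.default_rng(experiment_seed)
    p = min(1.0, n_traces / 1000.0)
    return [b if r.random() < p else (b + 1) % 256 for b in TRUE_KEY]

# 30 experiments, as in the paper; a global success needs all 16 bytes correct.
n_experiments = 30
hits = sum(run_attack(1000, s) == TRUE_KEY for s in range(n_experiments))
success_rate = hits / n_experiments
print(success_rate)  # 1.0 here: in this toy model 1000 traces recover every byte
```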

4.1 Platform

The Side Channel Attack Standard Evaluation Board version G (SASEBO) [6] is used as the target platform. It embodies two Xilinx Virtex-II Pro series FPGAs: one for board control and one for the implementation of the cryptographic algorithms. Both FPGAs run at a 2 MHz clock frequency.

4.2 Run Time and Traces Usage


In order to compare the run time for both CPA and PAA attacks, we executed these two methods under the same conditions.

Figure 5: Global Success Rate for CPA and PAA

Table 1: Run Time Comparison

              CPA       PAA           Ratio
  Run Time    54.44 s   Var: 33.29 s  60.0%
                        Dev: 33.54 s  60.5%

For the CPA attack, we traverse
600 time points in the analysis region to find the maximum information leakage. In the PAA attack, these 600 time points are taken into the variance or standard deviation calculation, respectively. After that, the total run time values for all 16 key bytes are compared.
Table 1 shows the run times for all 16 attackable bytes when mounting the CPA and PAA attacks. When applying the methods Var and Dev, the PAA attack results in run times of 33.29 s and 33.54 s, respectively, which corresponds to 60.0% and 60.5% of the run time consumed by the CPA attack. Therefore, under the same attack conditions, PAA is faster than CPA and can thus shorten the time to break a cryptosystem in practical attacks.
With regard to the traces usage, Figure 5 illustrates that, when we consider the power trace range from 0 to 1000, all 16 bytes are revealed by PAA after using only 850 traces, i.e., the global success rate is 1. However, under the same conditions, CPA reveals only 90% of the correct key bytes when consuming all available 1000 power traces, i.e., such an attack needs more power traces to reveal all correct key bytes. At the same time, the success rate curve of the PAA attack rises faster after 400 power traces than its counterpart of the CPA attack. This is because PAA exploits more time points, which contribute to the information leakage, thus resulting in a lower traces usage in comparison to CPA. In the Appendix, we visualize in Figure 10 the success rate individually for each key byte. One easily finds from this figure that for each byte to be revealed, PAA always consumes fewer traces than the CPA method.

4.3 Misalignment Tolerance
As mentioned above, we expect the PAA attack to show good robustness in the presence of a reasonable misalignment of the traces. Sometimes, power trace misalignment is introduced intentionally as a hardening countermeasure against power consumption attacks [13]. In order to demonstrate this additional robustness feature, misaligned traces are first generated by applying Algorithm 1 and then attacked by means of both CPA and PAA, respectively.
In order to generate comparable results for the CPA and PAA attacks, we set the range B of the random number in Algorithm 1 to 0 to 20, 50, and 100, respectively. The global success rate curves for the CPA and PAA attacks before misalignment (MA) are depicted for comparison purposes in Figures 6 to 8.



Figure 6: Global Success Rate, B=20

Algorithm 1 Misaligned Traces Generation
Require: Aligned traces T_{i,1:N}
Ensure: Misaligned traces T'_{i,1:w}
1: Find a start point o in T_{i,1:N}
2: Generate an integer random number r, r ∈ [0, B]
3: Cut the trace from point o + r with width w
4: Save the cut trace into the set T'_{i,1:w}
Return: T'_{i,1:w}
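Algorithm 1 translates into a few lines of Python; the start point, width, and range chosen below are arbitrary toy values:

```python
import numpy as np

def misalign_traces(T, o, w, B, seed=0):
    """Algorithm 1: cut each aligned trace at a randomly jittered start point."""
    rng = np.random.default_rng(seed)
    out = np.empty((T.shape[0], w))
    for i in range(T.shape[0]):
        r = int(rng.integers(0, B + 1))   # step 2: r in [0, B]
        out[i] = T[i, o + r : o + r + w]  # steps 3-4: cut with width w
    return out

rng = np.random.default_rng(7)
T = rng.normal(size=(3, 400))             # three aligned toy traces
T_mis = misalign_traces(T, o=50, w=200, B=20)
print(T_mis.shape)  # (3, 200)
```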


Figure 7: Global Success Rate, B=50


Figure 8: Global Success Rate, B=100

In Figure 6, the maximum shift of the power traces is 20 time points. One finds that the global success rate curves of the PAA attack before and after misalignment overlap, i.e., the misalignment does not noticeably affect the PAA attack results. For CPA, the global success rate curve deviates slightly from its counterpart before misalignment, i.e., a small misalignment does not affect the CPA attack much either.
Then we increase the maximum shift to 50 points, as shown in Figure 7. For PAA, the success rate curves still overlap. In contrast, for CPA, because of the stronger misalignment, the attack becomes harder, and the deviation between the success rate curves before and after misalignment grows. Thus, when increasing the maximum number of shifted time points, the deviation of the PAA attack results is smaller than that of CPA, i.e., PAA is more robust.
In order to show this characteristic more clearly, the parameter B is now set to 100, i.e., the maximum shift of the power traces is 100 time points. Now, for PAA, the success rate curves show a small deviation. For CPA, however, the deviation between the corresponding curves unveils a big gap, as shown in Figure 8. Therefore, we can state that PAA features a considerably stronger misalignment tolerance compared to CPA.

4.4 Internal Clock Frequency Effects

In order to simulate the clock frequency effects environment mentioned in [16], Algorithm 2 is used to inject such effects into the power traces by moving each power trace in the amplitude domain by a random offset vector r.

Figure 9: Global Success Rate, F=10

Algorithm 2 Clock Frequency Effects Injection (CFEI)
Require: Aligned traces T_{i,1:N}
Ensure: CFE-injected traces T'_{i,1:N}
1: Generate an integer random number r, r ∈ [−F, F]
2: Generate an N-element constant vector r = [r, …, r]
3: Do T'_{i,1:N} = T_{i,1:N} + r
Return: T'_{i,1:N}
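Algorithm 2 is equally compact; the broadcasting below adds one constant offset per trace, matching the constant vector of the algorithm (toy parameters throughout):

```python
import numpy as np

def inject_cfe(T, F, seed=0):
    """Algorithm 2 (CFEI): add a constant random amplitude offset to each trace."""
    rng = np.random.default_rng(seed)
    # Steps 1-2: one integer offset r in [-F, F] per trace, broadcast over all
    # N sample points (the "constant vector" of the algorithm).
    r = rng.integers(-F, F + 1, size=(T.shape[0], 1))
    return T + r  # step 3

rng = np.random.default_rng(8)
T = rng.normal(size=(5, 300))  # five aligned toy traces
T_cfe = inject_cfe(T, F=10)

# As argued in Section 3.3.4, the per-trace variance is unaffected by the shift.
print(np.allclose(T.var(axis=1), T_cfe.var(axis=1)))  # prints True
```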

The chosen parameter value is F = 10; thus each trace is moved by an offset from the interval [−10, 10] units in the amplitude domain. Then, CPA and PAA are mounted.
As already mentioned, shifting a power trace in the amplitude domain leaves the variance and standard deviation values unchanged. Therefore, for PAA, the attack results are always the same, as illustrated in Figure 9. One finds that the success rate curves of the PAA attack completely overlap, i.e., the numerical results are almost the same. In contrast, for CPA after the CFEI, the success rate decreases from 90% to just 10% at 1000 traces. This means that the clock frequency effects reduce the success rate of the CPA attack considerably for random-clock hardened cryptosystems. In contrast, the PAA method counteracts such shifts in the amplitude domain automatically and yields the same attack efficiency as for the original data set.

5 Summary

In this paper we discussed in detail the Power Amount Analysis method, which is based on a new trace model originating from communication theory. This novel SCA method exploits the many time points within the power traces that contribute to the information leakage and thus helps significantly in revealing the secret key. Starting from the original PAA paper, we first performed a comparison to the well-known CPA attack and then elaborated four advantages of the proposed method in terms of run time, traces usage, misalignment tolerance, and internal clock frequency effects. These advantages were demonstrated by mounting both CPA and PAA attacks on power traces captured from an FPGA-based AES-128 cryptosystem. We have shown that the advocated analysis method is advantageous in the presence of both aligned and misaligned power traces. We see PAA as a new means that provides a different way to view and to understand power traces. Its specific properties help to reveal the secret key in cryptosystems more easily and thus to qualify the security of cryptographic algorithm implementations.

Acknowledgement

This work was supported by CASED
(www.cased.de).


International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 99-114
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)
7 Appendix




Figure 10: Success Rate for each Byte in CPA and PAA attacks

International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 115-121
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)

Certificate Revocation Management in VANET

Ghassan Samara
Department of Computer Science, Faculty of Science and Information Technology, Zarqa University
Zarqa, Jordan.
Gsamara@zu.edu.jo


ABSTRACT
Vehicular ad hoc network security is one of the hottest research topics in the field of network security. One of the ultimate goals in the design of such networks is to resist various malicious abuses and security attacks. In this research a new security mechanism is proposed to reduce the channel load caused by the frequent warning broadcasting of the adversary discovery process, the Accusation Report (AR), which produces a heavy channel load as all vehicles on the road report any newly discovered adversary. Furthermore, this mechanism replaces the Certificate Revocation List (CRL), which causes long delays and a high load on the channel, with a Local Revocation List (LRL), which makes the adversary discovery process fast and easy.

KEYWORDS

Secure Certificate Revocation; Local
Certificate Revocation; VANET; Certificate
Management; VANET Security.

1. INTRODUCTION

Traffic congestion is the most annoying thing that any driver in the world dreams of avoiding. Traveling vehicles may cause problems, or face problems, that must be reported to other vehicles to avoid traffic overcrowding; furthermore, many vehicles may send incorrect or bogus data, which could make the situation even worse.
Recent research initiatives supported by governments and car manufacturers seek to enhance the safety and efficiency of transportation systems, and one of the major research topics is certificate revocation.
Certificate revocation is a method to revoke some or all of the certificates that a problematic vehicle holds; this enables other vehicles to ignore any information from the vehicles that cause problems.
Current studies suggest that the Road Side Unit (RSU) is responsible for tracking the misbehavior of vehicles and for certificate revocation by broadcasting the Certificate Revocation List (CRL). The RSU is also responsible for certificate management, communication with the Certificate Authority (CA), warning message broadcasting, and communicating with other RSUs. The RSU is a small unit hung on street columns every 1 km [2], according to the DSRC 5.9 GHz range.
In vehicular ad hoc networks most road vehicles will receive messages or broadcast sequences of messages, and they need not consider all of these messages, because not all vehicles have good intentions and some of them are evil-minded.
Current technology suffers from high overhead on the RSU, as the RSU takes responsibility for the whole Vehicular Network (VN) communication.

Furthermore, distributing the CRL consumes the control channel, as the CRL needs to be transmitted every 0.3 seconds [3]. Searching the CRL for each received message causes processing overhead for finding a single certificate, since VN communication involves periodic messages being sent and received 10 times per second.
This research proposes mechanisms that examine the certificates of received messages; the certificate indicates whether to accept the information from the current vehicle or to ignore it. Furthermore, this research implements a mechanism for revoking certificates and assigning new ones. These mechanisms lead to better and faster adversary vehicle recognition.

2. RESEARCH BACKGROUND

In previously published work [1], security mechanisms were proposed to achieve secure certificate revocation and to overcome the problems that the CRL causes.
Existing works on vehicular network security [4], [5], [6], and [7] propose the usage of a PKI and digital signatures but do not provide any mechanism for certificate revocation, even though it is a required component of any PKI-based solution.
In [8] Raya presented the problem of certificate revocation and its importance. The research discussed the current methods of revocation and their weaknesses, and proposed new protocols for certificate revocation, including the Certificate Revocation List (CRL), Revocation using Compressed Certificate Revocation Lists (RC²RL), Revocation of the Tamper-Proof Device (RTPD), and the Distributed Revocation Protocol (DRP), stating the differences among them. The authors simulated the DRP protocol, which uses a Bloom filter, concluding that it is the most convenient one; the simulation tested a variety of environments: freeway, city, and mixed freeway and city.
In [9] Samara divided the network into small adjacent clusters and replaced the CRL with a local CRL exchanged interactively among vehicles, RSUs, and CAs. The local CRL is small, as it contains only the certificates of the vehicles inside the cluster.
In [10] Laberteaux proposed frequent distribution of the CRL initiated by the CA. The CRL contains only the IDs of misbehaving vehicles, to reduce its size. The received CRL is distributed from the RSU to all vehicles in its region. The problem with this method is that not all vehicles will receive the CRL (e.g., a vehicle in a rural area); to solve this problem, Car-to-Car (C2C) communication is introduced, using a small number of RSUs to transmit the CRL to the vehicles.
In [3] the eviction of problematic vehicles is introduced; furthermore, revocation protocols such as Revocation of Trusted Component (RTC) and the Leave Protocol are proposed.
In [11] certificate revocation protocols in the traditional PKI architecture were introduced. It is concluded that the most commonly adopted certificate revocation scheme is the CRL, using central repositories maintained by CAs. Based on such a centralized architecture, alternative solutions to the CRL could be used for certificate revocation, such as the certificate revocation tree (CRT), the Online Certificate Status Protocol (OCSP), and other methods. The common requirement of these schemes is high availability of the centralized CAs, as frequent data transmission with On-Board Units (OBUs) to obtain timely revocation information may cause significant overhead.

3. PROPOSED SOLUTION

In the previously published work [1], the proposed protocols for message checking and certificate revocation were the following:
Message Checking:
In this approach any vehicle that receives a message from another vehicle checks the validity of the sender's certificate. If the sender has a Valid Certificate (VC), the receiver considers the message; on the contrary, if the sender has an Invalid Certificate (IC), the receiver ignores the message. Furthermore, if the sender does not have a certificate at all, the receiver reports the sender to the RSU and the message is checked for correctness: if the received information is correct, the RSU gives the sender a VC; otherwise the RSU gives it an IC and registers the vehicle's identity in the CRL. See Figure 1 for the message checking process.
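The message-checking decision above can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the `RSU` stub and all names (`verify_and_issue`, `ground_truth`) are assumptions introduced for the example:

```python
class RSU:
    """Minimal road-side-unit stub: verifies an uncertified sender's
    message content and issues a Valid or Invalid Certificate."""
    def __init__(self, ground_truth):
        self.ground_truth = ground_truth   # what actually happened on the road
        self.crl = []                      # revocation list of vehicle IDs

    def verify_and_issue(self, vehicle_id, claim):
        if claim == self.ground_truth:
            return "VC"
        self.crl.append(vehicle_id)        # register the liar in the CRL
        return "IC"

def check_message(vehicle_id, cert, claim, rsu):
    if cert == "VC":
        return "consider"                  # valid certificate: trust the message
    if cert == "IC":
        return "ignore"                    # invalid certificate: drop the message
    # No certificate at all: defer to the RSU, which checks the claim
    # and issues a VC or an IC for the sender.
    return "consider" if rsu.verify_and_issue(vehicle_id, claim) == "VC" else "ignore"

rsu = RSU(ground_truth="accident_ahead")
print(check_message("v1", "VC", "accident_ahead", rsu))   # prints "consider"
print(check_message("v2", "IC", "accident_ahead", rsu))   # prints "ignore"
print(check_message("v3", None, "road_clear", rsu))       # prints "ignore"; v3 lands in the CRL
```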

Figure 1. Message checking procedure
Certificate Revocation:
Certificate revocation is performed when a misbehaving vehicle holding a VC is discovered: the RSU replaces the old VC with a new IC, to indicate that this vehicle must be avoided. This happens when more than one vehicle reports to the RSU that a certain vehicle has a VC but is broadcasting wrong data (see Figure 2). This report must be given to the RSU each time a receiver receives information from a sender and finds that this information is wrong.

Figure 2. Certificate revocation procedure
The revocation works as follows. A sender sen sends a message to a receiver rec; this message may come from an untrusted vehicle, so the receiver sends a message to the RSU to acquire a Session Key (SKA). The RSU replies with a message containing the SK Reply (SKR), which carries the SK assigned to the current connection; this key is used to prevent attackers from fabricating messages between the two vehicles.
The receiver then sends a message to check validity, called the Validity Message, whose job is to indicate whether the sender vehicle has a VC or not.
Afterwards, the RSU reports to rec that the sender has a VC, so the receiver can consider the information from the sender without fear.
In some situations the receiver receives several messages that all agree on the same result and the same data, but a specific sender sends different data; this data will be considered wrong if it belongs to the same category.
Every message will be classified
depending on its category:
TABLE I. MESSAGE CLASSIFICATION AND
CODING

Every category has a code; if a received message has the same code as the other messages but different data, it is considered a bogus message. In this case rec sends an Abuse Report (AR) to the RSU: AR(sen id, Message Code, Time of Receipt). This report is forwarded to the CA if the RSU receives the same AR from other vehicles located in the same area; the number of Abuse Report messages depends on the vehicle density on the road, see Figure 3.

Figure 3. Calculation of the Number of Vehicles in the Range [12].

If the number of vehicles accusing a specific vehicle is near half of the vehicles currently present, the RSU issues a Revocation Request (RR) to revoke the VC from the sender vehicle.
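The "near the half" accusation rule can be sketched as a quorum check. This is an assumed reading of the rule, not the paper's algorithm; `quorum=0.5` and all names are illustrative. Counting distinct accusers also blunts the replay attack, discussed later, in which one attacker resends the same Abuse Report many times:

```python
def should_request_revocation(accusers, vehicles_in_range, quorum=0.5):
    """Return True when the number of *distinct* accusing vehicles
    reaches roughly half of the vehicles currently in the RSU's range.
    quorum=0.5 is an assumed reading of 'near the half'."""
    distinct = set(accusers)
    return len(distinct) >= quorum * len(vehicles_in_range)

in_range = {"v1", "v2", "v3", "v4", "v5", "v6"}
# One attacker replaying the same AR three times does not reach the quorum:
assert not should_request_revocation(["v1", "v1", "v1"], in_range)
# Three independent accusers out of six vehicles do:
assert should_request_revocation(["v1", "v2", "v3"], in_range)
```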
Some vehicles do not produce an AR because they did not receive any data from the sender vehicle (perhaps they were not in the area while it was broadcasting), because they have a problem with their devices, or because they have an IC, in which case the RSU does not consider their messages.
The CA issues a revocation order to the RSU after confirming the RR and updates the CRL; the RSU then revokes the VC of the sender vehicle and assigns it an IC, to indicate to other vehicles in the future that this vehicle broadcasts wrong data: "don't trust it".
Figure 2 shows the certificate revocation steps.
Message 1: sen (sender) sends a message to rec (receiver), along with the digital signature of sen; the message is encrypted with the Primary Key (PK) of rec. Any attacker could fabricate a message telling rec that it originated from sen; the signature is used to prevent this.
Message 2: rec sends a request to the RSU, encrypted with the PK of the RSU, to acquire an SK for securing the connection.
Message 3: the reply to Message 2; it contains the SK and the time of sending the reply. The time is important to prevent a replay attack, in which an attacker sends this message more than once, with the same session key and the same signature, and can thus forge the whole connection.
Message 4: rec sends the validity message to check whether the vehicle has to be avoided or not; this message is encrypted with the shared SK obtained from the RSU.
Message 7: sen sends a message to rec containing the VC, to report to rec that this vehicle can be trusted, together with the time of sending, to avoid a replay attack, which happens when an attacker holds the message and sends it after a period; by that time the sender's certificate may have been revoked by the RSU, so sen must be avoided, but the attacker forces the rec vehicle to trust it. After receiving the information, rec checks whether the message carries different or the same data for the same category as other received messages.
Message 8: if the message is different, wrong data has been received and rec sends an Abuse Report to the RSU, containing the sen id (to identify which vehicle caused the problem), the Message Code (to identify the category of the message), and the Time of Receipt (to record when the message was received); the message also includes the Time, to avoid a replay attack, and the Signature, to avoid fabrication, and is encrypted with the PK of the RSU. In this situation a replay attack would happen if an attacker copied this message and sent it to the RSU repeatedly, to make sure that the number of accusations reaches the level at which the certificate must be revoked.
After examining the number of vehicles that accused sen of sending an invalid message, if the number is reasonable, the RSU sends Message 9.
Message 9: the RSU sends an RR to the CA, containing a Serial Number and Time (to avoid replay attacks), a Signature (to avoid fabrication), the Revocation Reason (stating the reason for revocation), the sen id (identifying the problematic vehicle), and the message code (identifying the message category); the message is encrypted with the PK of the CA. A replay attack in this situation happens when an attacker retransmits the same message to the CA, claiming that it came from the RSU; after some time the CA will be unable to respond, causing a DoS attack. Hence the RSU must use the Time and Serial Number in this message, because the CA has a lot of work to do and a flood of such messages would cause a problem.
Message 10: the CA issues a Revocation Order to the RSU; this message contains the SN (to avoid a DoS attack), the time (to avoid a replay attack), the signature (to avoid a fabrication attack), the Sender Id, and the Revocation Reason. After receiving the request, the CA updates the CRL, adding the newly captured vehicle, and sends it to the RSU. A DoS attack can happen when an attacker keeps sending the same message to the RSU, claiming it originated from the CA; CA messages have the highest priority for processing by the RSU, so the RSU would receive and process a huge number of messages without having time to communicate with other RSUs or vehicles. To avoid this, a serial number and signature are used.
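The Time and Serial Number defences used in Messages 3 and 8-10 can be sketched as a freshness filter. This is a hedged illustration of the idea, not the paper's protocol; the acceptance window and all names are assumptions:

```python
import time

class FreshnessFilter:
    """Rejects replayed messages: a serial number may be used only once,
    and a message held back by an attacker becomes stale.
    The max_age window is an assumed parameter for illustration."""
    def __init__(self, max_age=5.0):
        self.seen_serials = set()
        self.max_age = max_age

    def accept(self, serial, timestamp, now=None):
        now = time.time() if now is None else now
        if serial in self.seen_serials:      # replay: serial number reused
            return False
        if now - timestamp > self.max_age:   # stale: sent after a long delay
            return False
        self.seen_serials.add(serial)
        return True

f = FreshnessFilter(max_age=5.0)
assert f.accept(serial=1, timestamp=100.0, now=101.0)       # fresh, new serial
assert not f.accept(serial=1, timestamp=100.0, now=101.5)   # replay of serial 1
assert not f.accept(serial=2, timestamp=100.0, now=110.0)   # held back too long
```

Signatures (not modelled here) additionally stop the attacker from minting new serial numbers of his own.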
Message 11: the RSU performs the revocation, revoking the VC and assigning an IC; this message also contains the time (to avoid a replay attack), the signature (to avoid a fabrication attack), and the Revocation Reason.
The RSU is also responsible for renewing vehicle certificates: any vehicle with an expiring certificate communicates with the RSU to renew it, and the RSU checks the CRL to see whether this vehicle has an IC. If there is no problem in giving the vehicle a new certificate, it is issued for a specific lifetime; when the period expires, the vehicle issues a renewal request to the CA. The VC has a special design, different from that of the X.509 certificate [13], as shown in [1].
4. DISCUSSION

Frequent adversary warning broadcasting increases the load on the channel and keeps the channel busy. It should be noted that an adversary may send frequent ARs just to keep the whole network (vehicles and RSUs) busy with accusation analysis.
Using a CRL limits the warning broadcasting, but still sends large messages about adversaries in the whole world repeatedly, every 0.3 seconds.
To solve these problems, a new adversary list containing the ICs of the adversaries on the local road is created by the following steps.
In this mechanism, all vehicles are provided with an LRL containing information about all the adversaries on the current road; the LRL is received from the RSU nearest to the vehicle on the road. When a vehicle discovers an adversary, it searches for its certificate in its local LRL; if it is there, the vehicle moves the adversary ID to the top of the list to make future searches faster. On the contrary, if the IC is not in the LRL, the vehicle sends a report informing the nearest RSU of the adversary's presence.
When the RSU receives a report from a road vehicle about an adversary, it checks whether the sender's certificate is valid; if it is, the RSU checks whether the adversary's IC is in its LRL, and adds it if not. The updated LRL is broadcast every 0.3 seconds, like the CRL timing [2], to all vehicles on the road. The RSUs on the road receive the LRL broadcast with a flag pointing to the vehicle added to the list, informing the other RSUs to add this IC to their lists.
Each RSU monitors the road for incoming and outgoing vehicles [8]. If an adversary vehicle enters the road, an add flag containing the adversary's IC is broadcast to the rest of the RSUs so that they add it to their own LRLs; on the contrary, if the adversary leaves the road, a remove flag for the adversary's IC is broadcast to the RSUs on the road.
In this way the LRL stays local to the current road, and its size is very small. See Table 2 for the LRL, which contains the ID of the adversary and the serial number of the IC certificate.
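The vehicle-side LRL lookup with its move-to-front heuristic can be sketched as follows. This is an illustrative sketch under the structure of Table 2 (Vehicle ID, IC Serial); class and method names are assumptions, not the paper's code:

```python
class LocalRevocationList:
    """Road-local revocation list with move-to-front lookup."""
    def __init__(self):
        self.entries = []      # ordered list of (vehicle_id, ic_serial)

    def lookup(self, vehicle_id):
        for i, (vid, serial) in enumerate(self.entries):
            if vid == vehicle_id:
                # Move the hit to the top so future searches are faster.
                self.entries.insert(0, self.entries.pop(i))
                return serial
        return None            # unknown adversary: caller reports to the RSU

    def add(self, vehicle_id, ic_serial):
        # Used when the RSU broadcast carries an add flag for a new IC.
        if self.lookup(vehicle_id) is None:
            self.entries.insert(0, (vehicle_id, ic_serial))

lrl = LocalRevocationList()
lrl.add("v9", "IC-001")
lrl.add("v7", "IC-002")
assert lrl.lookup("v9") == "IC-001"
assert lrl.entries[0][0] == "v9"   # v9 moved to the front after the hit
```

Because the list only ever holds the adversaries of one road, a linear scan with move-to-front stays cheap; a CRL covering the whole network would not.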
TABLE II. LRL STRUCTURE.
Vehicle ID IC Serial

5. CONCLUSION

The mechanisms previously proposed in [1] achieved secure certificate revocation, which is considered among the most challenging design objectives in vehicular ad hoc networks; furthermore, they helped vehicles to easily identify adversary vehicles and performed certificate revocation for better certificate management. However, frequent adversary warning broadcasting increases the load on the channel and keeps the channel busy. To solve this problem, a new mechanism was proposed in this paper: the active warning broadcasting is replaced by broadcasting, at a reasonable frequency, a local revocation list containing the ICs of all the adversary vehicles on the current road. This reduces the channel load resulting from the AR broadcasting proposed in [1].

6. References

1. Samara, G. and W.A.H. Al-Salihy, A New Security Mechanism for Vehicular Communication Networks. Proceedings of the International Conference on Cyber Security, Cyber Warfare and Digital Forensic (CyberSec2012), Kuala Lumpur, Malaysia, pp. 18-22, IEEE.
2. DSRC Home Page. [cited 2011-11-21]; Available from: http://www.leearmstrong.com/DSRC/DSRCHomeset.htm.
3. Raya, M., et al., Eviction of misbehaving and faulty nodes in vehicular networks. IEEE Journal on Selected Areas in Communications, 2007. 25(8): pp. 1557-1568.
4. Raya, M. and J.P. Hubaux, The security of vehicular ad hoc networks. Proceedings of the 3rd ACM Workshop on Security of Ad Hoc and Sensor Networks, 2005, ACM.
5. Parno, B. and A. Perrig, Challenges in securing vehicular networks. Proceedings of the Fourth Workshop on Hot Topics in Networks (HotNets-IV), 2005.
6. Samara, G., W.A.H. Al-Salihy, and R. Sures, Security Issues and Challenges of Vehicular Ad Hoc Networks (VANET). 4th International Conference on New Trends in Information Science and Service Science (NISS), 2010, IEEE.
7. Samara, G., W.A.H. Al-Salihy, and R. Sures, Security Analysis of Vehicular Ad Hoc Networks (VANET). Second International Conference on Network Applications, Protocols and Services (NETAPPS), 2010, IEEE.
8. Raya, M., D. Jungels, and P. Papadimitratos, Certificate revocation in vehicular networks. Laboratory for Computer Communications and Applications (LCA), School of Computer and Communication Sciences, EPFL, Switzerland, 2006.
9. Samara, G., S. Ramadas, and W.A.H. Al-Salihy, Design of Simple and Efficient Revocation List Distribution in Urban Areas for VANETs. International Journal of Computer Science, 2010. 8.
10. Laberteaux, K.P., J.J. Haas, and Y.C. Hu, Security certificate revocation list distribution for VANET. Proceedings of the Fifth ACM International Workshop on VehiculAr Inter-NETworking, 2008, ACM.
11. Lin, X., et al., Security in vehicular ad hoc networks. IEEE Communications Magazine, 2008. 46(4): pp. 88-95.
12. Raya, M. and J.P. Hubaux, Securing vehicular ad hoc networks. Journal of Computer Security, 2007. 15(1): pp. 39-68.
13. Stallings, W., Cryptography and Network Security: Principles and Practices, 2003. Prentice Hall.
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 122-129
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)

* Institute of Media Content, Dankook University, 152, Jukjeon-ro, Suji-gu, Yongin, Gyeonggi-do, 448-701, Korea; nirkim@gmail.com
** Dept. of Computer Engineering, Kumoh National Institute of Technology, 61, Daehak-ro, Gumi, Gyeongbuk, 730-701, Korea; jcjeon@kumoh.ac.kr (corresponding author)



KEYWORDS

cellular array, finite field, semi-systolic structure, Montgomery multiplication, arithmetic architecture

1 INTRODUCTION

Finite field arithmetic operations, especially for the binary field GF(2^m), have been widely used in data communication and network security applications such as error-correcting codes [1,2] and cryptosystems such as ECC (Elliptic Curve Cryptosystems) [3,4]. Finite field multiplication is the most frequently studied operation, because time-consuming operations such as exponentiation, division, and multiplicative inversion can be decomposed into repeated multiplications. Thus, a fast multiplication architecture with low complexity is needed to design dedicated high-speed circuits.

Certainly, one of the most interesting and useful advances in this realm has been the Montgomery multiplication algorithm, introduced by Montgomery [5] for fast modular integer multiplication. The multiplication was successfully adapted to the finite field GF(2^m) by Koc and Acar [6], who proposed three Montgomery multiplication algorithms for bit-serial, digit-serial, and bit-parallel multiplication. They chose the Montgomery factor R = x^m for efficient implementation of the multiplication in hardware and software.

Wu [7] chose a new Montgomery factor and showed that choosing the middle term of the irreducible trinomial G(x) = x^m + x^k + 1 as the Montgomery factor, i.e., R = x^k, results in more efficient bit-parallel architectures. In [8], MM is implemented using systolic arrays for all-one polynomials and trinomials. Chiu et al. [9] proposed a semi-systolic array structure for MM which uses R = x^m. Hariri and Reyhani-Masoleh [10] proposed a number of bit-serial and bit-parallel Montgomery multipliers and showed that MM can accelerate ECC scalar multiplication. Recently, in [11], they considered concurrent error detection for MM over the binary field.
Finite Field Arithmetic Architecture Based on Cellular Array

Kee-Won Kim* and Jun-Cheol Jeon**

ABSTRACT

Recently, various finite field arithmetic structures have been introduced for VLSI circuit implementation in cryptosystems and error-correcting codes. In this study, we present an efficient finite field arithmetic architecture based on a cellular semi-systolic array for Montgomery multiplication, choosing a Montgomery factor that is highly suitable for the design of parallel structures. As a result, our architecture reduces the time complexity by 50% compared to the typical architecture.

Three different multipliers, namely bit-serial, digit-serial, and bit-parallel multipliers, have been considered, and a concurrent error detection scheme has been derived and implemented for each of them.

Chiou [12] used recomputing with shifted operands (RESO) to provide a concurrent error detection method for polynomial basis multipliers using an irreducible all-one polynomial, which is a special case of a general polynomial. Lee et al. [13] described a concurrent error detection (CED) method for a polynomial multiplier with an irreducible general polynomial. Chiou et al. [9] also developed a Montgomery multiplier with concurrent error detection capability. Bayat-Sarmadi and Hasan [14] proposed semi-systolic multipliers for various bases, such as the polynomial, dual, and type I and type II optimal normal bases; they also presented semi-systolic multipliers with CED using RESO.

Recently, Huang et al. [15] proposed a semi-systolic polynomial basis multiplier over GF(2^m) that reduces both space and time complexities, as well as semi-systolic polynomial basis multipliers with concurrent error detection and correction capability. Various approaches adopt semi-systolic architectures to reduce the total number of latches and the computation latency, because broadcast signals are permitted. However, almost all existing polynomial multipliers suffer from several shortcomings, including large time and/or hardware overhead and low performance.

In this paper, we consider the shortcomings of the typical architectures and propose a semi-systolic Montgomery multiplier with a new Montgomery factor. We show that an efficient multiplication architecture can be obtained by choosing a proper Montgomery factor, which reduces the time complexity.

The remainder of this paper is organized as follows. Section 2 introduces Montgomery multiplication over finite fields. In Section 3, we propose a Montgomery multiplication architecture based on our algorithm, which is highly optimized for hardware implementation. In Section 4, we analyze our architecture and compare it with a recent study. Finally, Section 5 gives our conclusion.

2 MONTGOMERY
MULTIPLICATION ON FINITE
FIELDS

GF(2^m) is a finite field [16] that contains 2^m different elements. This field is an extension of GF(2), and any A ∈ GF(2^m) can be represented as a polynomial of degree m-1 over GF(2), such as

A = a_{m-1} x^{m-1} + ... + a_1 x + a_0,

where a_i ∈ {0,1}, 0 ≤ i ≤ m-1.

Let x be a root of the polynomial; then the irreducible polynomial G is represented by the following equation:

G = g_m x^m + ... + g_1 x + g_0,   (1)

where g_i ∈ GF(2), 0 ≤ i ≤ m.

Let α and β be two elements of GF(2^m), and define γ = αβ mod G. Also, let A and B be two Montgomery residues, defined as A = αR mod G and B = βR mod G, where GCD(R, G) = 1. Then the Montgomery multiplication over GF(2^m) can be formulated as

P = A · B · R^{-1} mod G,

where R^{-1} is the inverse of R modulo G, and R·R^{-1} + G·G' = 1 [17]. Thus, by the definition of the Montgomery residue, the equation can be expressed as follows:

P = (αR)(βR)R^{-1} mod G = γR mod G.

This means that P is the Montgomery residue of γ. It is therefore possible to convert the operands to Montgomery residues once at the beginning, perform several consecutive multiplications/squarings, and convert the final result back to the original representation; the final conversion is a multiplication by R^{-1}, i.e., γ = P·R^{-1} mod G. The polynomial R plays an important role in the complexity of the algorithm, as we need to do a modulo-R multiplication and a final division by R.
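The Montgomery product defined above can be sketched in software with the classical bit-serial algorithm in the spirit of Koc and Acar [6], using R = x^m. This is an illustrative sketch, not the paper's architecture (the paper's contribution, in Section 3, is choosing R = x^{m/2} instead); polynomials over GF(2) are represented as Python ints with bit i holding the coefficient of x^i:

```python
def mont_mult(a, b, g, m):
    # Bit-serial Montgomery multiplication over GF(2^m) with R = x^m:
    # returns a*b*x^(-m) mod g.
    p = 0
    for i in range(m):
        if (b >> i) & 1:
            p ^= a          # add b_i * A  (XOR is addition in GF(2))
        if p & 1:
            p ^= g          # g_0 = 1, so this clears the constant term
        p >>= 1             # exact division by x
    return p

def polymulmod(a, b, g, m):
    # Schoolbook multiply-then-reduce, used only to check the result.
    r = 0
    for i in range(m):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * m - 2, m - 1, -1):
        if (r >> i) & 1:
            r ^= g << (i - m)
    return r

m, g = 4, 0b10011            # GF(2^4) with g(x) = x^4 + x + 1
a, b = 0b1011, 0b0110        # A = x^3 + x + 1,  B = x^2 + x
p = mont_mult(a, b, g, m)    # P = A*B*x^(-4) mod g
xm = g ^ (1 << m)            # x^m mod g
# P * x^m mod g must equal the plain product A*B mod g:
assert polymulmod(p, xm, g, m) == polymulmod(a, b, g, m)
```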

3 PROPOSED ARCHITECTURE

This section describes the proposed
Montgomery multiplication algorithm
and architecture.

3.1 Proposed Algorithm

Based on the properties of the parallel architecture, we choose the Montgomery factor R = x^{⌊m/2⌋}. Then, the Montgomery multiplication over GF(2^m) can be formulated as

P = A·B·x^{-⌊m/2⌋} mod G.   (2)

We know that x is a root of G, and that g_m and g_0 are always 1 for all irreducible polynomials. Thus, the equations can be rewritten as follows:

x^m mod G = g_{m-1}x^{m-1} + ⋯ + g_1x + 1,   (3)

x^{-1} mod G = x^{m-1} + g_{m-1}x^{m-2} + ⋯ + g_2x + g_1.   (4)

Meanwhile, (2) is represented by substituting A and B as follows:

P = [b_0Ax^{-⌊m/2⌋} + b_1Ax^{-⌊m/2⌋+1} + ⋯ + b_{⌊m/2⌋-2}Ax^{-2} + b_{⌊m/2⌋-1}Ax^{-1} + b_{⌊m/2⌋}A + b_{⌊m/2⌋+1}Ax + ⋯ + b_{m-2}Ax^{⌈m/2⌉-2} + b_{m-1}Ax^{⌈m/2⌉-1}] mod G.   (5)

This shows that P can be divided into two parts: one based on the negative powers of x and the other based on the non-negative powers of x. Hence (5) can be denoted by P = C + D, where

C = [b_{⌊m/2⌋-1}Ax^{-1} + b_{⌊m/2⌋-2}Ax^{-2} + ⋯ + b_1Ax^{-⌊m/2⌋+1} + b_0Ax^{-⌊m/2⌋}] mod G,

D = [b_{⌊m/2⌋}A + b_{⌊m/2⌋+1}Ax + ⋯ + b_{m-2}Ax^{⌈m/2⌉-2} + b_{m-1}Ax^{⌈m/2⌉-1}] mod G.

Meanwhile, let Ā^{(i)} and A^{(i)} denote Ax^{-i} mod G and Ax^{i} mod G, respectively. Then, based on (3) and (4), these can be expressed as

Ā^{(i)} = x^{-1}Ā^{(i-1)} mod G
  = (ā_0^{(i-1)} + ā_1^{(i-1)}x + ⋯ + ā_{m-1}^{(i-1)}x^{m-1})x^{-1} mod G
  = (ā_1^{(i-1)} + ā_0^{(i-1)}g_1) + (ā_2^{(i-1)} + ā_0^{(i-1)}g_2)x + ⋯ + (ā_{m-1}^{(i-1)} + ā_0^{(i-1)}g_{m-1})x^{m-2} + ā_0^{(i-1)}x^{m-1},

A^{(i)} = xA^{(i-1)} mod G
  = (a_0^{(i-1)} + a_1^{(i-1)}x + ⋯ + a_{m-1}^{(i-1)}x^{m-1})x mod G
  = a_{m-1}^{(i-1)} + (a_0^{(i-1)} + a_{m-1}^{(i-1)}g_1)x + ⋯ + (a_{m-2}^{(i-1)} + a_{m-1}^{(i-1)}g_{m-1})x^{m-1},

where

ā_j^{(i)} = ā_{j+1}^{(i-1)} + ā_0^{(i-1)}g_{j+1} for 0 ≤ j ≤ m-2, and ā_j^{(i)} = ā_0^{(i-1)} for j = m-1,   (6)

a_j^{(i)} = a_{j-1}^{(i-1)} + a_{m-1}^{(i-1)}g_j for 1 ≤ j ≤ m-1, and a_j^{(i)} = a_{m-1}^{(i-1)} for j = 0.   (7)
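The coefficient recurrences (6) and (7) can be checked against each other in a few lines: one step multiplies the running polynomial by x^{-1} (for Ā) or by x (for A) modulo G, so applying one after the other must be the identity. Coefficients are kept as bit lists; the example field GF(2^5) with G = x^5 + x^2 + 1 is an illustrative choice of mine.

```python
m = 5
g = [1, 0, 1, 0, 0, 1]   # G = x^5 + x^2 + 1; g[k] is the coefficient of x^k

def step_xinv(a):
    """One application of (6): coefficients of (A * x^{-1}) mod G."""
    out = [0] * m
    for j in range(m - 1):
        out[j] = a[j + 1] ^ (a[0] & g[j + 1])
    out[m - 1] = a[0]
    return out

def step_x(a):
    """One application of (7): coefficients of (A * x) mod G."""
    out = [0] * m
    out[0] = a[m - 1]
    for j in range(1, m):
        out[j] = a[j - 1] ^ (a[m - 1] & g[j])
    return out

a = [1, 1, 0, 0, 1]       # A = x^4 + x + 1
# multiplying by x and then by x^{-1} (in either order) gives A back:
assert step_xinv(step_x(a)) == a
assert step_x(step_xinv(a)) == a
```

Since both steps are exact field multiplications, the inverse round-trip is a useful correctness check for the reduction terms ā_0^{(i-1)}g_{j+1} and a_{m-1}^{(i-1)}g_j.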

Also, using the formulae for Ā^{(i)} and A^{(i)}, the terms C and D are represented as follows:

C = [b_{⌊m/2⌋-1}Ax^{-1} + b_{⌊m/2⌋-2}Ax^{-2} + ⋯ + b_1Ax^{-⌊m/2⌋+1} + b_0Ax^{-⌊m/2⌋}] mod G
  = zA^{(0)} + b_{⌊m/2⌋-1}Ā^{(1)} + b_{⌊m/2⌋-2}Ā^{(2)} + ⋯ + b_1Ā^{(⌊m/2⌋-1)} + b_0Ā^{(⌊m/2⌋)},   (8)

D = [b_{⌊m/2⌋}A + b_{⌊m/2⌋+1}Ax + ⋯ + b_{m-2}Ax^{⌈m/2⌉-2} + b_{m-1}Ax^{⌈m/2⌉-1}] mod G
  = b_{⌊m/2⌋}A^{(0)} + b_{⌊m/2⌋+1}A^{(1)} + ⋯ + b_{m-2}A^{(⌈m/2⌉-2)} + b_{m-1}A^{(⌈m/2⌉-1)},   (9)

where z = 0.

The coefficients of C and D are produced by summing the corresponding coefficients of each term in (8) and (9), respectively. That is, c_j and d_j, for 0 ≤ j ≤ m-1, are represented as

c_j = z·a_j^{(0)} + b_{⌊m/2⌋-1}ā_j^{(1)} + b_{⌊m/2⌋-2}ā_j^{(2)} + ⋯ + b_1ā_j^{(⌊m/2⌋-1)} + b_0ā_j^{(⌊m/2⌋)},


Algorithm 1. COM_C(A,B,G)
Input: A = (a_{m-1}, a_{m-2}, …, a_1, a_0), B' = (b_{⌊m/2⌋-1}, b_{⌊m/2⌋-2}, …, b_1, b_0), G = (g_{m-1}, g_{m-2}, …, g_1, g_0)
Output: C = [b_{⌊m/2⌋-1}Ax^{-1} + b_{⌊m/2⌋-2}Ax^{-2} + ⋯ + b_1Ax^{-⌊m/2⌋+1} + b_0Ax^{-⌊m/2⌋}] mod G

ā_j^{(0)} ← a_j; c_j^{(0)} ← 0; z ← 0
for i = 1 to ⌊m/2⌋+1 do
  for j = 0 to m-1 in parallel do
    if (j = 0) then /* j = 0 */
      ā_{m-1}^{(i)} ← ā_0^{(i-1)};
      c_0^{(i)} ← c_0^{(i-1)} + b_{⌊m/2⌋-i+1}·ā_0^{(i-1)} (or c_0^{(i)} ← c_0^{(i-1)} + z·ā_0^{(i-1)} if i = 1);
    else /* j = 1, 2, …, m-2, m-1 */
      ā_{m-j-1}^{(i)} ← ā_{m-j}^{(i-1)} + ā_0^{(i-1)}·g_{m-j};
      c_{m-j}^{(i)} ← c_{m-j}^{(i-1)} + b_{⌊m/2⌋-i+1}·ā_{m-j}^{(i-1)} (or c_{m-j}^{(i)} ← c_{m-j}^{(i-1)} + z·ā_{m-j}^{(i-1)} if i = 1);
    end if
  end for
end for
return C




d_j = b_{⌊m/2⌋}a_j^{(0)} + b_{⌊m/2⌋+1}a_j^{(1)} + ⋯ + b_{m-2}a_j^{(⌈m/2⌉-2)} + b_{m-1}a_j^{(⌈m/2⌉-1)}.


Now, we obtain the following recurrence equations from the above equations:

c_j^{(i)} = c_j^{(i-1)} + z·a_j^{(i-1)} for i = 1, and c_j^{(i)} = c_j^{(i-1)} + b_{⌊m/2⌋-i+1}·ā_j^{(i-1)} for 1 < i ≤ ⌊m/2⌋+1,

where c_j^{(0)} = 0 for 0 ≤ j ≤ m-1 and z = 0, and

d_j^{(i)} = d_j^{(i-1)} + b_{⌊m/2⌋+i-1}·a_j^{(i-1)}, 1 ≤ i ≤ ⌈m/2⌉,

where d_j^{(0)} = 0 for 0 ≤ j ≤ m-1.

Algorithm 2. COM_D(A,B,G)
Input: A = (a_{m-1}, a_{m-2}, …, a_1, a_0), B'' = (b_{m-1}, b_{m-2}, …, b_{⌊m/2⌋+1}, b_{⌊m/2⌋}), G = (g_{m-1}, g_{m-2}, …, g_1, g_0)
Output: D = [b_{⌊m/2⌋}A + b_{⌊m/2⌋+1}Ax + ⋯ + b_{m-2}Ax^{⌈m/2⌉-2} + b_{m-1}Ax^{⌈m/2⌉-1}] mod G

a_j^{(0)} ← a_j; d_j^{(0)} ← 0
for i = 1 to ⌈m/2⌉ do
  for j = 0 to m-1 in parallel do
    if (j = 0) then /* j = 0 */
      a_0^{(i)} ← a_{m-1}^{(i-1)};
      d_{m-1}^{(i)} ← d_{m-1}^{(i-1)} + b_{⌊m/2⌋+i-1}·a_{m-1}^{(i-1)};
    else /* j = 1, 2, …, m-2, m-1 */
      a_j^{(i)} ← a_{j-1}^{(i-1)} + a_{m-1}^{(i-1)}·g_j;
      d_{j-1}^{(i)} ← d_{j-1}^{(i-1)} + b_{⌊m/2⌋+i-1}·a_{j-1}^{(i-1)};
    end if
  end for
end for
return D

As shown in Algorithms 1 and 2, the parallel computational algorithms for C and D are driven by the above recurrence equations. The proposed COM_C(A,B,G) and COM_D(A,B,G) algorithms can be executed simultaneously, since there is no data dependency between computing C and D.
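The two algorithms can be sketched sequentially in software (the hardware evaluates the inner j-loop in parallel): C accumulates the b_{⌊m/2⌋-1}, …, b_0 terms through repeated division by x, D accumulates the b_{⌊m/2⌋}, …, b_{m-1} terms through repeated multiplication by x, and P = C + D. The example field GF(2^5) with G = x^5 + x^2 + 1 is an illustrative choice of mine, and the result is verified against a direct carry-less multiplication.

```python
m = 5
g = [1, 0, 1, 0, 0, 1]             # coefficients of G = x^5 + x^2 + 1
G_int = 0b100101
h = m // 2                          # Montgomery factor R = x^h

def com_c(a, b):
    """Sequential sketch of Algorithm 1 (z = 0, so the z-step is a no-op)."""
    abar, c = list(a), [0] * m
    for i in range(1, h + 1):
        # one x^{-1} step, recurrence (6)
        abar = [abar[j + 1] ^ (abar[0] & g[j + 1]) for j in range(m - 1)] + [abar[0]]
        c = [c[j] ^ (b[h - i] & abar[j]) for j in range(m)]
    return c

def com_d(a, b):
    """Sequential sketch of Algorithm 2."""
    apos, d = list(a), [0] * m
    for i in range(1, m - h + 1):
        d = [d[j] ^ (b[h + i - 1] & apos[j]) for j in range(m)]
        # one x step, recurrence (7)
        apos = [apos[m - 1]] + [apos[j - 1] ^ (apos[m - 1] & g[j]) for j in range(1, m)]
    return d

def to_int(bits):
    return sum(bit << k for k, bit in enumerate(bits))

def pmul(x, y):                     # carry-less product
    r = 0
    while y:
        if y & 1:
            r ^= x
        x <<= 1
        y >>= 1
    return r

def pmod(x, q):                     # polynomial remainder over GF(2)
    while x.bit_length() >= q.bit_length():
        x ^= q << (x.bit_length() - q.bit_length())
    return x

a = [1, 1, 0, 0, 1]                 # A = x^4 + x + 1
b = [0, 1, 1, 0, 1]                 # B = x^4 + x^2 + x
c, d = com_c(a, b), com_d(a, b)
p = [cj ^ dj for cj, dj in zip(c, d)]   # P = C + D
# P * x^h == A * B (mod G), i.e. P = A*B*x^{-h} mod G as claimed:
assert pmod(pmul(to_int(p), 1 << h), G_int) == pmod(pmul(to_int(a), to_int(b)), G_int)
```

Because com_c and com_d touch disjoint halves of B and keep independent running polynomials, they can indeed run concurrently, as the text notes.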

3.2 Proposed Multiplier

Based on the proposed algorithms, the hardware architecture of the proposed semi-systolic Montgomery multiplier is shown in Figure 1. The upper, lower, and middle parts of the array compute C, D, and C+D, respectively. Our architecture is composed of ⌊m/2⌋+1 U_0^{(i)} cells, (m-1)(⌊m/2⌋+1) U_j^{(i)} cells, ⌈m/2⌉ V_0^{(i)} cells, (m-1)⌈m/2⌉ V_j^{(i)} cells, and one S cell.

[Figure omitted: the array of U cells (computing C), V cells (computing D), and the S cell (computing C+D) with their a, b, c, d, g, and p signal connections.]

Figure 1. The proposed semi-systolic Montgomery multiplier over GF(2^m)


The detailed circuits of the cells in Figure 1 are depicted in Figure 2 through Figure 4, where ⊕, ⊗, and D denote an XOR gate, an AND gate, and a one-bit latch (flip-flop), respectively.
[Figure omitted: (a) the U_0^{(i)} cell, (b) the U_j^{(i)} cell.]

Figure 2. Circuit configuration of the U_0^{(i)} and U_j^{(i)} cells
The latency of the proposed semi-systolic multiplier is ⌊m/2⌋+1 clock cycles. Each clock cycle takes the delay of one 2-input AND gate, one 2-input XOR gate, and one 1-bit latch. The space complexity of this multiplier is 2m^2+m-1 2-input AND gates, 2m^2+2m-1 2-input XOR gates, and 3m^2+2m-1 (for odd m) or 3m^2+3m-1 (for even m) 1-bit latches.

Note that the U_j^{(i)} (U_0^{(i)}) and V_j^{(i)} (V_0^{(i)}) cells in Figures 2 and 3 are functionally equivalent, and their computations can be executed in parallel; the computed results are added in the S cell. In Figure 4, D* denotes a one-bit latch when m is even, and otherwise it is omitted.
[Figure omitted: (a) the V_0^{(i)} cell, (b) the V_j^{(i)} cell.]

Figure 3. Circuit configuration of the V_0^{(i)} and V_j^{(i)} cells

4 COMPLEXITY ANALYSIS

In CMOS VLSI technology, each gate is composed of several transistors [18]. We adopt A_AND2 = 6, A_XOR2 = 6, and A_LATCH1 = 8, where A_GATEn denotes the transistor count of an n-input gate. Also, for a further comparison of time complexity, we adopt the practical integrated circuits in [19], and the following assumptions, discussed in detail in [15], are made: T_AND2 = 7, T_XOR2 = 12, and T_LATCH1 = 13, where T_GATEn denotes the propagation delay of an n-input gate.

[Figure omitted: the S cell adds c_j and d_j through D* latches to produce p_j.]

Figure 4. Circuit configuration of the S cell

Table 1. Comparison of semi-systolic polynomial basis architectures

gate/delay        | [15]   | Fig. 1 (even m / odd m)
------------------|--------|------------------------------------------
Number of cells   | m^2    | U_0: ⌊m/2⌋+1; U_j: (m-1)(⌊m/2⌋+1); V_0: ⌈m/2⌉; V_j: (m-1)⌈m/2⌉; S: 1
2-input AND       | 2m^2   | 2m^2+m-1
2-input XOR       | 2m^2   | 2m^2+2m-1
3-input XOR       | 0      | 0
one-bit latch     | 3m^2   | 3m^2+3m-1 / 3m^2+2m-1
Total transistors | 48m^2  | 48m^2+42m-20 / 48m^2+34m-20
Cell delay (ns)   | 32     | 32
Latency           | m      | 0.5m+1 / 0.5m+0.5
Total delay (ns)  | 32m    | 16m+32 / 16m+16

A circuit comparison between the proposed multiplier and the related multiplier is given in Table 1. Although the proposed multiplier has nearly the same space complexity as that of Huang et al. [15], its time complexity is reduced by approximately 50%.
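The closed-form entries of Table 1 can be cross-checked by summing the per-gate costs directly, using the unit costs A_AND2 = A_XOR2 = 6, A_LATCH1 = 8 and T_AND2 = 7, T_XOR2 = 12, T_LATCH1 = 13 adopted above:

```python
A_AND2, A_XOR2, A_LATCH1 = 6, 6, 8           # transistors per gate
CELL_DELAY = 7 + 12 + 13                      # T_AND2 + T_XOR2 + T_LATCH1 = 32 ns

def proposed(m):
    """Transistor count and total delay of the Fig. 1 multiplier."""
    ands = 2 * m * m + m - 1
    xors = 2 * m * m + 2 * m - 1
    latches = 3 * m * m + (3 * m if m % 2 == 0 else 2 * m) - 1
    transistors = A_AND2 * ands + A_XOR2 * xors + A_LATCH1 * latches
    latency = m // 2 + 1                      # floor(m/2)+1 clock cycles
    return transistors, latency * CELL_DELAY

def huang_et_al(m):
    """The multiplier of [15]: 48*m^2 transistors, latency m cycles."""
    return 48 * m * m, m * CELL_DELAY

for m in (8, 163):                            # one even, one odd field size
    t_new, d_new = proposed(m)
    t_old, d_old = huang_et_al(m)
    if m % 2 == 0:
        assert t_new == 48 * m * m + 42 * m - 20 and d_new == 16 * m + 32
    else:
        assert t_new == 48 * m * m + 34 * m - 20 and d_new == 16 * m + 16
    assert t_old == 48 * m * m and d_old == 32 * m
```

For large m the transistor overhead (linear in m) is negligible against the quadratic term, while the total delay roughly halves, matching the claimed trade-off.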

5 CONCLUSION

In this paper, we propose a cellular semi-systolic architecture for Montgomery multiplication over finite fields. We choose a novel Montgomery factor that is highly suitable for the design of parallel structures. We also divide the architecture into three parts and compute two of them in parallel, reducing the time complexity by nearly 50% compared with the recent study while maintaining a similar space complexity. We expect that our architecture can be used efficiently in various applications that demand high-speed arithmetic computation.

6 ACKNOWLEDGMENT

This research was supported by Basic
Science Research Program through the
National Research Foundation of
Korea(NRF) funded by the Ministry of
Education, Science and Technology
(2011-0014977).

7 REFERENCES

1. W. W. Peterson and E. J. Weldon, Error-Correcting Codes, MIT Press, Cambridge (1972).
2. R. E. Blahut, Theory and Practice of Error Control Codes, Addison-Wesley, Reading (1983).
3. W. Diffie and M. E. Hellman, "New directions in cryptography," IEEE Transactions on Information Theory, vol. 22, no. 6, pp. 644-654 (1976).
4. B. Schneier, Applied Cryptography, John Wiley & Sons, 2nd edition (1996).
5. P. Montgomery, "Modular multiplication without trial division," Mathematics of Computation, vol. 44, no. 170, pp. 519-521 (1985).
6. C. Koc and T. Acar, "Montgomery multiplication in GF(2^k)," Designs, Codes and Cryptography, vol. 14, no. 1, pp. 57-69 (1998).
7. H. Wu, "Montgomery multiplier and squarer for a class of finite fields," IEEE Trans. Computers, vol. 51, no. 5, pp. 521-529 (2002).
8. C. Y. Lee, J. S. Horng, I. C. Jou and E. H. Lu, "Low-complexity bit-parallel systolic Montgomery multipliers for special classes of GF(2^m)," IEEE Transactions on Computers, vol. 54, no. 9, pp. 1061-1070 (2005).
9. C. W. Chiou, C. Y. Lee, A. W. Deng and J. M. Lin, "Concurrent error detection in Montgomery multiplication over GF(2^m)," IEICE Trans. Fundamentals of Electronics, Communications and Computer Sciences, vol. E89-A, no. 2, pp. 566-574 (2006).
10. A. Hariri and A. Reyhani-Masoleh, "Bit-serial and bit-parallel Montgomery multiplication and squaring over GF(2^m)," IEEE Trans. Computers, vol. 58, no. 10, pp. 1332-1345 (2009).
11. A. Hariri and A. Reyhani-Masoleh, "Concurrent error detection in Montgomery multiplication over binary extension fields," IEEE Trans. Computers, vol. 60, no. 9, pp. 1341-1353 (2011).
12. C. W. Chiou, "Concurrent error detection in array multipliers for GF(2^m) fields," IEE Electronics Letters, vol. 38, no. 14, pp. 688-689 (2002).
13. C. Y. Lee, C. W. Chiou and J. M. Lin, "Concurrent error detection in a polynomial basis multiplier over GF(2^m)," J. Electronic Testing: Theory and Applications, vol. 22, no. 2, pp. 143-150 (2006).
14. S. Bayat-Sarmadi and M. A. Hasan, "Concurrent error detection in finite field arithmetic operations using pipelined and systolic architectures," IEEE Trans. Computers, vol. 58, no. 11, pp. 1553-1567 (2009).
15. W. T. Huang, C. H. Chang, C. W. Chiou and F. H. Chou, "Concurrent error detection and correction in a polynomial basis multiplier over GF(2^m)," IET Information Security, vol. 4, no. 3, pp. 111-124 (2010).
16. R. Lidl and H. Niederreiter, Introduction to Finite Fields and Their Applications, Cambridge Univ. Press (1986).
17. J. C. Jeon and K. Y. Yoo, "Montgomery exponent architecture based on programmable cellular automata," Mathematics and Computers in Simulation, vol. 79, pp. 1189-1196 (2008).
18. N. Weste and K. Eshraghian, Principles of CMOS VLSI Design: A System Perspective, Addison-Wesley, Reading, MA (1985).
19. STMicroelectronics, available at http://www.st.com/
An AIS Inspired Alert Reduction Model

International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 130-139
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)

Mohammad Mahboubian, Nur Izura Udzir, Shamala Subramaniam, Nor Asila Wati Abdul Hamid
mahboubian.uni@gmail.com, {izura, shamala, asila}@fsktm.upm.edu.my
Faculty of Computer Science and Information Technology
University Putra Malaysia, Serdang, Selangor, Malaysia
Abstract- One of the most important topics in the field of intrusion detection systems (IDS) is reducing the overwhelming number of alerts generated by IDSs in a network. Inspired by danger theory, one of the most important theories in artificial immune systems (AIS), we propose a complementary subsystem for IDS that can be integrated into any existing IDS model to aggregate alerts, and thereby reduce both their number and the false alarms among them. After evaluation using different datasets, attack scenarios, and rule sets, in the best case our model aggregated the alerts at an average rate of 97.5 percent.
Keywords- Intrusion detection system; alert fusion; alert correlation; artificial immune system; danger theory
1.0 Introduction
In recent years intrusion detection systems (IDS) have been widely adopted in computer networks as must-have appliances that monitor the network and look for malicious activities. They can be deployed either at the network level, to monitor activity across the network, or at the host level, to monitor activity on a particular machine. In both cases, after detecting a malicious activity they send an alert to the network administrator.
Each alert contains information about the malicious activity, such as the source IP address, source port number, destination IP address, and so on. Thus, for a single attack on a network or any of its hosts, thousands of alerts may be generated and sent to the network administrator. Some of these alerts may not even be valid, having been generated by wrong detections (false positives) in the network. This matters because a significant number of alerts is generated every day, and processing them is a tedious task for network administrators, especially when many of them are invalid false-positive detections. Therefore, in the last few years one of the most studied topics in network security, and more specifically in intrusion detection systems, has been finding solutions to this problem.
To reduce the overwhelming number of generated alerts, some researchers have suggested aggregating alerts into clusters, which is also called alert fusion. The final objective of aggregation is to group all similar alerts together. During aggregation, alerts are put into groups based on the similarity of their corresponding features [25], such as source IP, destination IP, source port, destination port, attack class, and timestamp. Other researchers have investigated approaches that correlate attack scenarios based on the alerts. Alert correlation provides the network administrator with a higher-level view of a multi-staged attack.
Three main approaches have been used in the literature for correlating alerts into attack scenarios.
In the first approach, the relationships between alerts are hardcoded in the system. These methods are limited to the
predefined rules available in the knowledge base of the system. In the second approach, devised to overcome this limitation, techniques such as machine learning and data mining have been suggested to extract relationships between alerts, but these approaches require a lengthy initial period of training. In these approaches, the co-occurrence of alerts within a predefined time window is used as an important feature for the statistical analysis of alerts. This involves pair-wise comparison between alerts, since every two alerts might be similar and therefore correlatable [25]. These repeated comparisons lead to a very large computational overhead, especially in large-scale networks, where thousands of alerts per minute can be expected.
Finally, in the third approach, some recent works have focused on filtering out false-positive alerts.
In this paper we propose a new aggregation method, inspired by the artificial immune system and more specifically by danger theory, which attempts to aggregate the generated alerts based on the prediction of attack scenarios. The proposed algorithm is able to reduce alerts before passing them to the network administrator and also to remove false positives from the generated alerts.
The remainder of this paper is organized as follows: in Section 2 we present a brief review of previous work in the literature. In Section 3 we describe the proposed model and discuss some aspects related to alert aggregation. Section 4 presents experimental results, and finally we conclude in Section 5.
Artificial Immune System:
An artificial immune system is a mathematical model based on the human body's defence system. The natural immune system is a remarkable and complex defence mechanism that protects the organism from foreign invaders such as viruses. It is therefore vital for the defence system to distinguish between self-cells and other cells, and to ensure that lymph cells do not react against the body's own cells. To achieve this, the human body goes through a "negative selection" process [16] in which T-cells that react against self-proteins are destroyed, so that only cells with no similarity to self-proteins survive. These surviving cells, now called matured T-cells, are ready to protect the body against foreign antigens.
Danger Theory:
This theory was first proposed by Matzinger in 1994 [17]. According to it, not every foreign cell in our body should be considered an antigen. For instance, the food we eat is also a foreign invader to our body, yet the body does not react to it.
Danger theory suggests that foreign invaders that are dangerous induce the generation of danger signals by initiating cellular stress or cell death [19]. These molecules are then detected by APCs, critical cells in the initiation of an immune response, leading to a protective immune defence. In general there are two types of danger signals: in the first category the danger signals are generated by the body itself, and in the second category they are derived from invading organisms, e.g. bacteria [20].
2.0 Related Works
Recently, there have been several proposals on alert fusion. Generally, each method combines duplicated alerts (alerts that are very similar to each other) from the same or different sensors to remove a large share of the alerts. Here we review some of the work done in the last few years.
To measure similarities between alerts, the pioneers in the field of alert aggregation, Valdes and Skinner [1], proposed a method in which alerts are grouped into different clusters based on their overall similarity, determined from their similarities on the corresponding features. Unfortunately, this method relies on expert knowledge to determine the similarity degree between attack classes.
In [2], the authors presented an algorithm that fuses multiple heterogeneous alerts to create scenarios, building scenarios by adding each alert to the most likely one. To do so, it computes the probability that a new alert belongs to one of the existing scenarios.
Ning et al. [3] constructed a series of prerequisites and consequences of intrusions. By developing a formal model, they correlated related alerts by matching the outcome of previously seen alerts with the preconditions of later alerts. Julisch [4] used root causes to solve the problem of alert attribute similarity. Although this approach was effective, finding the root causes of alert attributes is very difficult and seems impractical in large networks. Chung et al. [5] used the Correlated Attack Modelling Language (CAML) to model multistep attack scenarios and then let correlation engines process these models to recognize attack scenarios. However, it is not easy for this algorithm to model new variants of attacks.
Valeur et al. [6] introduced a ten-step Comprehensive IDS Alert-Correlation (CIAC) system that uses exact feature similarity in two of its ten steps. Qin and Lee [7] proposed a statistics-based correlation algorithm to predict novel attack strategies. This approach combines correlation based on Bayesian inference with a broad range of indicators of attack impact and correlation based on the Granger causality test. However, this algorithm cannot be used to predict complex multi-staged attacks, because of its high false-positive rate.
In another work, Qin and Lee [8] proposed an approach that applies Bayesian networks to IDS alerts in order to conduct probabilistic inference of attack sequences and predict possible upcoming attacks. In [9] the authors introduced bi-directional and multi-host causality to correlate distinct network and host IDS alerts, but if the number of false-positive alerts increases, recognition mistakes may occur. Zhu and Ghorbani [10] used the probabilistic output of two different neural network approaches, namely the Multilayer Perceptron (MLP) and the Support Vector Machine (SVM), to determine the correlation between the current alert and previous alerts, using an Alert Correlation Matrix (ACM) to store the correlation level of any two types of alerts. Wang et al. [11] proposed a new data mining algorithm to construct attack scenarios. This algorithm allows multi-stage attack behaviours to be recognized, and it also predicts the potential attack
steps of the attacker. However, sufficient training is required to calculate the threshold used in this approach. To detect DDoS attacks, Lee [12] proposed clustering analysis using the concept of entropy, then calculated the similarity of attack attributes between two alerts using the Euclidean distance. Fava et al. [13] proposed a new approach based on Variable Length Markov Models (VLMM), a framework for the characterization and prediction of cyber attack behaviour. VLMM can predict the occurrence of a new attack; however, it does not know what kind of attack it is. Zhang et al. [14] used the Forward and Viterbi algorithms based on an HMM to recognize the attacker's intention and forecast the next possible step of a multi-step attack. With a Finite State Machine (FSM) designed for forecasting attacks, the Forward algorithm determines the most probable attack scenario, and the Viterbi algorithm identifies the attacker's intention. Du et al. [15] proposed two ensemble approaches to project the likely future targets of ongoing multi-stage attacks rather than their future stages.
3.0 Proposed Model
Figure 1. The proposed model
We assume that all types of computer attacks can be categorized into the following general groups:
a) One-to-one: the attacker attacks one of the machines on the network. This can be a probe, a DoS attack, or the exploitation of services on that host.
b) Many-to-one: many machines (zombies) attack one of the machines on the network, most probably in a form of DDoS attack.
c) One-to-many: the attacker attacks many machines on the network, as in a probe attack.
According to danger theory, an alert or group of alerts is considered valid (dangerous) only if it initiates the danger signal.
To raise the danger signal, certain conditions must be satisfied, and these conditions are defined prior to the implementation of the system. We therefore have a list of conditions such that, if any of them is satisfied by a group of alerts, that group is considered dangerous and is reported to the network administrator immediately.
The proposed model not only tries to aggregate the alerts based on their common features but also correlates the attacks internally in order to aggregate the alerts better.
Figure 1 shows our proposed model. It consists of six components, the Alert Collector, the Alert Parser, Alert Filtration and Validation, Danger Signal Detection, the Final Alert Preparation Module, and the Database, and is designed so that any component can be replaced with a new implementation of that component depending on the network situation.
The main components of this model are as follows:
a) Alert Collector Module (CM): This module is responsible for collecting the alerts from all the IDS sensors in the network. After an IDS sensor generates an alert, instead of being sent directly to the administrator, the alert is sent to this module, which registers it in the model for processing. Another objective of this module is to standardize the alerts: IDS sensors might generate alerts in different formats, and in order to process and compare the received alerts they must be in the same format. Note also that, because this module receives an enormous volume of alerts, it must be implemented using a very robust multi-threaded software technology.
b) Alert Filtration and Validation (FVM): One of the prerequisites of using this model is keeping a list of the IP addresses and services running on all machines under our administrative territory. Using this information, this module filters out alerts that do not make sense, such as an alert about an attack on a web server on a machine that runs no web server. This module also aggregates alerts that are exactly similar feature-wise, helping to reduce redundant alerts.
c) Alert Parser Module (PM): The main objective of this module is to categorize and classify all validated alerts into one of the groups mentioned earlier: one-to-one, many-to-one, and one-to-many.
d) Danger Signal Detection Module (DSDM): This is the most important module in the model. It implements one of the most famous theories in the field of artificial immune systems, known as danger theory. Its main function is to analyze all alerts received in a specific time window in an attempt to correlate a multi-step attack and aggregate all related alerts into a group that is later presented to the administrator as a single alert. To achieve this objective, a series of generalized rules is hardcoded into the module. Based on these rules and the actual characteristics of the available alerts, the module dynamically decides whether a group of alerts is related to a multi-step attack and can be aggregated into a single alert.
e) Final Alert Preparation Module (FAPM): The results of the previous module are sent to this last module to be made presentable before being passed to the administrator.
3.1 Model Implementation
The proposed model has a module, namely the Danger Signal Detection Module (DSDM), which decides whether a group of alerts is likely to raise the danger signal, and reports dangerous groups of alerts to the network administrator.
The steps to implement this model are:
a) First, we provide the model with information about the machines on the network, such as their IP addresses, the list of services running on each machine and, in the case of a host IDS, the id of the IDS on that machine. This step should be repeated periodically in order to prevent concept drift.
b) Next, all alerts are grouped into one of the groups explained earlier. Priority goes to alerts that are exactly similar in terms of their features; after grouping those, priority goes to the second type of alerts, namely one-to-many. This is because, before attacking a network, the attacker needs to know about the machines on it, so he/she initiates a probe of the network, which generates these types of alerts. After this group, the lowest priorities belong to the one-to-one and many-to-one types of alerts. The grouping is done within an adjustable time window and is based on the source IP address, destination IP address, destination port number, timestamp and, in the case of a host-based IDS, the id of the IDS.
c) Each group is then checked to find out whether it is capable of raising the danger alarm (danger theory).
d) For each group that passes the check, a record is registered in a database to keep track of the status of the attack, as this is one of the sources that can indicate the existence of a danger signal. Finally, an alert is sent to the network administrator containing information about the attack, as well as all the machines' IP addresses (source and destination) or port numbers contributing to this alarm.
e) Alerts generated by network-based IDSs and host-based IDSs are grouped separately, but host-based IDS alerts are important in determining the severity of network-based IDS alerts.
3.2 Danger Signal Detection Module
This module indicates whether a group of alerts is capable of raising the danger alarm, which is decided by a list of rules. The following are some of the most important rules in this model:
a) In general, the existence of a one-to-many
alert group (generated by a network-
based IDS) in the database, followed by
a one-to-one alert group (generated by a
host-based IDS), will raise the danger
alarm. This is because a hacker first
scans the machines on a network and,
after finding a machine with a particular
service running on it, tries to exploit
that service to gain access to that
machine.
b) If the source IPs in the alert group are
external and the port number(s) do not
match actual services running on the
internal machines, this is an indication
of a danger signal and will be reported.
c) If the source IPs in the alert group are
internal, the port number(s) match the
actual services running on the
destination machine(s), and the number
of alerts in the group is not more than a
predefined value, then this group is
ignored.
d) If the alert group contains more than
one source IP and a single destination
IP, this will raise the danger alarm.
e) If the alert group contains a single
source IP and more than one destination
IP, this will raise the danger alarm.
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 130-139
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)
135
f) If the alert group contains one source IP
and one destination IP, and we recently
have a record in the database related to
this source IP address (a probe), then
this will raise the danger signal.
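A minimal sketch of how rules (b), (d), (e) and (f) could be checked against one alert group; the data layout and names here are illustrative assumptions, not the paper's implementation:

```python
def danger_signal(group, probe_db, known_services, internal_prefix="10.8."):
    """Return True when an alert group satisfies one of the danger
    rules; probe_db records source IPs of previously seen probes."""
    sources = {a["src_ip"] for a in group}
    targets = {a["dst_ip"] for a in group}
    external = all(not s.startswith(internal_prefix) for s in sources)
    matched = all(a["dst_port"] in known_services.get(a["dst_ip"], set())
                  for a in group)
    if external and not matched:                  # rule (b): bogus services
        return True
    if len(sources) > 1 and len(targets) == 1:    # rule (d)
        return True
    if len(sources) == 1 and len(targets) > 1:    # rule (e)
        return True
    if (len(sources) == 1 and len(targets) == 1   # rule (f): earlier probe
            and next(iter(sources)) in probe_db):
        return True
    return False
```

Rules (a) and (c) would additionally need the group's generating IDS type and an alert-count threshold, omitted here for brevity.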
The similarity function S between two
given alerts a and b is calculated as
follows:

S(a, b) = sum_{k=1..n} w_k * s_k(a, b)    (1)

where n is the total number of features,
s_k(a, b) is the similarity of feature k
between the two alerts (a value between 0
and 1), and w_k is the weight of that
particular feature, such that

sum_{k=1..n} w_k = 1    (2)

Having a different weight for each feature
leads to more precise grouping of the
alerts. Among our feature set, the source IP
address and the timestamp have the highest
weights.
Therefore, to calculate the similarity of two
alerts we need to calculate s_k(a, b) for the
source IP address, the destination IP
address, the destination port number and
the timestamp, and, in the case of a host-
based IDS, for the ID of the IDS.
After normalizing formula (1), the
similarity value between two alerts lies
between 0 and 1: 0 when the two alerts are
completely different and 1 when they
are identical.
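Equation (1) can be sketched directly in code; the concrete weights below are illustrative assumptions (the text only states that source IP and timestamp carry the highest weights and that the weights sum to 1 per Eq. (2)):

```python
# Illustrative weights: source IP and timestamp highest, sum equals 1.
WEIGHTS = {"src_ip": 0.3, "timestamp": 0.3, "dst_ip": 0.2, "dst_port": 0.2}

def feature_similarity(name, x, y, window=60.0):
    """Per-feature similarity s_k in [0, 1]; timestamps decay linearly
    to 0 over one time window, other features match exactly or not."""
    if name == "timestamp":
        return max(0.0, 1.0 - abs(x - y) / window)
    return 1.0 if x == y else 0.0

def similarity(a, b):
    """S(a, b) = sum_k w_k * s_k(a, b): 0 means completely different
    alerts, 1 means identical alerts."""
    return sum(w * feature_similarity(k, a[k], b[k])
               for k, w in WEIGHTS.items())
```

Two alerts sharing only their source IP would then score 0.3, well below a grouping threshold, while identical alerts score 1.0.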
4.0 Experimental Results
In order to evaluate our model, we first set
up a network of seven computers in which
two computers play the role of attackers,
with a different IP address class so that
they are considered external machines
(Figure 2). Next, for each machine inside
the network we configured different
services, such as a file server, a web
service, a remote desktop service and so
on. As the IDS, we used our own IDS
proposed in [21]. Table 1 shows the
services running on each of the
workstations.
Next, we simulated different kinds of
attacks in order to generate alerts, starting
with probes (including vertical and
horizontal port scans) and DoS attacks,
and finally exploiting different services on
the workstations to gain access to the
machines and elevate the access level.
The first attacker (10.8.1.100) starts by
scanning the whole network range and
finding the running services on each of the
discovered workstations. It then tries to
exploit different services on different
workstations one by one. At the same time,
the second attacker (10.8.1.200) scans the
whole network and initiates a DoS attack
against one of the discovered machines.
These activities caused the IDS to generate
more than 3000 alerts. These alerts were
processed by the model, and the final
number of alerts was 31; the proposed
model therefore showed a very good alert
reduction performance of 98.95%.
Table 1 - Services running on each workstation
Workstation   Service(s)
10.8.0.2      FTP (port 21)
10.8.0.3      Web server (port 80)
10.8.0.4      SMTP (port 25) and IMAP (port 143)
10.8.0.5      RDP (port 3389)
10.8.0.6      SSH (port 22)
Figure 2 The network setup for the first
experiment
To better evaluate our proposed model, we
considered the LLDOS1.0 and LLDOS2.0
attack scenarios of DARPA 2000 [16] as
test datasets. These datasets contain a large
amount of normal data and attack data and
are well known among IDS researchers
[22, 23]. For this experiment we used a
part of these datasets. In order to simulate
the networks, we used NetPoke from
DARPA to replay the datasets, and once
again we used our own IDS for attack
detection and alert generation. A total of
12068 alerts was generated by our IDS.
We then updated the model with the
services running on each of the machines
in these networks. Finally, we ran these
experiments multiple times, each time with
a different set of rules in the Danger
Signal Detection Module. In all cases we
made sure that these rules were sufficiently
general to be usable in other networks as
well; they were not crafted only for these
experiments.
The following tables show the reduction
percentage at each level of our model for
the worst and best cases that we achieved.
These results show that it is possible for
this model to achieve an alert reduction
rate of 98.5% for LLDOS1.0 and 97.02%
for LLDOS2.0 if the correct rule set is
used. Some of the modules are not meant
for alert reduction; they mostly handle
other issues, such as parsing the incoming
alerts or rearranging alerts to make them
more presentable for the end user, which in
this case is the network administrator.
Table 2 - LLDOS1.0 worst case result
        FVM    PM     DSDM   FAPM   SUM
Input   7054   4901   4893   1951   7054
Output  4901   4893   1951   1945   1945
%                     60.13          72.4

Table 3 - LLDOS1.0 best case result after updating the rules
        FVM    PM     DSDM   FAPM   SUM
Input   7054   4901   4893   112    7054
Output  4901   4893   112    106    106
%                     97.71          98.5

Table 4 - LLDOS2.0 worst case result
        FVM    PM     DSDM   FAPM   SUM
Input   5014   3818   3812   1909   5014
Output  3818   3812   1909   1915   1915
%                     49.92          61.8

Table 5 - LLDOS2.0 best case result after updating the rules
        FVM    PM     DSDM   FAPM   SUM
Input   5014   3818   3812   153    5014
Output  3818   3812   153    149    149
%                     95.98          97.02
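The percentage rows in Tables 2-5 follow directly from each column's input and output counts (small last-digit differences appear to come from rounding); a quick check against Table 3:

```python
def reduction(n_in, n_out):
    """Percent of alerts removed between a stage's input and output."""
    return 100.0 * (n_in - n_out) / n_in

# Table 3 (LLDOS1.0 best case): DSDM stage and end-to-end (SUM) figures
dsdm_pct = round(reduction(4893, 112), 2)   # 97.71, as reported
total_pct = round(reduction(7054, 106), 2)  # 98.5, as reported
```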
As part of our immediate future work, we
intend to experiment with this model on
the Capture the Flag 2010 dataset [24].
5.0 Conclusion
In this paper we proposed a model to fuse
the alerts generated by the IDSs in a
computer network. Inspired by the human
defence system, this model utilizes one of
the most important theories in Artificial
Immune Systems (AIS), danger theory, and
attempts to aggregate alerts based on a
general set of predefined rules, while also
reducing false alarms. In contrast with
existing rule-based alert correlation
models, which are limited by their sets of
predefined rules, this model has no such
limitation on alert aggregation, because its
predefined rules are very general. After
experimenting with this model in a real
network environment and on existing
datasets from the literature, the proposed
model managed to aggregate alerts at an
average rate of 97.5 percent.
References
[1] A. Valdes and K. Skinner, "Probabilistic Alert
Correlation", in Proceedings of the 4th
International Symposium on Recent Advances
in Intrusion Detection, 2001, pp. 54-68.
[2] O. M. Dain and R. K. Cunningham, "Fusing a
heterogeneous alert stream into scenarios", in
Proceedings of the 2001 ACM Workshop on
Data Mining for Security Applications, pp.
1-13, 2001.
[3] P. Ning, Y. Cui, and D. S. Reeves,
"Constructing attack scenarios through
correlation of intrusion alerts", in Proceedings
of the 9th ACM Conference on Computer and
Communications Security, pp. 245-254,
2002.
[4] K. Julisch, "Using root cause analysis to handle
intrusion detection alarms", PhD Thesis,
University of Dortmund, Germany, 2003.
[5] S. Cheung, U. Lindqvist and M. W. Fong,
"Modeling multistep cyber attacks for scenario
recognition", in Proceedings of the Third
DARPA Information Survivability Conference
and Exposition (DISCEX III), Washington,
D.C., April 2003.
[6] F. Valeur, G. Vigna, C. Kruegel and R. A.
Kemmerer, "A comprehensive approach to
intrusion detection alert correlation", IEEE
Transactions on Dependable and Secure
Computing, vol. 1, no. 3, pp. 146-169, Jul.-Sep.
2004.
[7] X. Qin and W. Lee, "Discovering novel attack
strategies from INFOSEC alerts", in
Proceedings of the 9th European Symposium
on Research in Computer Security (ESORICS
2004), pp. 439-456, 2004.
[8] X. Qin and W. Lee, "Attack plan recognition
and prediction using causal networks", in
Proceedings of the 20th Annual Computer
Security Applications Conference, 2004.
[9] S. King, M. Mao, D. Lucchetti, and P. Chen,
"Enriching intrusion alerts through multi-host
causality", in Proceedings of the Network and
Distributed Systems Security Symposium, San
Diego, CA, 2005.
[10] B. Zhu and A. A. Ghorbani, "Alert correlation
for extracting attack strategies", International
Journal of Network Security, vol. 3, no. 3,
pp. 244-258, November 2006.
[11] L. Wang, Z. T. Li and Q. H. Wang, "A novel
technique of recognizing multi-stage attack
behaviour", in Proceedings of the IEEE
International Workshop on Networking,
Architecture and Storage, p. 188, 2006.
[12] K. Lee, J. Kim, K. H. Kwon, Y. Han and
S. Kim, "DDoS attack detection method using
cluster analysis", Expert Systems with
Applications, vol. 34, no. 3, pp. 1659-1665,
2007.
[13] D. Fava, S. R. Byers and S. J. Yang,
"Projecting Cyber Attacks through Variable
Length Markov Models", IEEE Transactions
on Information Forensics and Security, vol. 3,
issue 3, September 2008.
[14] S. H. Zhang, Y. D. Wang and J. H. Han,
"Approach to forecasting multi-step attack
based on HMM", Computer Engineering,
vol. 34, no. 6, pp. 131-133, March 2008.
[15] H. Du, D. Liu, J. Holsopple and S. J. Yang,
"Toward Ensemble Characterization and
Projection of Multistage Cyber Attacks", in
Proceedings of IEEE ICCCN'10, Zurich,
Switzerland, August 2-5, 2010.
[16] S.-R. Duan and X. Li, "The anomaly intrusion
detection based on immune negative selection
algorithm", in Proceedings of the IEEE
International Conference on Granular
Computing (GRC '09), 2009.
[17] P. Matzinger, "Tolerance, Danger and the
Extended Family", Annual Review of
Immunology, vol. 12, 1994.
[18] U. Aickelin, P. Bentley, S. Cayzer, J. Kim and
J. McLeod, "Danger Theory: The Link between
AIS and IDS?", in Proceedings of the Second
International Conference on Artificial Immune
Systems, Edinburgh, U.K., September 2003.
[19] P. Matzinger, "The Danger Model: A Renewed
Sense of Self", Science, vol. 296, 2002.
[20] S. Gallucci and P. Matzinger, "Danger signals:
SOS to the immune system", Current Opinion
in Immunology, vol. 13, pp. 114-119, 2001.
[21] M. Mahboubian and N. A. W. A. Hamid, "A
Machine Learning based AIS IDS", in
Proceedings of GCSE 2011, Dubai.
[22] G. Xiang, X. Dong and G. Yu, "Correlating
Alerts with a data mining based approach", in
Proceedings of the 2005 IEEE International
Conference on e-Technology, e-Commerce
and e-Service.
[23] B. Cheng, G. Liao and C. Huang, "A novel
probabilistic matching algorithm for multi-
stage attack forecasts", IEEE Journal on
Selected Areas in Communications, vol. 29,
no. 7, August 2011.
[24] Capture the flag traffic dump,
http://www.defcon.org/html/links/dc-ctf.html.
[25] R. Sadoddin and A. A. Ghorbani, "An
incremental frequent structure mining
framework for real-time alert correlation",
Computers & Security, vol. 28, issues 3-4,
May-June 2009, pp. 153-173, ISSN 0167-4048,
10.1016/j.cose.2008.11.010.
Trust Measurements Yield Distributed Decision Support in Cloud
Computing
1 Edna Dias Canedo, 1 Rafael Timóteo de Sousa Júnior, 2 Rhandy Rafhael de Carvalho and
1 Robson de Oliveira Albuquerque
1 Electrical Engineering Department, University of Brasília (UnB), Campus Darcy Ribeiro,
Asa Norte, Brasília, DF, Brazil, 70910-900
2 Informatics Institute (INF), Federal University of Goiás (UFG), Campus Samambaia,
Bloco IMF I, Goiânia, GO, Brazil, 74001-970
ednacanedo@unb.br, desousa@unb.br, rhamoy@gmail.com, robson@redes.unb.br
ABSTRACT

This paper proposes a trust model to
ensure reliable file exchange among the
users of a private cloud. To validate the
proposed model, a simulation environment
built with the CloudSim toolkit was used.
Running simulations of the adopted
scenarios allowed us to calculate the trust
table of the nodes (virtual machines) and
select those considered more reliable; to
identify that the metrics we adopted
directly influence the measurement of trust
in a node; and to verify that the proposed
trust model effectively allows the selection
of the most suitable machine to perform
the exchange of files.

KEYWORDS

Distributed systems; cloud computing;
availability; file exchange; trust model.

1 INTRODUCTION

The development of virtualization
technologies allows the on-demand,
scalable sale of resources and computing
infrastructure capable of sustaining web
applications. Thus cloud computing was
born, generating an increasing demand for
applications that can be accessed
efficiently, independent of their location.
The arrival of this technology creates the
need to rethink how applications are
developed and made available to users,
while also motivating the development of
technologies that can support its
enhancement.
Since IBM Corporation announced its
cloud computing program at the end of
2007, other major information technology
(IT) companies have progressively
adopted clouds: for example, Google App
Engine, which lets you create and host
web applications on the same systems that
power Google's own applications; Amazon
Web Services (AWS), from Amazon, one
of the first companies providing cloud
services to the public; Amazon Elastic
Compute Cloud (EC2), which lets users
rent virtual machines on which they can
run their own applications, with complete
control over their computational resources;
Amazon Simple Storage Service (S3),
which allows the storage of files; Apple
iCloud; and the Azure Services Platform
from Microsoft, which introduced cloud
computing products [1]. However, cloud
computing also presents risks related to
data security in its different aspects, such
as confidentiality, integrity and
authenticity [2-4].
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 140-151
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)
140

This paper proposes a trust model for the
exchange of files between peers in a
private cloud. A private cloud computing
environment allows us to work with a
specific context of file distribution, in
which the files have a desired distribution
and availability, and the cloud manager
can guarantee that access is restricted and
that the identification of nodes is unique
and controlled.
In the proposed model, the choice of the
most reliable node takes its availability
into account. The selection of nodes and
the evaluation of their trust values, which
determine whether a node is reliable or
not, are performed according to the node's
storage system, operating system,
processing capacity and link. Trust is
established based on requests and queries
exchanged between the nodes of the
private cloud.
This paper is organized as follows. In
Section 2, we present an overview of the
concepts of trust and reputation. In
Section 3, we review related work on
security, file systems and trust in the
cloud. In Section 4, we introduce the
proposed trust model and practical results.
Finally, we conclude with a summary of
our results and directions for future
research.

2 TRUST

The concepts of trust, trust models and
trust management have been the object of
several recent research projects. Trust is
recognized as an important aspect of
decision-making in distributed and self-
organizing applications [5-6]. In spite of
that, there is no consensus in the literature
on the definition of trust or on what trust
management encompasses. In the
computer science literature, Marsh [5] was
among the first to study computational
trust. Marsh [5] provided a clarification of
trust concepts, presented an implementable
formalism for trust, and applied a trust
model to a distributed artificial
intelligence (DAI) system in order to
enable agents to make trust-based
decisions.
The main definitions of trust focused on
the human aspect are based on
relationships between individuals, clearly
demonstrating the relationship between
trust and the feeling of security [7-8].
Thus, trust in the human aspect is related
to a feeling of security within a particular
context, satisfying the expectation that a
problem is likely to be solved [7-8].
The process of trusting an individual is the
result of numerous analyses that together
generate the definition of trust. Trust (or,
symmetrically, distrust) is a particular
level of subjective probability with which
an agent believes that another agent or
group of agents will perform a particular
action, whether or not the first agent can
monitor that action (or independently of
its capacity to monitor it), and in a context
in which it affects the first agent's own
action [8].
Trust is also defined in [8] as the most
important social concept assisting humans
to cooperate in their social environment,
and it is present in all human interactions.
In general, without trust (in other humans,
agents, organizations, etc.) there is no
cooperation, and therefore there is no
society. Analogously, trust can be treated
as the probability that an agent will behave
so as to perform a given action expected
by another agent.
An agent can check the execution of a
requested action (if its capacity allows it),
inside a context in which the achievement
of the expected action will affect the
agent's own action (involving a decision).
So if someone is trustworthy, it means that
the probability that this person will
perform an action considered beneficial in
some way is high enough for cooperation
to be considered. In the opposite situation,
the agent simply believes that the
probability is low enough for cooperation
to be avoided.
Gambetta [8] proposes that trust is related
to cooperation, making cooperation
important for the acquisition of trust. If
trust is unilateral, cooperation cannot
succeed. For example, if there is only
mistrust between two agents, then there is
no cooperation between them at all, so
they cannot perform an operation together
to solve a problem. Similarly, if there is a
high level of trust, there is probably a high
degree of cooperation among agents to
solve a particular problem.
Jøsang et al. [9] define trust as the
subjective probability with which an
individual A expects that another
individual B will perform a given action
on which A's welfare depends. This
definition includes the concepts of
dependence on, and reliability
(probability) of, the trusted party, as seen
by the relying party.
Using trust, an entity P may request
information about another entity from the
entities it already knows. Imagine that
entity P needs some information about an
entity with which it has not yet interacted
(entity S). P can ask the entities with
which it has a relationship whether one of
them knows entity S, and what their
opinion of it is (based on experiences and
relationships already carried out with
entity S), providing an idea of the
reputation of entity S from the point of
view of the queried entities.
In a scenario where an entity knows
several other entities but does not know
one specific entity (say, R does not know
entity Z), it can send a question about that
unknown entity to its related entities and
wait for their answers. If one of those
entities knows the investigated entity, it
returns a response to the requesting entity
reporting its opinion about the unknown
entity.
Figure 1 presents the trust relation. From
the reviews about the behavior of an
entity, the calculation of trust can be
performed, based on a model; from the
obtained result, a relationship decision is
made, which determines whether or not an
entity will relate to another entity in a
given context.

Figure 1 - Trust Relation
2.1 Reputation

Reputation comes into play in a scenario
where there is not enough information to
infer whether an entity is reliable or not
[10]; to reach such an inference, an entity
asks for the opinion of other entities. From
the information obtained from the
questioned entities, the requesting entity
calculates reputation together with its own
information, based on its trust values and
on the information obtained from third
parties (weighted by the degree of trust in
them). With the necessary information, the
entity assesses the context of the situation
itself and is able to reach a reputation
value. The reputation calculation is
obtained by analyzing the behavior of an
entity over time.
In the computing scenario, the literature
on trust indicates that reputation may have
a strong influence on the calculation of
trust [8, 10]: trust can be interconnected
with reputation in the generation of trust
values, and these values are subject not
only to the perception of an entity's
behavior, but also to self-evaluation by
those interested in some kind of
interaction in a given context.

3 SECURITY IN THE CLOUD

Privacy and security have been shown to
be two important obstacles concerning
the general adoption of the cloud
computing paradigm. In order to solve
these problems in the IaaS service layer,
a model of trustworthy cloud computing
which provides a closed execution
environment for the confidential
execution of virtual machines was
proposed [11]. The proposed model,
called Trusted Cloud Computing
Platform (TCCP), is supposed to provide
higher levels of reliability, availability
and security. In this solution, there is a
cluster node that acts as a Trusted
Coordinator (TC). Other nodes in the
cluster must register with the TC in
order to certify and authenticate its key
and measurement list. The TC keeps a
list of trusted nodes. When a virtual
machine is started or a migration takes
place, the TC verifies whether the node
is trustworthy so that the user of the
virtual machine may be sure that the
platform remains trustworthy. A key and
a signature are used for identifying the
node. In the TCCP model, the private
certification authority is involved in each
transaction together with the TC [11].
Shen et al. [12] presented a method for
building a trustworthy cloud computing
environment by integrating a Trusted
Computing Platform (TCP) to the cloud
computing system. The TCP is used to
provide authentication, confidentiality
and integrity [12]. This scheme
displayed positive results for
authentication, rule-based access and
data protection in the cloud computing
environment.
Zhimin et al. [13] propose a
collaborative trust model for firewalls in
cloud computing. The model has three
advantages: a) it uses different security
policies for different domains; b) it
considers the transaction contexts,
historic data of entities and their
influence in the dynamic measurement
of the trust value; and c) the trust model
is compatible with the firewall and does
not break its local control policies.
A model of domain trust is employed.
Trust is measured by a trust value that
depends on the entity's context and
historical behavior, and is not fixed. The
cloud is divided into a number of
autonomous domains, and the trust
relations among the nodes are divided into
intra- and inter-domain trust relations.
The intra-domain trust relations are
based on transactions operated inside the
domain. Each node keeps two tables: a
direct trust table and a recommendation
list. If a node needs to calculate the trust
value of another node, it first checks the
direct trust table and uses that value if
the value corresponding to the desired
node is already available. Otherwise, if
this value is not locally available, the
requesting node checks the
recommendation list in order to
determine a node that has a direct trust
table that includes the desired node.
Then it checks the direct trust table of
the recommended node for the trust
value of the desired node.
The inter-domain trust values are
calculated based on the transactions
among the inter-domain nodes. The
inter-domain trust value is a global value
combining the nodes' direct trust values
and the recommended trust values from
other domains. Two tables are maintained
by the Trust Agents deployed in each
domain: the table of inter-domain trust
relationships and the weight value table
of the domain's nodes.
In [14] a trusted cloud computing
platform (TCCP) which enables IaaS
providers to offer a closed box execution
environment that guarantees confidential
execution of guest virtual machines
(VMs) is proposed. This system allows a
customer to verify whether its
computation will run securely, before
requesting the service to launch a VM.
TCCP assumes that there is a trusted
coordinator hosted in a trustworthy
external entity. The TCCP guarantees
the confidentiality and the integrity of a
user's VM, and allows a user to
determine up front whether or not the
IaaS provider enforces these properties.
The work [15] evaluates a number of
trust models for distributed cloud
systems and P2P networks. It also
proposes a trustworthy cloud
architecture (including trust delegation
and reputation systems for cloud
resource sites and datacenters) with
guaranteed resources including datasets
for on-demand services.

4 TRUST MODEL FOR FILE
EXCHANGE IN PRIVATE CLOUD

According to the review and related
research [3-11, 13-16], it is necessary to
employ a cloud computing trust model to
ensure the exchange of files among
cloud users in a trustworthy manner. In
this section, we introduce a trust model
to establish a ranking of trustworthy
nodes and enable the secure sharing of
files among peers in a private cloud.
The private cloud computing environment
was chosen because we work with a
specific context of file distribution, where
the files have a desired distribution and
availability.
We propose a trust model where the
selection and trust value evaluation that
determine whether a node is trustworthy
are performed based on the node's storage
space, operating system, link and
processing capacity. For example, even if
a given client has access to storage space
in a private cloud, it still has no selection
criterion to determine to which cloud node
it should send a particular file. When a
node wants to share files with other users,
it selects trusted nodes to store the file
according to the following proposed
metrics: processing capacity (the average
workload processed by the node; for
example, if the node's processing capacity
is 100% utilized, it will take longer to
attend to any demands), operating system
(an operating system with a history of
lower vulnerability will be less susceptible
to crashes), storage capacity, and link
(better communication links and storage
resources imply greater trust values, since
they increase the node's capacity for
transmitting and receiving information).
The trust value is established based on
queries sent to nodes in the cloud,
considering the metrics previously
described.
Each node maintains two trust tables: the
direct trust table and the recommendation
list:
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 140-151
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)
144

a) If a node needs to calculate the trust
value of another node, it first checks its
direct trust table and uses the trust value
if a value for that node exists. If this
value is not available yet, the
recommendation lists are checked to find
a node that has a direct trust relationship
with the desired node, and the direct trust
value from that node's direct trust table
is used. If there is no value attached, the
node sends a query to its peers requesting
information on their storage space,
processing capacity and link.
The trust values are calculated based on
queries exchanged between nodes.
b) The requesting node will assign a
greater trust value to nodes having
greater storage and/or processing
capacity and a better link. In addition,
the operating system will also be
considered as a criterion of trust.
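The lookup order in (a) and (b) can be sketched as follows; `query` and `score` stand in for the cloud query and the metric-based evaluation, and all names here are illustrative assumptions rather than the paper's implementation:

```python
def trust_value(node, target, recommenders, query, score):
    """Resolve a trust value: own direct trust table first, then the
    recommendation lists, and only then a fresh query to the cloud."""
    if target in node["direct"]:                 # (a) direct experience
        return node["direct"][target]
    for rec in recommenders:                     # (a) recommender's table
        if target in rec["direct"]:
            return rec["direct"][target]
    info = query(target)                         # (a) ask about storage,
    value = score(info)                          #     processing and link
    node["direct"][target] = value               # cache for future use
    return value
```

The caching of freshly computed values is what lets the direct trust table grow as interactions accumulate.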
In this model, it is assumed that each node
has a unique identity on the network. As
trust is evolutionary, when a new node
joins the network the requesting node does
not yet know it, so the other network
nodes will be asked about its reputation. If
no node has information about the
respective node (none has had any
experience with it), the requesting node
will decide whether to relate to the
requested node, initially assigning it some
activity or demand to run. From its
answers, trust in that node will be built. A
node's trust table will contain a timer
(recording behavior and events that raise
or lower the trust of a given node) and will
be updated at certain times.
Figure 2 presents a high-level view of the
proposed trust model, in which the nodes
query their peers to obtain the information
needed to build their local trust tables.
In this model, a trust rank is established,
allowing a node A to determine whether
it is possible to trust a node B to perform
storage operations in a private cloud. In
order to determine the trust value of B,
node A first has to obtain basic
information about this node.
When node A needs to exchange a file in
the cloud and wants to know whether node
B can be trusted to receive and store the
file, it uses the proposed trust model
protocol, which can be described by the
following scenario:
In step 1, node A sends a request to the
nodes of the cloud, including node B,
asking about their storage capacity,
operating system, processing capacity and
link.

Figure 2 - High Level Trust Model
In step 2, the nodes, including node B,
send responses providing the requested
information.
In step 3, node A evaluates the
information received from B and from all
the other nodes. If the information
provided by B is consistent with what is
expected, i.e. with the average value of the
information from the other nodes, the
values are stored in node A's local
recommendations table; the trust
calculation is then performed and the
result stored in its local trust table.
The trust value of a node indicates its
disposition/suitability to perform
operations between peers of the cloud.
This value is calculated based on the
history of interactions/queries between the
nodes, and ranges within [0, 1].
In general, trust of node A in node B, in
the context of a private cloud NP, can be
represented by a value V which
measures the expectation that a
particular node will have good behavior
in the private cloud, so trust can be
expressed by:
T_np(a, b) = V_np(a, b)    (1)

where T_np(a, b) represents the trust of A
in B in the private cloud NP, and
V_np(a, b) represents the trust value of B
in the private cloud NP as analyzed by A.
According to the definition of trust,
V_np(a, b) is equivalent to the queries sent
and received (the interactions) by A in
relation to B in cloud NP. As the
interactions take place between the nodes
of the private cloud, this information is
used for the calculation of trust.
Nodes of a private cloud should be able to decide whether a trust value is acceptable, producing a trust level. If a node exceeds this level within the set of analyzed values, the evaluating node must be able to judge it with a certain degree of trust. The trust degree can vary according to a qualitative evaluation: a node has very high trust in another, a node has low trust in another, a node does not have sufficient criteria to opine, a node trusts enough to opine, and so on. In our model, one node trusts another from a trust value T ≥ 0.6 [5].
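These qualitative degrees can be sketched as a simple classifier in Python. Only the T ≥ 0.6 acceptance threshold is fixed by the model [5]; the other band boundaries are illustrative assumptions.

```python
def trust_degree(t: float) -> str:
    """Map a trust value in [0, 1] to a qualitative trust degree.

    The 0.6 threshold follows the model described in the text; the
    remaining band boundaries are assumptions for illustration.
    """
    if not 0.0 <= t <= 1.0:
        raise ValueError("trust value must lie in [0, 1]")
    if t >= 0.9:
        return "very high trust"
    if t >= 0.6:
        return "trusted"  # a node trusts another from T >= 0.6
    if t >= 0.3:
        return "low trust"
    return "insufficient criteria to opine"
```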
The trust values are calculated from queries between the nodes of NP, which provide the information needed for the final calculation of trust. The trust information is stored as individual records of the interactions with each node, so that the local database holds information about the behavior of every node in the cloud that wants to exchange a file (the local trust table and the local recommendations table).
Four aspects impact the calculation of a node's direct trust. Storage capacity and processing capacity carry more weight in the choice of a more reliable node, because these features are responsible for ensuring file storage and integrity. To calculate the direct trust of a node, the administrator of the private cloud assigns the weights: 35% each to storage capacity and processing capacity, 15% to link and the remaining 15% to operating system.
Since a node's trust value ranges over [0, 1] and these values vary over time (a node can have its storage capacity increased or decreased), it is necessary that trust reflects the behavior of a node in a given period of time. Nodes with constant characteristics should therefore be more reliable, because they show less variation in their basic characteristics. According to the weights attributed, it is possible to calculate the trust of a node.
The calculation of the trust of node A in B in cloud NP is represented by:

T^{fnp}_{(a,b)} = (1/j) * Σ_{i=1..j} ((V_{np}(b, m1) * 0.35) + (V_{np}(b, m2) * 0.35) + (V_{np}(b, m3) * 0.15) + (V_{np}(b, m4) * 0.15))    (2)

T^{fnp}_{(a,b)} represents the final trust of A in B in cloud NP. The trust value of B is defined as the sum of the values of the metrics (m) that node B has in the cloud NP; j represents the number of trust interactions of node A with B in the cloud NP, where j > 0.
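Equation (2) can be sketched in Python. The paper fixes only the weights (0.35 storage, 0.35 processing, 0.15 link, 0.15 operating system) and the averaging over j > 0 interactions; the metric names and their normalization into [0, 1] are assumptions of this sketch.

```python
# Weights assigned by the private cloud administrator (Section 4).
WEIGHTS = {"storage": 0.35, "processing": 0.35, "link": 0.15, "os": 0.15}

def direct_trust(interactions):
    """Final trust of A in B: the weighted metric values of B, averaged
    over the j > 0 recorded interactions (equation (2)).

    `interactions` is a list of dicts mapping each metric name to a
    value already normalized into [0, 1].
    """
    j = len(interactions)
    if j == 0:
        raise ValueError("equation (2) requires j > 0 interactions")
    total = sum(sum(m[name] * w for name, w in WEIGHTS.items())
                for m in interactions)
    return total / j
```

A node reporting ideal values for every metric in every interaction would reach the maximum trust of 1.0.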

4.1 Description of the Simulated
Environment

In order to demonstrate the proposed objectives, it is necessary to define a simulation environment capable of measuring and validating the metrics used, with the expectation of achieving results consistent with the parameters and criteria for reliable information used in this work.
Furthermore, the simulation environment serves as a basis for further discussion, as well as for the evolution of this proposal toward new cloud computing environments.
Through the implementation of the
simulation environment, it's possible to
discuss and analyze the required
parameters for a trust model in a private
cloud, evaluate the generation of the
local trust table of the nodes, as well as
the effectiveness of the adopted metrics,
and finally generate results that serve to
discuss the problem of reliable exchange
of files among peers in a private cloud.
The CloudSim simulation environment reproduces the interaction between an Infrastructure as a Service (IaaS) provider and its customers [17]. The simulation scenarios of this work, built with the CloudSim framework, comprise an IaaS provider with three datacenters and a client that consumes this service.
The client uses the resources offered by the provider to send and allocate virtual machines that perform a set of tasks, called cloudlets. The dynamic choice of data center for sending and allocating virtual machines and for executing cloudlets is defined by the client's utilization profile and the resources offered by the provider. Thus, the scenario simulated in this work consists of an IaaS provider with three datacenters distributed in different locations (Goiânia-GO, Anápolis-GO and Brasília-DF), a customer with a usage profile, 4 hosts, 30 VMs and 100 cloudlets.




4.1.1 Results and analysis

Once the CloudSim simulation environment is defined and configured and the weights of the metrics are assigned, the trust of a node can be calculated by running the scenarios implemented in the framework.
To perform the simulation of the proposed environment, it is initially necessary to define the settings considered ideal for a reliable machine, and then to define the baseline machine configuration against which the other virtual machines of the simulation environment are compared. Since, in the context of this application, tasks are small and of low complexity, the baseline configuration used is the one defined by the Amazon standard [18], staying as close as possible to the cost-benefit found in real clouds, where machine configurations are compatible with the charges and services offered.
The configuration used in this work is
shown in Table 1.
Table 1. Configuration of the Baseline Machine [17]

Parameter        Ideal Value
HD Size          163840 MB
RAM Size         1740 MB
MIPS             5000
Bandwidth        1024 Kbytes

In order to make comparisons and
analysis of the results in various
scenarios, several simulations were
performed during the proposed work.
The trust of a virtual machine in the simulated model increases much as it does between human beings: when an individual successfully performs an activity or solves a particular problem for us, our trust in that person grows gradually. Thus, for each cloudlet successfully executed, the trust value of a VM is increased by 2.5% until the trust level reaches 0.85. Above 0.85, trust increases by 5% until it reaches the maximum of 1.0.
If a machine does not perform a given task successfully, that is, it does not solve its problem, it loses trust. The weight of suspicion is usually greater than the weight of trust; thus, in our simulated model the suspicion rate is 5% for each task performed without success.
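The reward and penalty rules above can be sketched together. Treating the percentages as additive steps on the [0, 1] scale is an assumption of this sketch; a multiplicative reading of the rates is also possible.

```python
def update_trust(trust: float, success: bool) -> float:
    """Update a VM's trust after one cloudlet: +2.5% per success below
    0.85, +5% per success above 0.85, and -5% per failure, clamped to
    the interval [0, 1]."""
    if success:
        trust += 0.025 if trust < 0.85 else 0.05
    else:
        trust -= 0.05  # suspicion weighs more than trust
    return min(max(trust, 0.0), 1.0)
```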
In an attempt to simulate an environment closer to reality, a scenario was conducted in which the cloudlets are not all executed successfully, allowing virtual machines to change their behavior over time and better reflecting a real private cloud computing environment. Unsuccessful tasks are chosen randomly: a task fails when a drawn random number is higher than 0.8, meaning that the probability of a successful task in this scenario is 80%. The simulation scenario can be changed as desired.
Analyzing the results of the simulations, it is possible to identify the trust level of the virtual machines that executed the cloudlets. According to the reference information, a node trusts another from a trust value T ≥ 0.6.
In the simulation of the proposed scenario, some machines did not execute cloudlets because, compared with the baseline machine, they did not fulfill the conditions required of a reliable machine. Table 2 presents the virtual machines that executed cloudlets; the remaining virtual machines executed none because they did not satisfy the desired trust level. The simulation result is shown in Figure 3.
Table 2. Cloudlets/Tasks Performed by Virtual Machines, with and without Success.

Virtual Machine   Successful   Unsuccessful   Total
VM 03             00           01             01
VM 04             12           02             14
VM 05             08           02             10
VM 06             09           06             15
VM 07             01           02             03
VM 08             08           02             10
VM 13             03           01             04
VM 14             00           01             01
VM 15             07           01             08
VM 16             00           01             01
VM 24             00           01             01
VM 25             12           02             14
VM 26             13           01             14
VM 27             01           02             03
VM 28             00           01             01

Figure 4 presents the trust level of virtual machine 09, which did not execute any cloudlet during the simulation, so there is no variation in its graph. All machines that executed no task have similar graphs.
Figure 5 presents the trust threshold of virtual machine 15 after changing its processing capacity (HD and RAM). During the simulation, VM 15 executed 07 tasks/cloudlets successfully and 01 unsuccessfully. The variation in VM 15's trust level was calculated according to these successful and unsuccessful interactions: for every successful interaction the trust value is increased by 2.5%, and for each unsuccessful interaction the value is decremented by 5% of the threshold, as per the established weights.
Evaluating the results obtained after changing both parameters of VM 15's configuration, it is also possible to see that the whole simulated scenario changed, impacting not only the modified machine but the other virtual machines as well. Moreover, the number of tasks/cloudlets executed after changing the two parameters was very close to the result obtained by changing only the storage capacity. The results indicate that processing capacity has the greater impact on the simulation results.

Figure 3 - Trust of the Virtual Machines after Task Execution.

Figure 4 - Trust of Virtual Machine 09 after 0 Task Executions.
The initial trust threshold value of virtual machine 15 was 0.5935552586206897 and the final value was 0.7442351812748581, as presented in Table 3.

Figure 5 - Trust of Virtual Machine 15 after 8 Task Executions.
Table 3. Trust of Virtual Machine 15 Running 07 Cloudlets with Success and 01 without Success.

Task Number   Trust Threshold of Virtual Machine 15 at Each Interaction
32            0.5935552586206897
42            0.6185552586206897
50            0.6402076105818058
54            0.6864683466109688
67            0.6514683466109688
75            0.6602111892581934
86            0.7098601812748581
95            0.7442351812748581

5 CONCLUSIONS

Cloud computing has been the focus of several recent studies, which demonstrate the importance and necessity of a trust model to ensure the reliable and secure exchange of files. It is a promising area to be explored through research and experimental analysis, using computational trust to mitigate existing problems in aspects related to security, trust and reputation. The aim is to guarantee the integrity of the information exchanged in private cloud environments, reducing the possibility of failure or alteration of information during the exchange of files, by means of metrics able to represent or map the trust level of a network node for the exchange of files in a private cloud.
The proposal discussed in this paper, to develop a new trust model for the trusted exchange of files in a private cloud computing environment using the concepts of trust and reputation, seems promising, given the problems and vulnerabilities related to security, privacy and trust that a cloud computing environment presents.
The simulations and their results make it possible to identify which of the adopted metrics directly influence the calculation of a node's trust. Future simulations in a real environment will allow evaluating the behavior of nodes in a private cloud computing environment, as well as the history of their interactions and the values assumed throughout the execution of the machines.
The use of the open platform CloudSim [17] to execute the simulations of the adopted scenarios made it possible to calculate the trust table of the nodes (virtual machines) and to select those considered most reliable. Furthermore, the adequacy of the metrics used was evaluated within the proposed trust model, allowing the identification and selection of the most appropriate ones in relation to the historical behavior of the nodes in the analyzed environment.

6 REFERENCES

1. Zhang Jian-jun and Xue Jing. A Brief Survey on the Security Model of Cloud Computing, 2010 Ninth International Symposium on Distributed Computing and Applications to Business, Engineering and Science (DCABES), Hong Kong, IEEE, pp. 475-478, 2010.
2. Wang Han-zhang and Huang Liu-sheng.An
improved trusted cloud computing platform
model based on DAA and Privacy CA
scheme, IEEE International Conference on
Computer Application and System Modeling
(ICCASM 2010). 978-1-4244-7235-2, 2010.
3. Uppoor, S., M. Flouris, and A. Bilas.
Cloud-based synchronization of distributed
file system hierarchies, Cluster Computing
Workshops and Posters (CLUSTER
WORKSHOPS), IEEE International
Conference, pp. 1-4. 2010.
4. Popovic, K. and Z. Hocenski. Cloud
computing security issues and challenges,
MIPRO, 2010 Proceedings of the 33rd
International Convention, pp. 344-349, 24-
28 May 2010.
5. Stephen Paul Marsh, Formalising Trust as a
Computational Concept, Ph.D. Thesis,
University of Stirling, 1994.
6. Thomas Beth, M. Borcherding, and B.
Klein, Valuation of trust in open
networks, In ESORICS 94. Brighton, UK,
November 1994.
7. Lamsal Pradip. Understanding Trust and Security. Department of Computer Science, University of Helsinki, Finland, October 2001. Accessed 13/02/2006. Available: http://www.cs.helsinki.fi/u/lamsal/asgn/trust/UnderstandingTrustAndSecurity.pdf
8. Gambetta Diego. (2000). Can We Trust
Trust?, in Gambetta, Diego (ed.) Trust:
Making and Breaking Cooperative
Relations, electronic edition, Department of
Sociology, University of Oxford, chapter 13,
213-237.
9. Josang Audun, Roslan Ismail, Colin Boyd.
(2007). A Survey of Trust and Reputation
Systems for Online Service Provision.
Decision Support Systems. Volume 43 Issue
2, March. Elsevier Science Publishers B. V.
Amsterdam, The Netherlands, The
Netherlands.
10. Patel, Jigar. A Trust and Reputation Model
for Agent-Based Virtual Organizations.
Thesis of Doctor of Philosophy. Faculty of
Engineering and Applied Science. School of
Electronics and Computer Science.
University of Southampton. January. 2007.
11. Xiao-Yong Li, Li-Tao Zhou, Yong Shi, and
Yu Guo, A Trusted Computing
Environment Model in Cloud Architecture,
Proceedings of the Ninth International
Conference on Machine Learning and
Cybernetics, 978-1-4244-6526-2. Qingdao,
pp. 11-14. China. July 2010.
12. Zhidong Shen, Li Li, Fei Yan, and Xiaoping
Wu, Cloud Computing System Based on
Trusted Computing Platform, Intelligent
Computation Technology and Automation
(ICICTA), IEEE International Conference
on Volume: 1, pp. 942-945. China. 2010.
13. Zhimin Yang, Lixiang Qiao, Chang Liu, Chi Yang, and Guangming Wan, A collaborative trust model of firewall-through based on Cloud Computing, Proceedings of the 2010 14th International Conference on Computer Supported Cooperative Work in Design, Shanghai, China, pp. 329-334, 14-16, 2010.
14. Santos Nuno, K. Gummadi, and R.
Rodrigues, Towards Trusted Cloud
Computing, Proc. HotCloud. June 2009.
15. Chang. E, T. Dillon and Chen Wu, Cloud
Computing: Issues and Challenges, 24th
IEEE International Conference on Advanced
Information Networking and Applications
(AINA), pp. 27-33. Australia, 2010.
16. Kai Hwang, Sameer Kulkareni, and Yue Hu, Cloud Security with Virtualized Defense and Reputation-Based Trust Management, 2009 Eighth IEEE International Conference on Dependable, Autonomic and Secure Computing (DASC '09), pp. 717-722, 2009.
17. Calheiros, Rodrigo, N.; Rajiv Ranjan; Anton
Beloglazov; De Rose, Cesar, A. F.; Buyya,
Rajkumar. (2011). CloudSim: A Toolkit for
Modeling and Simulation of Cloud
Computing Environments and Evaluation of
Resource Provisioning Algorithms,
Software: Practice and Experience (SPE),
Volume 41, Number 1, 23-50, ISSN: 0038-
0644, Wiley Press, New York, USA,
January.
18. Amazon (2012). Amazon Web Services.
Accessed in 01/06/2012. Available:
http://aws.amazon.com/pt/ec2/instance-
types/.


International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 152-159
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)

A Formal Semantic Model for the Access Specification Language RASP

Mark Evered
School of Science and Technology
University of New England
Armidale, Australia
mevered@une.edu.au


Abstract: The access specification language RASP extends
traditional role-based access control (RBAC) concepts to provide
greater expressive power often required for fine-grained access
control in sensitive information systems. Existing formal models
of RBAC are not sufficient to describe these extensions.
In this paper, we define a new model for RBAC which formalizes
the RASP concepts of controlled role appointment and
transitions, object attributes analogous to subject roles and a
transitive role/attribute derivation relationship.
Keywords: security, access control, model, role, attribute
I. INTRODUCTION
In general, each of the users of an information system needs
to be able to view or manipulate only some of the information
stored in the system. Ideally, the appropriate access for each
user will be specified in the form of an access policy during the
analysis phase of the software development and then enforced
via access control mechanisms during the execution of the
implemented system. As the use of information systems for
sensitive data continues to grow in areas such as e-health, it is
becoming increasingly important, both for security and for
privacy reasons, that the specification of the access control is
precise and clear enough to express and satisfy strict minimal
(need-to-know) policy requirements. This ensures both that
valid users of a system will not misuse their access and that
intruders who have illegitimately managed to assume the
identity of a valid user will be restricted in what they can do
within the system. Both of these factors are vital for the
strengthening of cyber-security.
An access control policy can be understood as consisting of
two components. The first is control over the membership of
the subject groups of interest in the application domain. The
second is a mapping from each of these groups to permissions
which allow certain operations to be performed on the data by
members of the groups. These operations may just be read
and write as in traditional database systems or may be based
on the methods of object classes as first suggested in [8].
Both components of access control have been approached
in a number of different ways. In the simplest case, an access
control list (ACL) for each object contains an entry for each
subject or group of subjects. The owner of the object (or a
system administrator) can assign subjects to groups. More
recently, Role Based Access Control (RBAC) models have
been defined which allow the first component of access control
to be based on the roles played by individuals in the
organisations making use of an information system. This
means that there is a (dynamic) mapping from subjects to roles
and then a (relatively static) mapping from roles to
permissions. These models recognise the complex nature of
permissions in real organisations and have been shown to
subsume both conventional discretionary access control models
and mandatory access control models such as Bell-LaPadula
[1].
Formally, given:
S, a set of subjects,
R, a set of roles,
O, a set of objects, and
M, a set of operations (methods) on objects,
we can define an RBAC system as consisting of the pair:
(H, X)
where
H ⊆ S × R is a set of role assignments and
X ⊆ R × O × M is a set of permissions.
A pair (s, r) ∈ H specifies that the subject s has the role r, while a triple (r, o, m) ∈ X specifies that a subject with the role r can access the object o via the method m.
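The pair (H, X) yields a direct membership test. The subjects, roles, objects and methods below are hypothetical, chosen only to exercise the check.

```python
# H: role assignments (subject, role); X: permissions (role, object, method).
H = {("alice", "deptHead"), ("bob", "staff")}
X = {("deptHead", "budget", "approve"),
     ("staff", "budget", "read")}

def can_access(subject, obj, method):
    """A subject may invoke `method` on `obj` iff one of its roles
    carries the corresponding permission triple in X."""
    roles = {r for (s, r) in H if s == subject}
    return any((r, obj, method) in X for r in roles)
```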
While RBAC is an improvement over an ACL approach,
case studies such as [2][6][18] have demonstrated that the
access control requirements of real-world information systems
are considerably more complex than the simple role-based
approach described above can handle. For this reason, a
number of different RBAC models have been proposed with
varying degrees of additional expressive power. These
additions include role hierarchies [15], parameterized roles [7]
and control over role acquisition [17].
One very useful extension, as implemented in access
control systems such as [9], is to allow objects to be labeled
with attributes in much the same way that subjects acquire
roles. Formally, we introduce the additional set:
𝒜, a set of attributes with which objects can be labeled.
The permissions in such a system then give access to an
object on the basis of it having a particular attribute or set of
attributes rather than to objects directly. This is in line with the
argument in [3] that objects and environments need roles just
as subjects do.
The author has defined an access control specification
language called RASP (Role and Attribute-based Specification
of Protection) [5], which is based on both roles of subjects and
attributes of objects and which gives fine-grained control over
initial role and attribute acquisition as well as subsequent
transitions. In this paper we give a formal definition for an
access control model which supports the RASP extensions to
RBAC. In particular, it supports:
Subject roles
Object attributes
Control over appointment to roles
Control over labeling of objects with attributes
Control over dynamic acquisition of further roles
and attributes
The model uses a transitive approach which supports role
hierarchies, appointment based on external certificates and role
and attribute revocation. No existing RBAC model has the
expressive power to support these requirements.
The following section discusses related work on role-based
and attribute-based access control while section III gives a brief
overview of the access control specification language RASP.
Section IV gives a formal definition of the access rules in our
model and section V defines the instantaneous state of the
RBAC model together with the four operations for
transforming the state. Section VI describes the transitive
acquisition of roles and attributes and defines the function
allow for checking whether an operation on an object is
permitted. Section VII defines some further useful constructs
of RASP and section VIII addresses some issues of efficient
implementation. We conclude with a summary of the findings
and contributions of the paper.
II. RELATED WORK
Both the object-based access control paradigm [8] and the
role-based access control paradigm [14] are well-known
approaches as is the combination of the two to define access to
an object in terms of the methods which can be invoked by
subjects acting in a certain role. A number of significant
extensions to the basic RBAC model have been suggested in
order to adequately handle the complexities of minimal access
control requirements in real-world scenarios. These include
role hierarchies [15] and role parameters [7].
A question which has received much less attention is how
to group objects so that the access constraints for the whole
group can be specified in a single place rather than repeating
them for each and every object. The Ponder policy
specification language [4] supports a hierarchical structure of
domains and sub-domains of objects similar to a file system
hierarchy. The leaves of the tree are references to objects rather
than the objects themselves so that an object can appear in a
number of different domains. This approach assumes that the
domains are relatively static and that an administrator will
place objects into domains via some mechanism external to the
language. Case studies have shown, however, that the domains
of an object may depend on object attributes which change in
the same way that the role of a subject may change. These
transitions require the same level of specification as to who can
effect the change as is required for role changes. The approach
of Generalized Role-Based Access Control [3] recognizes the
need for symmetry between subject roles and object roles but
does so on the basis of a very simple model which does not
support role parameters or control over role transitions.
Attribute-based access control (ABAC) [19][20] was
developed to support access to web services based on provable
attributes of a user rather than the identity of the user. This is
important for anonymity in using such services but is not
appropriate for organizations or systems where fine-grained
access-control policies are based on identity and roles. ABAC
has been extended to include attributes for resources as well as
subjects but does not address attribute transitions.
A further important question concerns the acquisition of
access rights. Ponder is a delegation-based system. It provides
for delegation policies which limit which access rights a
subject can pass to another subject but the basic assumption is
that the possessor of a right decides if and when another subject
should gain that right. Case studies show that it is often
necessary that access rights be granted by someone who does
not possess them him/herself. The OASIS Role Definition
Language [17] allows for this kind of appointment-based
acquisition of access rights and for role acquisition pre-
conditions based on external certificates known as auxiliary
credential certificates. OASIS RDL does not however allow for
a distinction between the case where a new role is replacing a
previous role and the case where the new role is additional.
This distinction has been found to be useful both for role
transitions and for object attribute transitions. OASIS RDL also
does not allow for the generation of new credential certificates
as a result of operations performed within the system.
Ponder supports both positive and negative authorizations.
In fact, it has two forms of negative access control clause:
negative authorization policies and refrain policies. So, for
example, a set of access rights can be granted to a group of
subjects via a positive authorization policy and then one of the
rights can later be revoked from a certain member of the group
via a negative authorization policy. Negative authorizations
lead to the problem of potential inconsistencies and loopholes
in an access control system. A more elegant way to express this
kind of partial revocation is to use role transition to transfer a
subject from one role into a new role which has a more
restricted set of rights.
The access control specification languages and mechanisms
described in this section represent the state-of-the-art in fine-
grained access control. Many of them have no formal definition
at all and none of them can support all of the requirements
which case studies show to be required. A formal definition of
role hierarchies is given in [15] and role parameters are formally defined in [7], but no formal definition of a symmetric approach to role and attribute transitions has been given in the literature to date.
III. OVERVIEW OF RASP
The three main constructs of the RASP access specification
language are the appoint clause, the attribute clause
and the allow clause. The appoint clause specifies that a
subject with a certain role can appoint someone else to have a
certain role. The precondition for this is that the person being
appointed already possesses a certain role before the
appointment. So, for example:
appoint manager: staff -> deptHead;
expresses the appointment rule that someone who is a manager
can appoint someone who is already a staff member to be a
head of department.
The attribute clause is used to label an object in the
system with a certain attribute. This is done by specifying what
role a subject must possess to be able to do this and the
precondition that the object must already have a certain
attribute. So, for example:
attribute admin: document -> obsolete;
expresses the attribute rule that someone who is an
administrator can label a document as being obsolete.
For both the appoint and the attribute clauses, the
transition symbol -> indicates that the old role or attribute
should be retained in addition to the new one, whereas the
transition symbol /-> can be used to express that the old
role or attribute should be relinquished.
The third main construct is the allow clause. This
specifies that someone with a certain role can invoke a certain
operation on objects with a certain attribute or set of attributes.
So, for example:
allow deptHead!obsolete.delete;
expresses the access rule that a head of department can delete
an obsolete document.
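The allow clause can be read as a membership test over a subject's roles and an object's attributes. The rule below mirrors the deptHead!obsolete.delete example; the subject, object and their role/attribute assignments are hypothetical.

```python
# allow deptHead!obsolete.delete; modelled as (role, required attributes, method).
allow_rules = {("deptHead", frozenset({"obsolete"}), "delete")}

subject_roles = {"carol": {"deptHead"}}
object_attrs = {"report.doc": {"document", "obsolete"}}

def allowed(subject, obj, method):
    """True iff the subject holds a role whose rule names this method and
    whose required attributes are all carried by the object."""
    return any(role in subject_roles.get(subject, set())
               and attrs <= object_attrs.get(obj, set())
               and m == method
               for (role, attrs, m) in allow_rules)
```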
RASP also provides a conflict clause which can be
used to express the rule that two roles are in conflict with each
other (if possessed by the same subject at the same time) and a
unique clause which expresses the rule that a certain role
may only be possessed by one subject at a time.
This overview of RASP will suffice for the purposes of this
paper but for more detail on the rationale for and the design of
the RASP language, the reader is referred to [5]. A summary of
the syntax of the constructs discussed in this paper can be
found in Appendix A.
IV. ACCESS RULES
We now extend the RBAC formalism sketched in the
introduction to a more powerful model which is capable of
expressing the semantics of the RASP constructs described
above. In this section, we define the relatively static aspect of
our access control model, i.e. what access does a subject with a
certain role have to an object with a certain set of attributes,
who has the authority to appoint subjects to roles and who has
the authority to label objects with attributes.
We define this as the 5-tuple:
(X, P, T, L, U)
where
X ⊆ R × 2^𝒜 × M is a set of permissions,
P ⊆ R^3 is a set of appointment rules,
T ⊆ R^3 is a set of role transition rules,
L ⊆ R × 𝒜^2 is a set of attribute labeling rules and
U ⊆ R × 𝒜^2 is a set of attribute transition rules.
A permission triple (r, A, m) ∈ X, with A ⊆ 𝒜, specifies that a subject with the role r can access an object via the method m if that object has all of the attributes in the set A. Examples are:
(admin, {thisFacility, patientPersonalDetails}, update)
(secretCleared, {secret}, read)
These express the semantic value of the RASP syntax:
allow admin!{thisFacility, patientPersonalDetails}.update;
and
allow secretCleared!secret.read;
respectively (given the obvious mapping from an identifier admin to the role r_admin, etc.).
An appointment triple (r1, r2, r3) ∈ P specifies that a subject with the role r1 can appoint a subject with the role r2 to additionally have the role r3. In this context, we denote the null role (always possessed by all subjects) as ⊥. So, for example, we can have:
(manager, ⊥, employee)
(manager, employee, admin)
(manager, doctor, doctorAtThisFacility)
These express the semantic value of the RASP syntax:
appoint manager: someone -> employee;
appoint manager: employee -> admin;
appoint manager: doctor -> doctorAtThisFacility;
where the identifier someone is used to denote the null role ⊥.
Similarly, an attribute labeling triple (r, a1, a2) ∈ L specifies that a subject with the role r can label an object with the attribute a1 as also having the attribute a2. Again, we denote the null attribute as ⊥. For example:
(sysadmin, ⊥, thisFacility)
(sysadmin, thisFacility, patientPersonalDetails)
These express the semantic value of the RASP syntax:
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 152-159
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)
154
attribute sysadmin: something -> thisFacility;
attribute sysadmin: thisFacility -> patientPersonalDetails;
A role transition triple (r1, r2, r3) ∈ T specifies that a
subject with the role r1 can cause a subject with the role r2 to
lose that role and take on the role r3 instead. For example,
(manager, traineeEmployee, employee)
(clearanceOfficer, secretCleared, topSecretCleared)
(manager, employee, ∅)
These express the semantic value of the RASP syntax:
appoint manager: traineeEmployee /-> employee;
appoint clearanceOfficer: secretCleared /-> topSecretCleared;
appoint manager: employee /-> someone;
Note that in the last example, this kind of role transition is
used to remove a role from a subject.
Finally, an attribute transition triple (r, a1, a2) ∈ U
specifies that a subject with the role r can cause an object with
the attribute a1 to lose that attribute and take on the attribute a2
instead. For example:
(admin, draftReport, report)
(manager, thisFacility, thatFacility)
(declassificationOfficer, secret, unclassified)
These express the semantic value of the RASP syntax:
attribute admin: draftReport /-> report;
attribute manager: thisFacility /-> thatFacility;
attribute declassificationOfficer: secret /-> unclassified;
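The transition rule sets can be sketched the same way (an illustrative encoding, not the paper's): T holds role transitions and a transition to ∅ revokes the role entirely, exactly as in the last example above.

```python
NULL = ""  # the null role / null attribute ∅ in this sketch

# Role transition rules T, mirroring the paper's examples.
T = {
    ("manager", "traineeEmployee", "employee"),
    ("clearanceOfficer", "secretCleared", "topSecretCleared"),
    ("manager", "employee", NULL),  # transition to ∅ = revocation
}

def may_transition(actor_role, from_role, to_role):
    """True if a subject with actor_role may move another subject
    from from_role to to_role (to_role == NULL means revocation)."""
    return (actor_role, from_role, to_role) in T

# A manager may promote a trainee, and may also revoke employment:
assert may_transition("manager", "traineeEmployee", "employee")
assert may_transition("manager", "employee", NULL)
```

The attribute transition rules U would be encoded identically, with attribute identifiers in place of role identifiers.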
V. ACCESS STATE
We now define the second part of the model, which
determines, for some point in time, which subject has which
roles and which object has which attributes. This is represented
via a set of role appointment certificates and a set of attribute
labeling certificates. Formally, the state of the access control
system is given by:
(C, D)
where
C ⊆ S × R² is a set of appointment certificates and
D ⊆ O × A² is a set of label certificates
(with S the set of subjects and O the set of objects).
The certificate (s, r1, r2) ∈ C specifies that if the subject s
has the role r1, then that subject also has the role r2. Thus:
(Fred, ∅, traineeEmployee)
(Fred, employee, admin)
Similarly, the certificate (o, a1, a2) ∈ D specifies that if the
object o has the attribute a1, then that object also has the
attribute a2.
We define four functions which update the state of the
access control system. Function addRole(C, s, r1, r2) is
used to add an appointment certificate to C and is defined as:
addRole(C, s, r1, r2) = C ∪ {(s, r1, r2)}
Function modRole(C, s, r1, r2) is used to change some
of the appointment certificates in C and can be defined
recursively as:
if C contains an appointment certificate of the form
(s, r, r1), for some r then
modRole(C, s, r1, r2) =
{(s, r, r2)} ∪ modRole(C \ {(s, r, r1)}, s, r1, r2)
otherwise
modRole(C, s, r1, r2) = C
So, for example, if C contains the certificate:
(Fred, ∅, traineeEmployee)
then modRole(C, Fred, traineeEmployee, employee) will
instead contain the certificate:
(Fred, ∅, employee)
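The addRole and modRole definitions translate almost literally into code; this sketch (with ∅ encoded as an empty string, an assumption of the sketch) follows the recursive definition above, including the base case that returns the remaining certificates unchanged:

```python
NULL = ""  # the null role ∅

def add_role(C, s, r1, r2):
    """addRole(C, s, r1, r2) = C ∪ {(s, r1, r2)}"""
    return C | {(s, r1, r2)}

def mod_role(C, s, r1, r2):
    """modRole: replace each certificate (s, r, r1) by (s, r, r2),
    following the paper's recursive definition."""
    match = next(((x, r, t) for (x, r, t) in C if x == s and t == r1), None)
    if match is None:
        return C  # base case: no certificate of the form (s, r, r1) left
    (_, r, _) = match
    return {(s, r, r2)} | mod_role(C - {match}, s, r1, r2)

C = {("Fred", NULL, "traineeEmployee"), ("Fred", "employee", "admin")}
C2 = mod_role(C, "Fred", "traineeEmployee", "employee")
# C2 now contains (Fred, ∅, employee) instead of (Fred, ∅, traineeEmployee).
```

addAttr and modAttr over the label certificates D are defined identically, with objects and attributes in place of subjects and roles.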
Function addAttr(D, o, a1, a2) is used to add a label
certificate to D and is defined as:
addAttr(D, o, a1, a2) = D ∪ {(o, a1, a2)}
Finally, function modAttr(D, o, a1, a2) is used to change
some of the label certificates in D and is defined as:
if D contains a label certificate of the form
(o, a, a1), for some a then
modAttr(D, o, a1, a2) =
{(o, a, a2)} ∪ modAttr(D \ {(o, a, a1)}, o, a1, a2)
otherwise
modAttr(D, o, a1, a2) = D
Note that a subject s may invoke addRole(C, s1, r1, r2)
only if s has a role r such that (r, r1, r2) ∈ P. Similarly, s can
invoke modRole(C, s1, r1, r2) only with a role r such that
(r, r1, r2) ∈ T. Likewise, s can invoke addAttr(D, o, a1, a2)
only if s has a role r where (r, a1, a2) ∈ L and
modAttr(D, o, a1, a2) only with a role r such that (r, a1, a2) ∈
U. The exact definition of having a role is given in the next
section.
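This invocation constraint can be sketched as a guard wrapped around the update function (an illustration; `guarded_add_role` and `caller_roles` are names invented for this sketch, and ∅ is encoded as an empty string):

```python
NULL = ""  # the null role ∅

def guarded_add_role(C, P, caller_roles, s, r1, r2):
    """A caller may invoke addRole(C, s, r1, r2) only if one of its
    roles r satisfies (r, r1, r2) ∈ P, the appointment rules."""
    if not any((r, r1, r2) in P for r in caller_roles):
        raise PermissionError("no appointment rule permits this call")
    return C | {(s, r1, r2)}  # the addRole update itself

P = {("manager", NULL, "employee")}
C = guarded_add_role(set(), P, {"manager"}, "Fred", NULL, "employee")
```

Analogous guards, checking T, L and U respectively, would wrap modRole, addAttr and modAttr.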
VI. DETERMINING ROLES AND ATTRIBUTES
From the definitions in the previous section, it can be seen
that, rather than just representing the set of roles possessed by a
subject at some point in time, our model represents the role
from which each role is derived. We define the notation
⟨r′, r⟩_s to represent that the subject s has the role r
conditional on having the role r′, i.e.:
⟨r′, r⟩_s ⟺ ∃r1, …, rn . (s, r′, r1) ∈ C ∧ (s, r1, r2) ∈ C ∧ … ∧
(s, rn−1, rn) ∈ C ∧ (s, rn, r) ∈ C
It can be seen that this conditional possession of roles is
then a transitive relationship, i.e.
⟨r1, r2⟩_s ∧ ⟨r2, r3⟩_s ⇒ ⟨r1, r3⟩_s
The actual possession of a role r can then be expressed as:
⟨∅, r⟩_s
Similarly, for attributes of objects, we define ⟨a′, a⟩_o to
mean that the object o has the attribute a conditional on
having the attribute a′. So, an object actually possesses an
attribute a if:
⟨∅, a⟩_o
Finally, we can define the allow function which
determines whether a subject s can access an object o via a
method m as:
allow(s, m, o) = ∃r, A . ⟨∅, r⟩_s ∧ (∀a ∈ A . ⟨∅, a⟩_o) ∧ (r, A, m) ∈ X
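Actual possession ⟨∅, r⟩_s is a reachability question over the certificate chains, so it can be sketched as a fixed-point computation (an illustration only; the encoding of ∅ as an empty string and the object name "Rec1" are assumptions of this sketch):

```python
NULL = ""  # the null role / null attribute ∅

def has_role(C, s, r):
    """Actual possession ⟨∅, r⟩_s: follow chains of appointment
    certificates (s, r1, r2) starting from the null role."""
    reached = {NULL}
    while True:
        new = {r2 for (x, r1, r2) in C if x == s and r1 in reached} - reached
        if not new:
            return r in reached
        reached |= new

def has_attr(D, o, a):
    """Actual possession ⟨∅, a⟩_o, analogously over label certificates."""
    reached = {NULL}
    while True:
        new = {a2 for (x, a1, a2) in D if x == o and a1 in reached} - reached
        if not new:
            return a in reached
        reached |= new

def allow(X, C, D, s, m, o):
    """allow(s, m, o): some permission (r, A, m) exists where s actually
    has r and o actually has every attribute in A."""
    return any(has_role(C, s, r) and all(has_attr(D, o, a) for a in A)
               for (r, A, m2) in X if m2 == m)

C = {("Fred", NULL, "doctor"), ("Fred", "doctor", "doctorAtThisFacility")}
D = {("Rec1", NULL, "thisFacility")}
X = {("doctorAtThisFacility", frozenset({"thisFacility"}), "read")}
# Removing the (Fred, ∅, doctor) certificate breaks the chain, so the
# derived role doctorAtThisFacility is automatically lost as well.
```

The revocation behavior in the comment is exactly the property discussed next: derived roles vanish when the role they were derived from is removed.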
The fact that the model represents the possession of a role
or attribute as conditional on possession of another role or
attribute is very important for an adequate level of access
control in real-world information systems. Suppose, for
example, the set C contains the appointment certificates:
(Fred, ∅, doctor) and
(Fred, doctor, doctorAtThisFacility)
If Fred were to lose the role of doctor (for example by
being struck off the medical register for some reason), we
would want him to also automatically lose the role of
doctorAtThisFacility with all its associated permissions. This
is only possible if the model represents the derivation of the
second role from the first. Similarly, for the labeling
certificates:
(DocumentAbc, ∅, Australian) and
(DocumentAbc, Australian, Sydney)
we want the document to automatically lose the attribute
Sydney if it loses the attribute Australian. This illustrates
that the transitive nature of our model can be used to support
specialization hierarchies of roles and attributes. An example
for roles is:
(Fred, ∅, sysadmin) and
(Fred, sysadmin, linuxSysadmin)
A further advantage of our approach is that the order of
adding roles becomes more flexible. So, for example, if we
have the certificates:
(Fred, ∅, traineeEmployee) and
(Fred, employee, admin)
then this represents the fact that Fred does not yet have the
admin role but will acquire that role as soon as he becomes a
(fully fledged) employee. The operation:
modRole(C, Fred, traineeEmployee, employee)
will then make him an admin as well as an employee.
Lastly, our representation of role appointment certificates
supports explicit certificates which represent a precondition
(e.g. for employment) which is imported from, or accessed at,
an external source. For example, the certificate:
(Fred, ∅, doctor)
should ideally be maintained by an external body such as a
national medical association rather than in the organization
where the doctor is working. Our model provides for an
explicit representation of such an external qualification
certificate. (Of course, in an implementation which transfers or
accesses this from an external site, it would need to be secured
by a mechanism such as public-key cryptography, digital
signatures and unique subject identifiers.)
VII. FURTHER FEATURES OF RASP
The main constructs of RASP are the appoint, the
attribute and the allow clauses as defined above but we
can also use the formal model to define the semantics of two
other constructs which can be important for restricting role
appointments in the information systems of real organizations.
The first of these is a clause which specifies that it is a
conflict for someone to be fulfilling two certain roles in the
organization at the same time. So, for example, it may be
considered a conflict for someone to be both a student and a
staff member of a university at the same time. The syntax for
expressing this in RASP is:
conflict staff, student;
We can formally describe the semantics of this by defining
functions addRole_checkconflict(C, s, r1, r2) and
modRole_checkconflict(C, s, r1, r2) which extend
addRole(C, s, r1, r2) and modRole(C, s, r1, r2) by adding
a check for a breach of the constraint whenever the set of
appointment certificates is updated. The definitions of these
functions for the conflicting roles role_id1 and role_id2 are
then:
addRole_checkconflict(C, s, r1, r2) =
  if ∃s′ . ⟨∅, r_role_id1⟩_s′ ∧ ⟨∅, r_role_id2⟩_s′ in
  addRole(C, s, r1, r2) then:
    error
  otherwise:
    addRole(C, s, r1, r2)
and

modRole_checkconflict(C, s, r1, r2) =
  if ∃s′ . ⟨∅, r_role_id1⟩_s′ ∧ ⟨∅, r_role_id2⟩_s′ in
  modRole(C, s, r1, r2) then:
    error
  otherwise:
    modRole(C, s, r1, r2)

The second construct is a clause that specifies that only a
single subject may have a certain role at one time. So, for
example, we can specify that there can only be one subject
with the role manager at one time by the clause:
unique manager;
Again, we can formally define this construct by defining
the functions addRole_checkunique(C, s, r1, r2) and
modRole_checkunique(C, s, r1, r2) which check for a
breach of the constraint whenever the set of appointment
certificates is updated. The definitions for a unique role
role_id are:

addRole_checkunique(C, s, r1, r2) =
  if ∃s1, s2 . s1 ≠ s2 ∧ ⟨∅, r_role_id⟩_s1 ∧ ⟨∅, r_role_id⟩_s2 in
  addRole(C, s, r1, r2) then:
    error
  otherwise:
    addRole(C, s, r1, r2)

and

modRole_checkunique(C, s, r1, r2) =
  if ∃s1, s2 . s1 ≠ s2 ∧ ⟨∅, r_role_id⟩_s1 ∧ ⟨∅, r_role_id⟩_s2 in
  modRole(C, s, r1, r2) then:
    error
  otherwise:
    modRole(C, s, r1, r2)
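Both constraint checks follow the same pattern: perform the update tentatively, then test the resulting certificate set. A sketch (an illustration under the same encoding assumptions as before, with ∅ as an empty string and invented helper names):

```python
NULL = ""  # the null role ∅

def has_role(C, s, r):
    """Actual possession ⟨∅, r⟩_s via chains of appointment certificates."""
    reached = {NULL}
    while True:
        new = {r2 for (x, r1, r2) in C if x == s and r1 in reached} - reached
        if not new:
            return r in reached
        reached |= new

def add_role_check_conflict(C, s, r1, r2, conflict):
    """addRole_checkconflict: reject the update if some subject would
    then hold both of the conflicting roles."""
    ra, rb = conflict
    C2 = C | {(s, r1, r2)}
    if any(has_role(C2, x, ra) and has_role(C2, x, rb)
           for x in {subj for (subj, _, _) in C2}):
        raise ValueError("conflict constraint breached")
    return C2

def add_role_check_unique(C, s, r1, r2, unique_role):
    """addRole_checkunique: reject if two distinct subjects would hold it."""
    C2 = C | {(s, r1, r2)}
    holders = {subj for (subj, _, _) in C2 if has_role(C2, subj, unique_role)}
    if len(holders) > 1:
        raise ValueError("unique constraint breached")
    return C2
```

The modRole variants would apply the same checks after the modRole update instead of the addRole update.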

One concept of RASP that the model presented in this
paper does not yet support is that of role parameters. We have
deliberately excluded this concept, not because we consider it
to be unnecessary or unimportant, but for the sake of brevity
and of clearly describing the basic model without this
complicating factor. Role parameters can however be
integrated into our model and a future paper will discuss this.
Existing models for role parameters such as in [7] are not
sufficient for RASP since they do not describe role transitions
or transitive role relationships and also do not relate the role
parameters to attributes of the protected objects.
A summary of the mappings from RASP syntax to their
semantics as expressed in the formal model is given in
Appendix B.
VIII. IMPLEMENTATION CONSIDERATIONS
While this paper is concerned with a general model rather
than a specific implementation, it is nevertheless important that
any access control scheme be implementable with realistic
overheads for the checking of permissions. If the definition of
the allow(s, m, o) function in the previous section were to be
evaluated in that form for every attempted invocation of a
method on an object, then unacceptable delays would be
incurred. Similarly, if the rules were to be preprocessed to a
central access control matrix for all subjects and all objects
then that would incur a high overhead each time a certificate
was added or changed.
Fortunately, neither of these extremes is necessary. Firstly,
most subjects will be interested in only a small fraction of the
total number of objects and secondly, the system need only be
concerned with the subjects who are currently using it. Thirdly,
although the number of subjects and objects in an organization
may be large, the number of roles and attributes and therefore
the number of rules will generally be fairly small, even for a
fine-grained access scheme. Also, the kinds of operations to
which the scheme is applied will generally be high-level
operations like read, edit or update on documents or
databases and so will not be extremely frequent.
Finally, rather than calculate the entire set of roles allowed
for a subject, it is actually preferable for each subject to acquire
only the ∅ role when they start a session and then explicitly
request any further role they wish to adopt for that session.
This means that only those roles need be checked against the
rules rather than all possible roles for that subject. The reason
this is preferable is that it allows a log to be maintained of
exactly who is acting in which role at what time.
An implementation could thus work along the following
lines:
- assign the ∅ role to a subject who starts a session
- when a subject requests to act in a further role:
  o check for a certificate which allows this
  o if allowed, determine the attribute sets associated with this role in the permission rules
- allow the subject to search for objects with those sets of attributes
- when the subject selects a certain object:
  o use the permission rules for the current roles to determine which operations can be performed on the object
Given appropriate index tables for the rule and certificate
information, none of these individual steps need incur an
unacceptable overhead.
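The session-based scheme above can be sketched as follows. This is an illustration, not the paper's implementation: the `Session` class, the `holds` predicate (standing for any check of actual possession ⟨∅, r⟩_s over the current certificates) and the log format are all assumptions of the sketch:

```python
from datetime import datetime

class Session:
    """A subject starts a session with only the null role; each further
    role is requested explicitly, checked, and written to an audit log."""
    def __init__(self, subject, holds):
        self.subject = subject
        self.holds = holds    # predicate: does subject actually have role?
        self.active = set()   # further roles adopted in this session
        self.log = []         # (time, subject, role) audit records

    def activate(self, role):
        if not self.holds(self.subject, role):
            raise PermissionError(f"{self.subject} may not act as {role}")
        self.active.add(role)
        self.log.append((datetime.now(), self.subject, role))

# hypothetical possession check: Fred holds only the doctor role
s = Session("Fred", lambda subj, role: (subj, role) == ("Fred", "doctor"))
s.activate("doctor")
```

Only explicitly activated roles need to be checked against the rules, and the log records exactly who acted in which role at what time, as the text suggests.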
IX. CONCLUSION AND FUTURE WORK
Case studies show that information systems often require a
degree of access control which cannot be expressed simply as a
static mapping from subjects to roles and from roles to
operations on objects.
In this paper, we have formally defined a role-based access
control model which has a much greater expressive power and
which, in particular, can be used to formally describe the
semantics of the RASP access specification language.
The model supports controlled dynamic acquisition of new
roles, transitions from one role to another and role revocation.
It also supports labeling of objects with attributes in a way
analogous to appointing subjects to roles and defines
permissions in terms of roles and attribute sets.
We have defined the access control model in two parts. The
first represents the rules for role appointment and attribute
labeling as well as role and attribute transitions and access
permissions. The second part of the model represents the
instantaneous state of the access system in terms of a set of
appointment certificates and a set of labeling certificates. We
have defined four functions for updating these sets.
A significant aspect of the model is the use of transitive
relationships whereby a certificate represents the fact that the
possession of a role or attribute may be conditional on the
possession of another role or attribute. This allows the
model to support role and attribute specialization hierarchies,
controlled revocation of derived roles/attributes and flexibility
in the addition of roles.
No existing formal model for role-based access control
supports all the concepts captured in our model.
REFERENCES
[1] D.E. Bell and L.J. La Padula, Secure computer systems: unified
exposition and Multics interpretation, MTR-2997, The MITRE
Corporation, 1975.
[2] B. Blobel, Authorisation and access control for electronic health record
systems, International Journal of Medical Informatics, 73, 2004.
[3] M.J. Covington, M.J. Moyer and M. Ahamad, Generalized role-based
access control for securing future applications, Proc. 23rd National
Information Systems Security Conference, Baltimore, 2000.
[4] N. Damianou, N. Dulay, E. Lupu and M. Sloman, Ponder: A language
for specifying security and management policies for distributed
systems, The Language Specification Version 2.3, Imperial College
Research Report DoC 2000/1, 2000.
[5] M. Evered, Rationale and Design of the Access Specification Language
RASP, Intl. Journal of Cyber-Security and Forensics, 1, 1, 2012.
[6] M. Evered and S. Bögeholz, A case study in access control
requirements for a health information system, Proc. Australasian
Information Security Workshop, Dunedin, 2004.
[7] J.H. Hine, W. Yao, J. Bacon and K. Moody, An architecture for
distributed OASIS services, Proc. Middleware 2000, Lecture Notes in
Computer Science, Vol. 1795, Springer-Verlag, Heidelberg/New York,
2000.
[8] A. Jones and B. Liskov, A language extension for expressing
constraints on data access. Communications of the ACM, 21(5):358-
367, May, 1978.
[9] T. Moses (Ed.), Extensible Access Control Markup Language (XACML)
Version 2.0, OASIS Consortium, 2005.
[10] Object Management Group, Resource Access Decision Facility
Specification, Version 1.0, 2001.
[11] Object Management Group, Object Constraint Language Specification
Version 2.0, 2006.
[12] G. Russello, C. Dong and N. Dulay, Authorisation and conflict
resolution in hierarchical domains, Proc. 8th IEEE Workshop on
Policies for Distributed Systems and Networks, Bologna, 2007.
[13] J.H. Saltzer, Protection and the control of information sharing in
Multics, Symposium on Operating System Principles, Yorktown
Heights, NY, 1973.
[14] R. Sandhu, E.J. Coyne, H.L. Feinstein and C.E. Youman, Role based
access control models, IEEE Computer 29 (2), 1996.
[15] R. Sandhu, Role activation hierarchies, Proc. 3rd ACM Workshop on
Role-Based Access Control, Fairfax, 1998.
[16] M.C. Tschantz and S. Krishnamurthi, Towards reasonability
properties for access control policy languages, Proc. 11th ACM
Symposium on Access Control Models and Technologies, Lake Tahoe,
2006.
[17] W. Yao, K. Moody and J. Bacon, A model of OASIS role-based access
control and its support for active security, ACM Transactions on
Information and System Security, 5, 4, 2001.
[18] P. Yu and H. Yu, Lessons learned from the practice of mobile
health application development, Proc. 28th Annual International
Computer Software and Applications Conference, Hong Kong, 2004.
[19] T. Yu, X. Ma and M. Winslett, Prunes: an efficient and complete
strategy for automated trust negotiation over the internet, Proc. 7th
ACM Conference on Computer and Communications Security, ACM
Press, 2000.
[20] E. Yuan and J. Tong, Attributed based access control (ABAC) for web
services, Proc. IEEE International Conference on Web Services, 2005.


Appendix A Concrete syntax of relevant RASP
constructs


clause: appoint_clause |
attribute_clause |
allow_clause |
conflict_clause |
unique_clause

appoint_clause: 'appoint'
role_id ':'
role_id transition role_id ';'



transition: '->' | '/->'

attribute_clause: 'attribute'
role_id ':'
attribute_id transition
attribute_id ';'

allow_clause: 'allow' role_id '!'
action ';'

action: attribute_id '.' operation_id

action: '{' attribute_list '}' '.'
operation_id

attribute_list: attribute_id
{ ',' attribute_id }

conflict_clause: 'conflict'
role_id ',' role_id ';'

unique_clause: 'unique' role_id ';'


Appendix B Summary of semantic mappings


appoint role_id1 : role_id2 -> role_id3 ;

P = P ∪ {(r_role_id1, r_role_id2, r_role_id3)}


appoint role_id1 : role_id2 /-> role_id3 ;

T = T ∪ {(r_role_id1, r_role_id2, r_role_id3)}


attribute role_id : attr_id1 -> attr_id2 ;

L = L ∪ {(r_role_id, a_attr_id1, a_attr_id2)}


attribute role_id : attr_id1 /-> attr_id2 ;

U = U ∪ {(r_role_id, a_attr_id1, a_attr_id2)}


allow role_id ! attr_id . op_id ;

X = X ∪ {(r_role_id, {a_attr_id}, m_op_id)}

allow role_id ! { attr_id1 , attr_id2 , … , attr_idn } . op_id ;

X = X ∪ {(r_role_id, {a_attr_id1, a_attr_id2, …, a_attr_idn}, m_op_id)}


conflict role_id1 , role_id2 ;

addRole_checkconflict(C, s, r1, r2) =
  if ∃s′ . ⟨∅, r_role_id1⟩_s′ ∧ ⟨∅, r_role_id2⟩_s′ in
  addRole(C, s, r1, r2) then:
    error
  otherwise:
    addRole(C, s, r1, r2)

and

modRole_checkconflict(C, s, r1, r2) =
  if ∃s′ . ⟨∅, r_role_id1⟩_s′ ∧ ⟨∅, r_role_id2⟩_s′ in
  modRole(C, s, r1, r2) then:
    error
  otherwise:
    modRole(C, s, r1, r2)

unique role_id ;

addRole_checkunique(C, s, r1, r2) =
  if ∃s1, s2 . s1 ≠ s2 ∧ ⟨∅, r_role_id⟩_s1 ∧ ⟨∅, r_role_id⟩_s2 in
  addRole(C, s, r1, r2) then:
    error
  otherwise:
    addRole(C, s, r1, r2)

and

modRole_checkunique(C, s, r1, r2) =
  if ∃s1, s2 . s1 ≠ s2 ∧ ⟨∅, r_role_id⟩_s1 ∧ ⟨∅, r_role_id⟩_s2 in
  modRole(C, s, r1, r2) then:
    error
  otherwise:
    modRole(C, s, r1, r2)




