
OMB Circular A-130 was developed to meet information resource management requirements

for the federal government. According to this circular, independent audits should be performed
every three years.

The Sarbanes-Oxley Act (SOX) was developed to ensure that financial information on publicly
traded companies is accurate.

The Health Insurance Portability and Accountability Act (HIPAA) was developed to establish
national standards for the storage, use, and transmission of health care data.

The Gramm-Leach-Bliley Act (GLBA) of 1999 was developed to ensure that financial institutions
protect customer information and provide customers with a privacy notice.

Gap analysis for transactions identifies and matches the data content required by the Health
Insurance Portability and Accountability Act (HIPAA). With reference to HIPAA, a gap analysis
defines the current status of the organization in a specific area and compares the current
operations to the requirements mandated by the state or the federal law.

A gap analysis for the transactions set refers to the practice of identifying the data content that
is currently available through the medical software, comparing the content to the guidelines
dictated by the HIPAA, and ensuring that there is a match. It involves studying the specific
format of regulated transactions to ensure that the order of the information that is sent
electronically matches the order mandated by the implementation guides.

A gap analysis for security refers to the practice of identifying the security policies and practices
currently in place in the organization to protect the data from unauthorized access, alteration,
and disclosure. Gap analysis involves a comparison of the implementation of the current
practices with the requirements of the HIPAA security regulations.

HIPAA gap analysis applies to transactions, security, and privacy and does not address either
accountability or availability.

Routers and encryption are examples of preventative technical controls. A technical control is a
control that restricts access. A preventative control prevents security breaches. Routers and
encryption are also compensative technical controls.

Preventative technical controls are most often configured using access control lists (ACLs) built
into the operating system. They protect the operating system from unauthorized access,
modification, and manipulation. They protect system integrity and availability by limiting the
number of users and processes that are allowed to access the system or network.
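The ACL-based preventative control described above amounts to a deny-by-default lookup. As a minimal sketch (the users, resources, and permissions below are hypothetical examples):

```python
# Minimal sketch of a preventative technical control: an access control
# list (ACL) consulted before any operation is allowed.
acl = {
    "/etc/passwd": {"root": {"read", "write"}, "alice": {"read"}},
    "/var/log/auth.log": {"root": {"read", "write"}},
}

def check_access(user, resource, operation):
    """Deny by default; allow only what the ACL explicitly grants."""
    return operation in acl.get(resource, {}).get(user, set())

print(check_access("alice", "/etc/passwd", "read"))   # True: allowed
print(check_access("alice", "/etc/passwd", "write"))  # False: denied
```

Because anything not explicitly granted is denied, the control limits which users and processes can reach the system in the first place.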

A recovery technical control can restore system capabilities. Data backups are included in this
category.

A detective technical control can detect when a security breach occurs. Audit logs and intrusion
detection systems (IDSs) are included in this category.

A deterrent technical control is one that discourages security breaches. A firewall is the best
example of this type of control.

A corrective technical control is one that corrects any issues that arise because of security
breaches. Antivirus software and server images are included in this category.

A compensative technical control is one that is considered as an alternative to other controls.

There are three categories of access control: technical, administrative, and physical controls. A
technical control is a control that is put into place to restrict access. Technical controls work to
protect system access, network architecture and access, control zones, auditing, and encryption
and protocols. An administrative control is developed to dictate how security policies are implemented
to fulfill the company's security goals. Administrative controls include policies and procedures,
personnel controls, supervisory structure, security training, and testing. A physical control is a
control that is implemented to secure physical access to an object, such as a building, a room,
or a computer. Physical controls include badges, locks, guards, network segregation, perimeter
security, computer controls, work area separation, backups, and cabling.

The three access control categories provide seven different functionalities or types:

 Preventative - A preventative control prevents security breaches and avoids risks.


 Detective - A detective control detects security breaches as they occur.
 Corrective - A corrective control restores control and attempts to correct any damage
that was inflicted during a security breach.
 Deterrent - A deterrent control deters potential violations.
 Recovery - A recovery control restores resources.
 Compensative - A compensative control provides an alternative control if another control
may be too expensive. All controls are generally considered compensative.
 Directive - A directive control provides mandatory controls based on regulations or
environmental requirements.

Each category of control includes controls that provide different functions. For example, a
security badge is both a preventative physical control and a compensative physical control.
Monitoring and supervising is both a detective administrative control and a compensative
administrative control.

A baseline defines the minimum level of security and performance of a system in an
organization. A baseline is also used as a benchmark for future changes. Any change made to
the system should match the defined minimum security baseline. A security baseline is defined
through the adoption of standards in an organization.

Guidelines are the actions that are suggested when standards are not applicable in a particular situation.
Guidelines are applied where a particular standard cannot be enforced for security compliance.
Guidelines can be defined for physical security, personnel, or technology in the form of security best
practices.

Standards are the mandated rules that govern the acceptable level of security for hardware and software.
Standards also include the regulated behavior of employees. Standards are enforceable and are the
activities and actions that must be followed. Standards can be defined internally in an organization or
externally as regulations.

Procedures are the detailed instructions used to accomplish a task or a goal. Procedures are considered
at the lowest level of an information security program because they are closely related to configuration
and installation problems. Procedures define how the security policy will be implemented in an
organization through repeatable steps. For instance, a backup procedure specifies the steps that a data
custodian should adhere to while taking a backup of critical data to ensure the integrity of business
information. Personnel should be required to follow procedures to ensure that security policies are fully
implemented.

Procedural security ensures data integrity.

EAL 4 is the common benchmark for operating systems and products. Common Criteria has
designed the evaluation criteria into seven EALs:
 EAL 1 - A user wants the system to operate but ignores security threats.
 EAL 2 - Developers use good design practices but security is not a high priority.
 EAL 3 - Developers provide moderate levels of security.
 EAL 4 - Security configuration is based on good commercial development. This level is the
common benchmark for commercial systems, including operating systems and products.
 EAL 5 - Security is implemented starting in early design. It provides high levels of security
assurance.
 EAL 6 - Specialized security engineering provides high levels of assurance. This level
provides high resistance to penetration attempts.
 EAL 7 - Extremely high levels of security are provided. This level requires extensive formal
testing, measurement, and independent verification.

An escalation of privileges attack occurs when an attacker has used a design flaw in an
application to obtain unauthorized access to the application. There are two types of privilege
escalation: vertical and horizontal. With vertical privilege escalation, the attacker obtains higher
privileges by performing operations that allow the attacker to run unauthorized code. With
horizontal privilege escalation, the attacker obtains the same level of permissions as he already
has but uses a different user account to do so.

A backdoor is a term for lines of code that are inserted into an application to allow developers to
enter the application and bypass the security mechanisms. Backdoors are also referred to as
maintenance hooks.

A buffer overflow occurs when an application erroneously accepts more input than the buffer
was allocated to hold, allowing the excess data to overwrite adjacent memory.
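Python is memory-safe and cannot overflow a buffer directly, but the missing length check at the root of the problem can be sketched against a simulated fixed-size buffer (the buffer size and function name are illustrative):

```python
# Sketch of the bounds check whose absence causes a buffer overflow.
# A fixed-size buffer is simulated with a bytearray.
BUFFER_SIZE = 16
buffer = bytearray(BUFFER_SIZE)

def copy_input(data):
    """Reject input that would not fit -- the check a vulnerable
    application omits before copying into its buffer."""
    if len(data) > BUFFER_SIZE:
        return False            # overflow prevented
    buffer[:len(data)] = data   # safe: the data fits
    return True

print(copy_input(b"short"))    # True: fits in the buffer
print(copy_input(b"A" * 64))   # False: would overflow; rejected
```

In a language without bounds enforcement, the oversized copy would silently overwrite adjacent memory instead of being rejected.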

Encryption technologies, such as Pretty Good Privacy (PGP), are confidentiality services, which
are provided to protect the contents of files from hackers.

Digital signatures can be used to protect the integrity of files by ensuring that the files are not
changed in transit. A digital signature provides authentication (knowing who really sent the
message), integrity (because a hashing algorithm is involved), and nonrepudiation (the sender
cannot deny sending the message). Most types of Redundant Array of Independent Disks
(RAID) arrays are availability services that are designed to ensure that data remains available
even in the case of a hardware failure. Authentication schemes are accountability systems,
which are designed to identify users on a computer network.

Federal Information Processing Standards (FIPS) Publication 140 is a United States federal
standard that specifies security requirements for hardware and software cryptographic modules.
The requirements that were published by the National Institute of Standards and Technology
(NIST) apply not only to cryptographic modules but also to the corresponding documentation.
The use of hardware and software cryptographic modules is required by the United States
government for all unclassified implementations of cryptography.

The four increasing levels of security in FIPS are as follows:
Level 1 imposes very limited security requirements and specifies that all components must be
production grade.
Level 2 specifies the security requirements of role-based authentication and physical tamper
evidence.
Level 3 requires identity-based authentication and physical tamper resistance, making it difficult
for attackers.
Level 4 specifies robustness against environmental attacks.
It is important to note that FIPS covers not only cryptographic software but also hardware
modules. The U.S. Government and other prominent institutions use the hardware and software
modules validated by FIPS 140.

The FIPS 140-1 and FIPS 140-2 validation certificates that are issued contain the following
elements:
module name

module type, that is, hardware, software, or firmware
version

The options stating that Secure Hash Algorithm-1 (SHA-1) produces a 128-bit hash value and is
a two-way hash function of variable length are NOT true. SHA-1 is a one-way function that
produces a fixed 160-bit hash value. It ensures the integrity of the message by computing a
message digest. SHA-1 processes data in block lengths of 512 bits.

SHA was designed by the National Institute of Standards and Technology (NIST) and the
National Security Agency (NSA) to be used in digital signatures.

SHA-1 is not an encryption algorithm; it is a hashing algorithm. Encryption algorithms are used
to encrypt messages and files. Hashing algorithms are used to provide a message or file
fingerprint to ensure the message or file has not been altered.
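The SHA-1 properties described above (a fixed 160-bit digest, a 512-bit block size, and a fingerprint that changes with any change to the message) can be verified with Python's hashlib:

```python
# Checking the SHA-1 properties stated above with hashlib.
import hashlib

digest = hashlib.sha1(b"The quick brown fox").digest()

print(len(digest) * 8)                 # 160-bit hash value, always fixed length
print(hashlib.sha1().block_size * 8)   # processes data in 512-bit blocks

# Any change to the message changes the fingerprint entirely.
print(hashlib.sha1(b"message A").hexdigest() ==
      hashlib.sha1(b"message a").hexdigest())   # False
```

Comparing a stored digest with a freshly computed one is exactly the integrity check described: a match means the message or file has not been altered.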

After creating a cryptographic key, you should initialize the key by setting all of its core
attributes.
The steps in the cryptographic key life cycle are as follows:
Creation
Initialization
Distribution
Activation
Inactivation
Termination
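The life cycle above can be sketched as a simple ordered state machine; the rule that each state advances only to the next is an illustrative simplification:

```python
# Sketch of the cryptographic key life cycle as an ordered sequence.
LIFE_CYCLE = ["creation", "initialization", "distribution",
              "activation", "inactivation", "termination"]

def next_state(current):
    """Return the state that follows, or None at end of life."""
    i = LIFE_CYCLE.index(current)
    return LIFE_CYCLE[i + 1] if i + 1 < len(LIFE_CYCLE) else None

print(next_state("creation"))      # initialization: set core attributes
print(next_state("termination"))   # None: end of the key's life cycle
```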

You should use Blowfish. Blowfish is a symmetric algorithm that is considered public domain. It
can be used freely by anyone.
Data Encryption Standard (DES), Triple DES (3DES), and International Data Encryption
Algorithm (IDEA) are not considered public domain.
Symmetric algorithms include DES, 3DES, IDEA, Blowfish, Twofish, RC4, RC5, RC6, Advanced
Encryption Standard (AES), SAFER, and Serpent. Asymmetric algorithms include Diffie-
Hellman, RSA, ElGamal, Elliptic Curve Cryptosystem (ECC), LUC, Knapsack, and Zero
Knowledge Proof.

The cut-through method copies a frame's destination address to the switch's buffer and then
sends the frame to its destination. This method results in reduced latency compared to switches
using the store-and-forward method. Latency is essentially the delay that occurs while the frame
traverses the switch. The cut-through switching method generally has less latency, and
maintains constant latency since the switch forwards the frame as soon as it reads the
destination address. This results in faster frame processing through the switch. However,
switches configured to use the cut-through method do not perform any error checking.

The store-and-forward method copies an entire frame to its buffer, computes the cyclic
redundancy check (CRC), and discards frames containing errors as well as runt frames (less
than 64 bytes) and giant frames (greater than 1,518 bytes). Because the switch must receive
the entire frame before forwarding, latency through the switch varies with the frame length. This
causes more latency compared to switches using the cut-through method.
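The store-and-forward checks can be sketched as follows; zlib.crc32 stands in here for the Ethernet frame check sequence, and the frame representation is simplified:

```python
# Sketch of store-and-forward switching checks: discard runt frames
# (< 64 bytes), giant frames (> 1,518 bytes), and frames whose CRC
# does not match the received value.
import zlib

MIN_FRAME, MAX_FRAME = 64, 1518

def forward_frame(payload, received_crc):
    if len(payload) < MIN_FRAME:
        return False                             # runt: discard
    if len(payload) > MAX_FRAME:
        return False                             # giant: discard
    return zlib.crc32(payload) == received_crc   # corrupt if mismatch

frame = b"\x00" * 100
print(forward_frame(frame, zlib.crc32(frame)))   # True: forwarded
print(forward_frame(b"\x00" * 10, 0))            # False: runt, discarded
```

A cut-through switch would skip all of these checks, reading only the destination address before forwarding, which is why its latency is lower and constant.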

You should base your decision as to which switching method to use on a network on whether
error checking or consistent latency is the bigger concern. Configure your switches to use the
store-and-forward switching method rather than the cut-through switching method when you
want the switches to perform error checking and you do not mind inconsistent latency or slower
throughput. Configure your switches to use the cut-through switching method when you need
constant latency or faster throughput, and do not need error checking.

In addition to the two main switching methods, cut-through and store-and-forward, there is also
a modified cut-through method known as "fragment-free." Because collisions normally occur
within the first 64 bytes of a frame, fragment-free reads these bytes before forwarding the frame.
This allows the fragment-free method to filter out collision frames.

Synchronous Data Link Control (SDLC) and High-level Data Link Control (HDLC) are primarily
used to enable IBM mainframes to communicate with remote computers. A synchronous
protocol, SDLC, is used over networks with permanent connections. Mainframe environments
are generally considered more secure than LAN environments because there are fewer entry
points to a mainframe.
HDLC is an extension of SDLC. HDLC provides higher throughput than SDLC by supporting full-
duplex transmissions. SDLC does not support full duplex.
Switched Multimegabit Data Service (SMDS) is a packet-switching protocol that can provide
bandwidth as demanded. It is used to connect across public networks. It has been replaced by
frame relay.
High-Speed Serial Interface (HSSI) is used to connect routers and multiplexers to ATM, frame
relay, and other high-speed services.

A wide area network (WAN) may provide access to interconnected network segments such as
extranets, intranets, demilitarized zones (DMZs), virtual private network (VPNs), and the
Internet.

Asymmetrical Digital Subscriber Line (ADSL) offers speeds up to 8 megabits per second (Mbps)
and provides faster download speed than upload speed.
High-bit-rate DSL (HDSL) offers speeds up to 1.544 Mbps over regular UTP cable.
ISDN DSL (IDSL) offers speeds up to 128 kilobits per second (Kbps).
Symmetrical DSL (SDSL) offers speeds up to 1.1 Mbps. Data travels in both directions at the
same rate.
Another type of DSL is Very high bit-rate Digital Subscriber Line (VDSL). VDSL transmits at
super-accelerated rates of 52 Mbps downstream and 12 Mbps upstream.

A capability corresponds to a row in the access control matrix. A capability is a list of all the
access permissions that a subject has been granted.
An object is an entity in the access control matrix to which subjects can be granted permissions.
A column in an access control matrix corresponds to the access control list (ACL) for an object.
A row in an access control matrix corresponds to a subject's capabilities, not just the subject.
Capabilities are granted by storing a list of rights with each subject.
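The row/column relationship can be sketched with a dict-of-dicts matrix; the subjects, objects, and rights are hypothetical:

```python
# Access control matrix: rows are subjects, columns are objects.
# A row yields a subject's capability list; a column yields an
# object's access control list (ACL).
matrix = {
    "alice": {"file1": {"read", "write"}, "printer": {"use"}},
    "bob":   {"file1": {"read"}},
}

def capability_list(subject):
    """Row: every permission the subject holds, keyed by object."""
    return matrix.get(subject, {})

def acl(obj):
    """Column: every subject's permissions on one object."""
    return {s: rights[obj] for s, rights in matrix.items() if obj in rights}

print(capability_list("bob"))   # bob's capabilities: read on file1
print(acl("file1"))             # file1's ACL: alice and bob entries
```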

Screened subnet is another term for a demilitarized zone (DMZ). Two firewalls are used in this
configuration: one firewall resides between the public network and DMZ, and the other resides
between the DMZ and private network.
A screened host is a firewall that resides between the router that connects a network to the
Internet and the private network. The router acts as a screening device, and the firewall is the
screened host. This firewall employs two network cards and a single screening router.
A dual-homed firewall is one that has two network interfaces: one interface connects to the
Internet, and the other connects to the private network. One of the most common drawbacks to
dual-homed firewalls is that internal routing may accidentally become enabled.
A virtual private network (VPN) is not a physical network. As its name implies, it is a virtual
network that allows users connecting over the Internet to access private network resources
while providing the maximum level of security. An encrypted VPN connection should be used to
ensure the privacy and integrity of data that is transmitted between entities over a public
network, whether those entities are clients, servers, firewalls, or other network hardware.
Firewall architectures include bastion hosts, dual-homed firewalls, screened hosts, and
screened subnets.

Event ID 539 occurs when a user account is locked out.
Event ID 531 occurs when a user account is disabled. Event ID 532 occurs when a user
account has expired. Event ID 535 occurs when the account's password has expired.

The presentation step of the investigation process is being carried out. This step can include
documentation, expert testimony, clarification, mission impact statement, recommended
countermeasures, and statistical interpretation.

The collection step of the investigation process is not being carried out. This step can include
approved collection methods, approved software, approved hardware, legal authority, sampling,
data reduction, and recovery techniques.
The examination step of the investigation process is not being carried out. This step can include
traceability, validation techniques, filtering techniques, pattern matching, hidden data discovery,
and hidden data extraction.
The analysis step of the investigation process is not being carried out. This step can include
traceability, statistical analysis, protocol analysis, data mining, and timeline determination.
The proper steps in a forensic investigation are as follows:
Identification
Preservation
Collection
Examination
Analysis
Presentation
Decision

Snort is an intrusion detection system (IDS).

Nessus is a vulnerability assessment tool. Tripwire is a file integrity checker. Ethereal is a
network protocol analyzer.

When a system crashes, you should perform the following steps in this order:
1. Enter into single-user mode. (The computer may already be in this mode.)
2. Recover damaged file system files.
3. Identify the cause of the unexpected reboot, and repair the system as necessary.
4. Validate critical configuration and system files and system operations.
5. Reboot the system as normal.

A failure-resistant disk system (FRDS) protects against data loss due to disk drive failure. The
basic function of an FRDS is to protect file servers from data loss and a loss of availability due
to disk failure.

A failure-tolerant disk system (FTDS) protects against data loss due to external power failure
and loss of access to data due to power supply failure.

A disaster-tolerant disk system (DTDS) protects against loss of access to data due to the
destruction of an entire site or zone, such as in a disaster.

Online transaction processing (OLTP) is used in this environment. OLTP is a transactional
technique used when a fault-tolerant, clustered database exists. OLTP balances transactional
requests and distributes them among the different servers based on transaction load. OLTP
uses a two-phase commit to ensure that all the databases in the cluster contain the same data.
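The two-phase commit that OLTP relies on can be sketched as follows; the participant objects are illustrative stand-ins for the clustered databases:

```python
# Sketch of a two-phase commit: the coordinator commits only if every
# database in the cluster votes "prepared"; otherwise all roll back.
class Participant:
    def __init__(self, name, healthy=True):
        self.name, self.healthy, self.state = name, healthy, "idle"

    def prepare(self):
        self.state = "prepared" if self.healthy else "aborted"
        return self.healthy

def two_phase_commit(participants):
    # Phase 1: ask every participant to prepare (vote).
    if all(p.prepare() for p in participants):
        for p in participants:        # Phase 2: commit everywhere.
            p.state = "committed"
        return True
    for p in participants:            # Phase 2: roll back everywhere.
        p.state = "rolled_back"
    return False

print(two_phase_commit([Participant("db1"), Participant("db2")]))  # True
print(two_phase_commit([Participant("db3"),
                        Participant("db4", healthy=False)]))       # False
```

Either every database applies the transaction or none does, which is how the cluster keeps all copies of the data identical.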

Object Linking and Embedding Database (OLE DB) is a method of linking data from different
databases together. Open Database Connectivity (ODBC) is an application programming
interface (API) that can be configured to allow any application to query databases. Data
warehousing is a technique whereby data from several databases is combined into a large
database for retrieval and analysis.

A distributed denial-of-service (DDoS) attack occurred. A DDoS attack is an extension of the
denial-of-service (DoS) attack. In DDoS, the attacker uses multiple computers to target a critical
server and deny access to the legitimate users. The primary components of a DDoS attack are
server and deny access to the legitimate users. The primary components of a DDoS attack are
the client, the masters or handlers, the slaves, and the target system. The initial phase of the
DDoS attack involves compromising numerous computers and planting backdoors; these
compromised systems, controlled by master controllers, are referred to as slaves. Handlers are the systems that instruct the slaves to
launch an attack against a target host. Slaves are typically systems that have been
compromised through backdoors, such as Trojans, and are not aware of their participation in the
attack. Masters or handlers are systems on which the attacker has been able to gain
administrative access. The primary problem with DDoS is that it targets the availability of
critical resources rather than confidentiality or integrity. Therefore, it is difficult to address
such attacks by using security technologies, such as SSL and PKI.

Launching a traditional DoS attack might not disrupt a critical server operation. Launching a
DDoS attack can bring down the critical server because the server is being overwhelmed with
the processing of multiple requests until it ceases to be functional. Stacheldraht, trinoo, and
tribal flow network (TFN) are examples of DDoS tools.

A land attack involves sending a spoofed TCP SYN packet with the target host's IP address and
an open port as both the source and the destination to the target host on an open port. The land
attack causes the system to either freeze or crash because the computer continuously replies to
itself.
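The land-attack signature described above is easy to express as a check on a simplified, illustrative packet representation:

```python
# Sketch of the land-attack signature: a TCP SYN packet whose spoofed
# source address and port equal the destination address and port.
def is_land_attack(packet):
    return (packet["src_ip"] == packet["dst_ip"] and
            packet["src_port"] == packet["dst_port"])

land = {"src_ip": "10.0.0.5", "src_port": 80,
        "dst_ip": "10.0.0.5", "dst_port": 80}
normal = {"src_ip": "192.0.2.1", "src_port": 51515,
          "dst_ip": "10.0.0.5", "dst_port": 80}

print(is_land_attack(land))    # True: drop the packet
print(is_land_attack(normal))  # False: ordinary traffic
```

Filtering such packets at the perimeter prevents the target from entering the reply-to-itself loop that freezes or crashes it.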

A ping of death is another type of DoS attack that involves flooding target computers with
oversized packets that exceed the maximum acceptable size during reassembly, causing the
target computer to either freeze or crash. Other denial-of-service attacks, such as smurf and
fraggle, deny access to legitimate users by causing a system to either freeze or crash.

A DoS attack is an attack on a computer system or network that causes loss of service to users.
The DoS attack floods the target system with unwanted requests. It causes the loss of network
connectivity and services by consuming the bandwidth of the target network or overloading the
computational resources of the target system. The primary difference between DoS and DDoS
is that in DoS, a particular port or service is targeted by a single system and in DDoS, the same
process is accomplished by multiple computers.

There are other types of denial of service attacks such as buffer overflows, where a process
attempts to store more data in a buffer than the amount of memory allocated for it, causing the
system to freeze or crash.

Policy

Requirement 1 - SECURITY POLICY - There must be an explicit and well-defined security policy enforced
by the system. Given identified subjects and objects, there must be a set of rules that are used by the
system to determine whether a given subject can be permitted to gain access to a specific object.
Computer systems of interest must enforce a mandatory security policy that can effectively implement
access rules for handling sensitive (e.g., classified) information.[7] These rules include requirements such
as: No person lacking proper personnel security clearance shall obtain access to classified information. In
addition, discretionary security controls are required to ensure that only selected users or groups of users
may obtain access to data (e.g., based on a need-to-know).

Requirement 2 - MARKING - Access control labels must be associated with objects. In order to control
access to information stored in a computer, according to the rules of a mandatory security policy, it must
be possible to mark every object with a label that reliably identifies the object's sensitivity level (e.g.,
classification), and/or the modes of access accorded those subjects who may potentially access the
object.

Accountability

Requirement 3 - IDENTIFICATION - Individual subjects must be identified. Each access to information
must be mediated based on who is accessing the information and what classes of information they are
authorized to deal with. This identification and authorization information must be securely maintained by
the computer system and be associated with every active element that performs some security-relevant
action in the system.

Requirement 4 - ACCOUNTABILITY - Audit information must be selectively kept and protected so that
actions affecting security can be traced to the responsible party. A trusted system must be able to record
the occurrences of security-relevant events in an audit log. The capability to select the audit events to be
recorded is necessary to minimize the expense of auditing and to allow efficient analysis. Audit data must
be protected from modification and unauthorized destruction to permit detection and after-the-fact
investigations of security violations.

Assurance

Requirement 5 - ASSURANCE - The computer system must contain hardware/software
mechanisms that can be independently evaluated to provide sufficient assurance that the system
enforces requirements 1 through 4 above. In order to assure that the four requirements of Security Policy,
Marking, Identification, and Accountability are enforced by a computer system, there must be some
identified and unified collection of hardware and software controls that perform those functions. These
mechanisms are typically embedded in the operating system and are designed to carry out the assigned
tasks in a secure manner. The basis for trusting such system mechanisms in their operational setting
must be clearly documented such that it is possible to independently examine the evidence to evaluate
their sufficiency.

Requirement 6 - CONTINUOUS PROTECTION - The trusted mechanisms that enforce these basic
requirements must be continuously protected against tampering and/or unauthorized changes. No
computer system can be considered truly secure if the basic hardware and software mechanisms that
enforce the security policy are themselves subject to unauthorized modification or subversion. The
continuous protection requirement has direct implications throughout the computer system's life-cycle.

These fundamental requirements form the basis for the individual evaluation criteria applicable for each
evaluation division and class. The interested reader is referred to Section 5 of this document, "Control
Objectives for Trusted Computer Systems," for a more complete discussion and further amplification of
these fundamental requirements as they apply to general-purpose information processing systems and to
Section 7 for amplification of the relationship between Policy and these requirements.

The Control Objectives for Information and related Technology (CobiT) is a security framework
that acts as a model for IT governance and focuses more on operational goals.

The Committee of Sponsoring Organizations of the Treadway Commission (COSO) is a security
framework that acts as a model for corporate governance and focuses more on strategic goals.
The COSO framework is made up of the following components:

Control Environment
Risk Assessment
Control Activities
Information and Communication
Monitoring

The International Organization for Standardization (ISO) 17799 standard provides
recommendations on enterprise security. The domains covered in ISO 17799 are as follows:
Information security policy for the organization
Creation of information security infrastructure
Asset classification and control
Personnel security
Physical and environmental security
Communications and operations management
Access control
System development and maintenance
Business continuity management
Compliance

This standard complements security frameworks, such as CobiT and COSO, by describing how
to actually achieve their security goals through best practices.

British Standard 7799 (BS7799) is the standard on which ISO 17799 is based.

The European Union Principles on Privacy state that data gathered on private individuals
should only be used for the purpose for which it is collected.

According to the European Union (EU), the following principles pertain to the protection
of information regarding individuals:
 To reduce chances of data misuse, the reason for the data collection must be clearly
specified at the time of the data collection.
 The collected data must not be used for purposes other than the one initially specified.
 Only relevant information should be gathered and stored.
 The data is kept only for the time for which it is needed.
 The data should be accessible only to necessary individuals to ensure data protection.
 You should ensure that there is no unintentional leakage of data.

The Privacy Act of 1974 ensures that the following conditions are met:
 Only authorized persons should have access to personal information.
 The personal records should be up-to-date and accurate.
 The security and confidentiality of personal records should be ensured.

The Computer Security Act of 1987 ensures that the following conditions are met:
 The federal agency should develop a security policy for sensitive computer systems.
 Individual employees should be trained on methods to operate and manage the
computer systems.
 Individual employees should be trained on acceptable computer practices.

The Economic Espionage Act of 1996 provides a framework to deal with espionage attacks on
corporations. According to the Act, all the assets of the organization, whether substantial or not,
require protection. The Federal Bureau of Investigation (FBI) investigates cases related to
corporate espionage.

Under the Computer Security Act of 1987, all U.S. federal agencies must identify computers that
contain sensitive information and develop a security plan for them. Regular security-awareness
training about the government-acceptable practices is conducted for the individuals who operate
and manage these systems.

The Health Insurance Portability and Accountability Act (HIPAA) is also known as Kennedy-
Kassebaum Act. The primary emphasis of HIPAA is on administrative simplification through
improved efficiency in health care delivery. This simplification is achieved by standardizing
electronic data interchange and protection of confidentiality and security of health data. After
deployment, HIPAA preempts state laws, unless the state law is more stringent.

The U.S. Communications Assistance for Law Enforcement Act (CALEA) of 1994 preserves the
ability of law enforcement agencies to conduct electronic surveillance. This may require the

design modification of telecommunication equipment and services. In the United States, CALEA
describes how wireless and landline carriers should provide surveillance information to a law
enforcement monitoring center to enable the center to track activities.

The Economic Espionage Act of 1996 established guidelines as to who should investigate a
crime. U.S. law enforcement agencies, such as the Federal Bureau of Investigation (FBI),
investigate industrial and corporate espionage acts under this law. The law makes clear that
protected assets include non-tangible assets, such as intellectual property; theft is no longer
restricted to physical constraints. Investigating and prosecuting computer crimes is made more
difficult because evidence is mostly intangible.

Wiretapping is a passive attack. Wiretapping or eavesdropping is based on the fact that all
communication signals are vulnerable to passive listening. Wiretapping involves using either a
transmitting or a recording device to monitor the conversations between two individuals or
companies with or without the approval of either party. The following tools can be used to
intercept the communication:
Network sniffers
Telephone-tapping devices
Microphone receivers
Cellular scanners
Tape recorders
Many countries consider wiretapping illegal. Wiretapping is only acceptable if either
communicating party gives its consent for passive listening.

Wiretap laws do not prohibit law enforcement officers from conducting surveillance of suspects.
Law enforcement officers can obtain a court order that allows wiretapping of specific individuals
for relevant conversations only. The court order specifies the purpose of the wiretap and the
duration for which the conversation can be monitored, in conformity with the regulations of
the Privacy Act of 1974. The Privacy Act of 1974 stipulates that the disclosure of personal
information should be limited to authorized persons. Wiretapping plays an important role in
military and foreign intelligence.

Because the development of new technology usually outpaces the law, law enforcement uses
embezzlement, fraud, and wiretapping laws in many cases of computer crime.

The Trusted Computer System Evaluation Criteria (TCSEC)-defined levels and sublevels of
security are as follows:

 Division A: Verified protection, offering the highest level of security


An A1 rating implies that the security assurance, design, development, implementation,
evaluation, and documentation of a computer system are performed in a very formal and detailed
manner. An infrastructure containing A1-rated systems is the most secure environment and is
typically used to store highly confidential and sensitive information.

 Division B: Mandatory protection, based on the Bell-LaPadula security model and enforced by
the use of security labels.

A B1 rating refers to labeled security, where each object has a classification label and each
subject has a security clearance level. To access the contents of the object, the subject should
have an equal or higher level of security clearance than the object. A system compares the
security clearance level of a subject with the object's classification to allow or deny access to the
object. The B1 category offers process isolation, the use of device labels, the use of design
specification and verification, and mandatory access controls. B1 systems are used to handle
classified information.
A B2 rating refers to structured protection. A stringent authentication procedure should be used
in B2-rated systems to enable a subject to access objects by using the trusted path without any
backdoors. This level is the lowest level to implement trusted facility management; levels B3
and A1 implement it also. Additional requirements of a B2 rating include the separation of
operator and administrator duties, sensitivity labels, and covert storage channel analysis. A B2
system is used in environments that contain highly sensitive information. Therefore, a B2
system should be resistant to penetration attempts.
A B3 rating refers to security domains. B3 systems should be able to perform a trusted
recovery. A system evaluated against a B3 rating should have the role of the security
administrator fully defined. A B3 system should provide the monitoring and auditing functionality.
A B3 system is used in environments that contain highly sensitive information and should be
resistant to penetration attempts. Another feature of B3 rating is covert timing channel analysis.

 Division C: Discretionary protection, based on discretionary access of subjects, objects,
individuals, and groups.
A C1 rating refers to discretionary security protection. A C1 system separates users and data
and controls access through a clear identification and authentication process. A C1 rating
system is suitable for environments in which users process
the information at the same sensitivity level. A C1 rating system is appropriate for environments
with low security concerns.
A C2 rating refers to controlled access protection. The authentication and auditing functionality
in systems should be enabled for the rating process to occur. A system with a C2 rating
provides resource protection and does not allow object reuse. Object reuse implies that an
object should not have remnant data that can be used by a subject later. A C2 system provides
granular access control and establishes a level of accountability when subjects access objects.
A system with C2 rating is suitable for a commercial environment.

 Division D: Minimal protection, a rating offered to systems that fail to meet the evaluation
criteria of the higher divisions.
A higher rating implies a higher degree of trust and assurance. For example, a B2 rating
provides more assurance than a C2 rating. A higher rating includes the requirements of a lower
rating. For example, a B2 rating includes the features and specifications of a C2 rating.
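The clearance-versus-classification comparison that B1 labeled security enforces can be sketched as a simple dominance check. This is an illustrative sketch of the Bell-LaPadula "no read up" rule; the level names and their ordering are assumptions for the example, not part of TCSEC itself:

```python
# Illustrative ordering of sensitivity levels (not defined by TCSEC)
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def may_read(subject_clearance: str, object_classification: str) -> bool:
    """Simple-security ('no read up') check: the subject's clearance must
    dominate (be equal to or higher than) the object's classification."""
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

print(may_read("secret", "confidential"))   # clearance dominates: allowed
print(may_read("confidential", "secret"))   # read up: denied
```

A real B1 system would also attach labels to devices and enforce the comparison in the reference monitor rather than in application code.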

Secure Electronic Transaction (SET) uses digital signatures and digital certificates to conduct and verify
an electronic transaction.

SET uses Data Encryption Standard (DES) to encrypt online transactions. It does not use 3DES for
symmetric key exchange.

SET works at the Application layer and not at the Network layer of the Open Systems Interconnection
(OSI) model.

SET does not automatically transmit a user's credit card information to a CA as soon as an online
purchase is made. Digital certificates and digital signatures from the user, the bank, and the merchant are
involved in the transaction.

SET is an open protocol standard, proposed by Visa and MasterCard to transmit credit card information
over the Internet, that uses cryptography to preserve the secrecy of the electronic transactions. The
following entities are involved in a SET transaction:

 The issuer, namely the bank or financial institution that provides a credit card to the individual
 The cardholder, the authorized individual who uses the credit card
 The merchant, who provides the goods to the cardholder
 The acquirer, namely the bank or financial institution that processes payment cards

The credit card holder, the merchant, and the issuer bank ensure the confidentiality and privacy of a SET
transaction through the use of digital certificates and digital signatures. The following steps briefly
describe SET functioning:

1. The issuer provides the electronic wallet software that stores the credit card information for online
transactions. The electronic wallet generates the public and private keys.
2. The merchants receive a digital certificate along with two public keys, one for the bank and the
other for the merchant.
3. The merchant's certificate validates itself to the user during an online transaction.
4. The payment order information, which is specific to the user's order, is encrypted by using the
merchant's public key. The payment details are encrypted by using the bank's public key.
5. The merchant verifies the digital signature on the digital certificate that is used by the individual
for transaction.
6. The order message that travels from the merchant to the bank includes the bank's public key, the
customer's payment information, and the merchant's digital certificate.
7. After receiving a digitally signed verification from the bank, the merchant fills the order for the
customer.

In short, an online transaction involving SET is facilitated through two pairs of
asymmetric keys and two digital certificates. SET also involves two digital
certificates for the payment gateway and two for the acquirer.
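The property that lets the merchant verify a transaction without seeing the payment details (and the bank without seeing the order details) rests on SET's dual-signature construction. A minimal sketch of the digest, using SHA-256 from Python's standard library in place of SET's actual hash and omitting the RSA signing step the cardholder would perform:

```python
import hashlib

def sha(data: bytes) -> bytes:
    # SET specified SHA-1; SHA-256 is used here purely for illustration
    return hashlib.sha256(data).digest()

def dual_signature_digest(order_info: bytes, payment_info: bytes) -> bytes:
    """Hash each half, concatenate the hashes, hash again. In real SET the
    cardholder then signs this digest with a private RSA key (omitted)."""
    return sha(sha(order_info) + sha(payment_info))

oi = b"order: 3 widgets"          # order information (merchant may see)
pi = b"payment: card ending 1111" # payment information (bank may see)
digest = dual_signature_digest(oi, pi)

# The merchant, given only OI and the hash of PI, can recompute the digest
# without ever seeing the payment details.
merchant_view = sha(sha(oi) + sha(pi))
print(digest == merchant_view)
```

The same check works symmetrically for the acquiring bank, which receives PI and only the hash of OI.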

The four categories of computer crime are as follows:

 computer-assisted crime - This category of crime is one in which a computer is used as a tool to
carry out a crime.
 computer-targeted crime - This category of crime is one in which a computer is the victim of the
crime.

 computer-incidental crime - This category of crime is one in which a computer is involved in the
crime incidentally. The computer is not the target of the crime and is not the main tool used to
carry out the crime.
 computer-prevalence crime - This category of crime is one that results because computers are so
prevalent in today's world. Examples include violating commercial software copyrights and
software piracy.

The organizational risk assessment is an input to the Define the ISCM strategy step. It is also an
input to the Establish the ISCM program step. NIST SP 800-137 guides the development of
information security continuous monitoring (ISCM) for federal information systems and
organizations. It defines the following steps to establish, implement, and maintain ISCM:
1. Define an ISCM strategy.
2. Establish an ISCM program.
3. Implement an ISCM program.
4. Analyze data, and report findings.
5. Respond to findings.
6. Review and update the ISCM strategy and program.

 Defining an ISCM strategy involves determining your organization's official ISCM strategy.
Establishing an ISCM program determines the metrics, monitoring, and assessment
frequencies in addition to the ISCM architecture. Analyzing the data collected and reporting
findings determines any issues and implements the appropriate response. Responding to
the findings involves implementing new controls that address any findings you have.
Reviewing and updating the monitoring program involves ensuring that the program is still
relevant and allows you to make any necessary changes to the program.

A penetration test should include the following steps:


1. Discovery - Obtain the footprint and information about the target.
2. Enumeration - Perform port scans and resource identification.
3. Vulnerability mapping - Identify vulnerabilities in systems and resources.
4. Exploitation - Attempt to gain unauthorized access by exploiting the vulnerabilities.
5. Report - Report the results to management with suggested countermeasures.
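The enumeration step can be sketched as a basic TCP connect scan using only the standard library. The host and port list below are illustrative, and scans of this kind should only ever be run against systems you are authorized to test:

```python
import socket

def scan_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of ports accepting TCP connections on host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Scan only hosts you are authorized to test.
print(scan_ports("127.0.0.1", [22, 80, 443]))
```

Real penetration tests use far stealthier techniques (SYN scans, timing randomization), but the connect scan shows the core idea of the enumeration phase.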

The following steps are required to classify information:


1. Specify the classification criteria.
2. Classify the data.
3. Specify the controls.
4. Publicize awareness of the classification controls.

The following situations are appropriate for external distribution of classified information:


 Compliance with a court order
 Upon senior-level approval after a confidentiality agreement
 Contract procurement agreements for a government project

There are two data classification systems: commercial and military. Businesses usually care more about
data integrity and availability, whereas the military is more concerned with confidentiality.

The types of commercial data classification are as follows:

 Confidential: Data classified as confidential is meant for use within the organization, irrespective of
whether it is commercial or military. This is the only common category between the commercial and
military classification systems. Confidential information requires authorization for each access and is
available to those employees in the organization whose work relates to the subject. Confidential data
is exempted from the Freedom of Information Act (FOIA). Examples include trade secrets,
programming codes, or health care information.
 Private: Private information is personal to the employees of a company so it must be protected as
well. An example is the salary of employees.
 Sensitive: Sensitive information requires special protection from unauthorized modification or
deletion. In other words, integrity and confidentiality need to be ensured. Examples include financial
information, profits, or project details.
 Public: Disclosure of public information would not cause any problem to the company. An example is
new project announcements.

The types of military data classification are as follows: top-secret, secret, confidential, sensitive but
unclassified, and unclassified.

Techniques used during software development to track progress

 Gantt charts are bar charts that represent the progress of tasks and activities over a period of time.
Gantt charts depict the timing and the interdependencies between the tasks. Gantt charts are
considered a project management tool to represent the scheduling of tasks and activities of a project,
the different phases of the project, and their respective progress. Gantt charts serve as an industry
standard.
 A PERT chart is a project management model invented by the United States Department of Defense.
PERT is a method used for analyzing the tasks involved in completing a given project and the time
required to complete each task. PERT can also be used to determine the minimum time required to
complete the total project.
 Unit testing refers to the process in which the software code is debugged by a developer before it is
submitted to the quality assurance team for further testing.
 The Delphi technique is used to ensure that each member in a group decision-making process
provides an honest opinion on the subject matter in question. Group members are asked to provide
their opinion on a piece of paper in confidence. All these papers are collected, and a final decision is
taken based on the majority. Delphi technique is generally used either during the risk assessment
process or to estimate the cost of a software development project.
 A prototype is a model or a blueprint of the product and is developed according to the requirements of
customers. There is no process known as the Prototype Evaluation Review Technique charts.
 Cost-estimating techniques include the Delphi technique, expert judgment, and function points.

The Capability Maturity Model (CMM) describes the principles, procedures, and practices that should be
followed by an organization in a software development life cycle. The capability maturity model defines
guidelines and best practices to implement a standardized approach for developing applications and
software programs. It is based on the premise that the quality of a software product is a direct function of
the quality of its associated software development and maintenance processes. This model allows a
software development team to follow standard and controlled procedures, ensuring better quality and
reducing the effort and expense of a software development life cycle. The CMM builds a framework for
the analysis of gaps and enables a software development organization to constantly improve their
processes. A software process is a set of activities, methods, and practices that are used to develop and
maintain software and associated products. Software process capability is a means of predicting the
outcome of the next software project conducted by an organization. Based on the level of formalization of
the life cycle process, the five maturity levels defined by the CMM are as follows:

 Initial: The development procedures are not organized, and the quality of the product is not assured
at this level.
 Repeatable: The development process involves formal management control, proper change control,
and quality assurance implemented while developing applications.
 Defined: Formal procedures for software development are defined and implemented at this level. This
category also provides the ability to improve the process.
 Managed: This procedure involves gathering data and performing an analysis. Formal procedures are
established, and a qualitative analysis is conducted to analyze gaps by using the metrics at this level.
 Optimized: The organization implements process improvement plans and lays out procedures and
budgets.

Other software development models are as follows:

 The Cleanroom model follows well-defined formal procedures for development and testing of
software. The Cleanroom model calls for strict testing procedures and is often used for critical
applications that should be certified.
 The Waterfall model is based on proper reviews and the documenting of reviews at each phase of the
software development cycle. This model divides the software development cycle into phases. Proper
review and documentation must be completed before moving on to the next phase.

The Spiral model is based on analyzing the risk, building prototypes, and simulating the application tasks
during the various phases of development cycle. The Spiral model is typically a metamodel that
incorporates a number of software development models. For example, the basic concept of the Spiral
model is based on the Waterfall model. The Spiral model depicts a spiral that incorporates various
phases of software development. In the Spiral model, the radial dimension represents cumulative cost.

SDLC

 The project initiation phase of the system development life cycle (SDLC) involves consideration of
security requirements, such as encryption. Security requirements are considered a part of software
risk analysis during the project initiation phase of the SDLC. The SDLC identifies the relevant threats
and vulnerabilities based on the environment in which the product will perform data processing, the
sensitivity of the data required, and the countermeasures that should be a part of the product. It is
important that the SDLC methodology be adequate to meet the requirements of the business and the
users.

 The system development phase of an application development life cycle includes coding and scripting
of software applications. The system development stage ensures that the program instructions are
written according to the defined security and functionality requirements of the product. The
programmers build security mechanisms, such as audit trails and access control, into the software
according to the predefined security assessments and the requirements of the application.

 The system design specification phase focuses on providing details on which kind of security
mechanism will be a part of the software product. The system design specification phase also
includes conducting a detailed design review and developing a plan for validation, verification, and
testing. The organization developing the application will review the product specifications together
with the customer to ensure that the security requirements are clearly stated and understood and that
the functionality features are embedded in the product as discussed earlier. The involvement of
security analysts at this phase ensures maximum benefit to the organization. This also enables you to
understand the security requirements and features of the product and to report existing loopholes.

 The implementation stage of an application development life cycle involves use of an application on
production systems in the organization. Implementation implies use of the software in the company to
meet business requirements. This is the stage where software can be analyzed to see if it meets the
business requirements. Implementation stage also involves certification and accreditation process.
Certification and accreditation are the processes implemented during the implementation of the
product. Certification is the process of technically evaluating and reviewing a product to ensure that it
meets the security requirements. Accreditation is a process that involves a formal acceptance of the
product and its responsibility by the management. In the National Information Assurance Certification
and Accreditation Process (NIACAP), accreditation evaluates an application or system that is
distributed to a number of different locations. NIACAP establishes the minimum national standards for
certifying and accrediting national security systems. The four phases of NIACAP include definition,
verification, validation, and post accreditation. The three types of NIACAP accreditation are site, type,
and system.

 The operations and maintenance phase of a SDLC identifies and addresses problems related to
providing support to the customer after the implementation of the product, patching up vulnerabilities
and resolving bugs, and authenticating users and processes to ensure appropriate access control
decisions. The operations and maintenance phase of software development lifecycle involves use of
an operations manual, which includes the method of operation of the application and the steps
required for maintenance. The maintenance phase controls consist of request control, change control,
and release control.

 Disposal of software is the final stage of a software development life cycle. Disposal implies that the
software would no longer be used for business requirements due to availability of an upgraded
version or release of a new application that meets the business requirements more efficiently through
new features and services. It is important that critical applications be disposed of in a secure manner
to maintain data confidentiality, integrity, and availability for continuous business operations.

The simplistic model of software life cycle development assumes that each step can be completed
and finalized without any effect from the later stages that might require rework. In a system life cycle,
information security controls should be part of the feasibility phase.

Low coupling describes a module's ability to perform its job without using other modules.

High coupling would imply that a module must interact with other modules to perform its job.

Cohesion reflects how varied the tasks carried out by a module are. High cohesion means a
module performs a single, well-defined task, so it is easier to update and changes do not affect
other modules. Low cohesion means a module carries out many tasks, making it harder to
maintain and reuse.
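The coupling distinction can be illustrated with a toy example. The class and function names below are hypothetical, chosen only to contrast a hard dependency with a parameterized one:

```python
# Two ways to compute an order total. The tightly coupled version reaches
# into another module's internals; the loosely coupled one depends only on
# the values passed in, so it can be tested and reused independently.

class TaxTable:                  # stands in for a separate "module"
    rate = 0.08

def total_tightly_coupled(subtotal: float) -> float:
    # Hard dependency: any change to TaxTable can break this function
    return subtotal * (1 + TaxTable.rate)

def total_loosely_coupled(subtotal: float, tax_rate: float) -> float:
    # No outside dependencies: everything it needs arrives as a parameter
    return subtotal * (1 + tax_rate)

print(round(total_loosely_coupled(100.0, 0.08), 2))
```

The loosely coupled version performs its job "without using other modules," which is exactly the property the text describes as low coupling.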

Software Development Security


The forward-chaining technique is an expert system processing technique that uses if-then-else rules
to obtain more data than is currently available. An expert system consists of a knowledge base and
adaptive algorithms that are used to solve complex problems and to provide flexibility in decision-
making approaches. An expert system exhibits reasoning similar to that of humans knowledgeable in
a particular field to solve a problem in that field.

The Spiral model is a software model that is based on analyzing the risk and building the
prototypes and the simulation during the various phases of the development cycle.

The Waterfall model is a software model that is based on proper reviews and on documenting the
reviews at each phase of the software development cycle. This model divides the software
development cycle into phases. Proper review and documentation must be completed before moving
on to the next phase. The modified Waterfall model was reinterpreted to have phases end at project
milestones. Incremental development is a refinement to the basic Waterfall Model that states that
software should be developed in increments of functional capability.

Backward chaining works backwards by analyzing the list of the goals identified and verifying the
availability of data to reach a conclusion on any goal. Backward chaining starts with the goals and
looks for the data that justifies the goal by applying if-then-else rules.

Expert systems or decision support systems use artificial intelligence to extract new information from
a set of information. An expert system operates in two modes: forward chaining and backward
chaining. Backward chaining is the process of beginning with a possible solution and using the
knowledge in the knowledge base to justify the solution based on the raw input data. Forward
chaining is the reasoning approach that can be used when there are a small number of solutions
relative to the number of inputs. The input data is used to reason forward to prove that one of the
possible solutions in a small solution set is the correct one.
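A forward-chaining pass can be sketched as a loop that fires if-then rules whose premises are all satisfied by known facts, adding their conclusions until nothing new is derived. The rules below are illustrative, not drawn from any real expert system:

```python
# Each rule: (set of premise facts, conclusion fact)
RULES = [
    ({"failed_logins_high", "off_hours"}, "possible_intrusion"),
    ({"possible_intrusion"}, "alert_admin"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Data-driven reasoning: fire rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"failed_logins_high", "off_hours"}))
```

Backward chaining would instead start from a goal such as "alert_admin" and work backwards through the rules, checking whether the supporting facts are available.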

A knowledge-based system (KBS), or expert system, includes the knowledge base, the inference
engine, and the interface between the user and the system. A knowledge engineer and a domain
expert develop a KBS or expert system. Expert systems are used to automate security log review
to detect intrusions.

A fuzzy expert system is an expert system that uses fuzzy membership functions and rules, instead of
Boolean logic, to reason about data. Thus, fuzzy variables can have an approximate range of values
instead of the binary True or False used in conventional expert systems. An example of this is an
expert system that has rules of the form "If w is low and x is high then y is intermediate," where w and
x are input variables and y is the output variable.
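The difference from Boolean logic is that membership is a matter of degree. A minimal sketch of a fuzzy membership function; the triangular shape and the numeric ranges are illustrative assumptions:

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership: rises from a to a peak at b, falls to c.
    Returns a degree of membership in [0.0, 1.0] rather than True/False."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# "low" as a fuzzy set over an illustrative 0-100 scale
low_membership = triangular(30.0, 0.0, 25.0, 50.0)
print(low_membership)  # 0.8 -- 30 is mostly, but not fully, "low"
```

A fuzzy rule such as "if w is low and x is high then y is intermediate" would combine degrees like this one (typically with min for "and") instead of evaluating a strict Boolean condition.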

In a distributed computing environment, an agent is a program that performs services in one
environment on behalf of a principal in another environment.

A globally unique identifier (GUID) and a universal unique identifier (UUID) uniquely identify users,
resources, and components within a Distributed Component Object Model (DCOM) or Distributed
Computing Environment (DCE) environment, respectively.
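Python's standard library exposes UUID generation directly, which makes the idea easy to demonstrate. The DNS name below is hypothetical:

```python
import uuid

# A version-4 UUID is random; a version-5 UUID is derived deterministically
# from a namespace and a name, so the same inputs always yield the same ID.
random_id = uuid.uuid4()
stable_id = uuid.uuid5(uuid.NAMESPACE_DNS, "printer.example.com")

print(random_id)          # different on every run
print(stable_id)          # identical on every run for this name
print(stable_id.version)  # 5
```

The deterministic variant is what lets distributed components agree on an identifier for the same resource without coordinating in advance.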

Simple Object Access Protocol (SOAP) is an XML-based protocol that encodes messages in a Web
service setup.

Object request brokers (ORBs) are the middleware that establishes the relationship between objects
in a client/server environment. A standard that uses ORB to implement exchanges among objects in
a heterogeneous, distributed environment is Common Object Request Broker Architecture (CORBA).
A distributed object model that has similarities to CORBA is DCOM.

The Object Request Architecture (ORA) is a high-level framework for a distributed environment. It
consists of ORBs, object services, application objects, and common facilities.

The following are characteristics of a distributed data processing (DDP) approach:

 It consists of multiple processing locations that can provide alternatives for computing in the
event that a site becomes inoperative.
 Distances from a user to a processing resource are transparent to the user.
 Data stored at multiple, geographically separate locations is easily available to the user.

Cryptographic application programming interface (CAPI) is an application programming interface that
provides encryption.

 Online transaction processing (OLTP) is a transactional technique used when a fault-tolerant,
clustered database exists. OLTP balances transactional requests and distributes them among
the different servers based on transaction load. OLTP uses a two-phase commit to ensure that
all the databases in the cluster contain the same data.
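The two-phase commit that keeps the cluster consistent can be sketched as a toy coordinator. Replica behavior is simulated here; a real OLTP cluster adds write-ahead logging and crash recovery, which are omitted:

```python
class Replica:
    """Simulated database node; 'healthy' controls its prepare vote."""
    def __init__(self, healthy: bool = True):
        self.healthy = healthy
        self.state = "idle"

    def prepare(self) -> bool:
        # Phase 1: the node votes yes only if it can guarantee the write
        self.state = "prepared" if self.healthy else "aborted"
        return self.healthy

    def finish(self, commit: bool) -> None:
        # Phase 2: every node applies the coordinator's decision
        self.state = "committed" if commit else "aborted"

def two_phase_commit(replicas: list[Replica]) -> bool:
    votes = [r.prepare() for r in replicas]  # phase 1: voting
    commit = all(votes)                      # commit only on unanimous yes
    for r in replicas:                       # phase 2: completion
        r.finish(commit)
    return commit

print(two_phase_commit([Replica(), Replica()]))                # commits
print(two_phase_commit([Replica(), Replica(healthy=False)]))   # aborts
```

The unanimity requirement is what guarantees that all the databases in the cluster end up containing the same data.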

 Object Linking and Embedding Database (OLE DB) is a method of linking data from different
databases together. Open Database Connectivity (ODBC) is an application programming
interface (API) that can be configured to allow any application to query databases. Data
warehousing is a technique whereby data from several databases is combined into a large
database for retrieval and analysis.

Preventative access control A preventative access control is deployed to stop unwanted or unauthorized
activity from occurring. Examples of preventative access controls include fences, locks, biometrics,
mantraps, lighting, alarm systems, separation of duties, job rotation, data classification, penetration
testing, access control methods, encryption, auditing, presence of security cameras or closed circuit
television (CCTV), smart cards, callback, security policies, security awareness training, and antivirus
software.

Deterrent access control A deterrent access control is deployed to discourage the violation of security
policies. A deterrent control picks up where prevention leaves off. The deterrent doesn't stop with trying to
prevent an action; instead, it goes further to exact consequences in the event of an attempted or
successful violation. Examples of deterrent access controls include locks, fences, security badges,
security guards, mantraps, security cameras, trespass or intrusion alarms, separation of duties, work task
procedures, awareness training, encryption, auditing, and firewalls.

Detective access control A detective access control is deployed to discover unwanted or unauthorized
activity. Often detective controls are after-the-fact controls rather than real-time controls. Examples of
detective access controls include security guards, guard dogs, motion detectors, recording and reviewing
of events seen by security cameras or CCTV, job rotation, mandatory vacations, audit trails, intrusion
detection systems, violation reports, honeypots, supervision and reviews of users, and incident
investigations.

Corrective access control A corrective access control is deployed to restore systems to normal after an
unwanted or unauthorized activity has occurred. Usually corrective controls have only a minimal capability
to respond to access violations. Examples of corrective access controls include intrusion detection
systems, antivirus solutions, alarms, mantraps, business continuity planning, and security policies.

Recovery access control A recovery access control is deployed to repair or restore resources, functions,
and capabilities after a violation of security policies. Recovery controls have more advanced or complex
capability to respond to access violations than a corrective access control. For example, a recovery
access control can repair damage as well as stop further damage. Examples of recovery access controls
include backups and restores, fault tolerant drive systems, server clustering, antivirus software, and
database shadowing.

Compensation access control A compensation access control is deployed to provide various options to
other existing controls to aid in the enforcement and support of a security policy. Examples of
compensation access controls include security policy, personnel supervision, monitoring, and work task
procedures.

Compensation controls can also be considered controls used in place of more desirable controls that
cannot be deployed. For example, if a guard dog cannot be used because of the proximity of a
residential area, a motion detector with a spotlight and a barking-sound playback device can be used
instead.

Directive access control A directive access control is deployed to direct, confine, or control the actions of
subjects to force or encourage compliance with security policies. Examples of directive access controls
include security guards, guard dogs, security policy, posted notifications, escape route exit signs,
monitoring, supervising, work task procedures, and awareness training.

Access controls can be further categorized by how they are implemented. In this case, the categories are
administrative, logical/technical, or physical.

Administrative access controls Administrative access controls are the policies and procedures defined by
an organization's security policy to implement and enforce overall access control. Administrative access
controls focus on two areas: personnel and business practices (e.g., people and policies). Examples of
administrative access controls include policies, procedures, hiring practices, background checks, data
classification, security training, vacation history, reviews, work supervision, personnel controls, and
testing.

Logical/technical access controls Logical access controls and technical access controls are the hardware
or software mechanisms used to manage access to resources and systems and provide protection for
those resources and systems. Examples of logical or technical access controls include encryption, smart
cards, passwords, biometrics, constrained interfaces, access control lists (ACLs), protocols, firewalls,
routers, intrusion detection systems, and clipping levels.
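One of the listed technical controls, the access control list, can be sketched as a simple lookup. The subjects, rights, and resource names below are hypothetical:

```python
# Toy ACL: each resource maps a right to the set of subjects granted it
ACL = {
    "payroll.db": {
        "read": {"alice", "hr_group"},
        "write": {"hr_group"},
    },
}

def is_allowed(subject: str, right: str, resource: str) -> bool:
    """Deny by default: access is granted only if an explicit entry exists."""
    return subject in ACL.get(resource, {}).get(right, set())

print(is_allowed("alice", "read", "payroll.db"))   # granted
print(is_allowed("alice", "write", "payroll.db"))  # denied
```

The deny-by-default behavior (unknown resources and rights yield an empty set) is the usual fail-safe posture for a technical access control.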

Physical access controls Physical access controls are the physical barriers deployed to prevent direct
contact with systems or portions of a facility. Examples of physical access controls include guards,
fences, motion detectors, locked doors, sealed windows, lights, cable protections, laptop locks, swipe
cards, guard dogs, video cameras, mantraps, and alarms.

Risk Analysis:

HIDS and NIDS can be one of the following:

Chapter 6 Physical and Environmental Security

Chapter 9 Business Continuity

The business continuity plan should be maintained for several reasons including:
 Infrastructure changes
 Environment changes
 Organizational changes
 Hardware, software, and application changes
 Personnel changes

The steps in the business continuity planning process are as follows:


 Develop the business continuity planning policy statement.
 Conduct the business impact analysis (BIA).
 Identify preventative controls.
 Develop the recovery strategies.
 Develop the contingency plans.
 Test the plan, and train the users.
 Maintain the plan.

Chapter 11 Application Security

DOS and Smurf attacks:

Fraggle attacks:

SYN attacks:

Teardrop:

Distributed Denial of Service (DDoS):

