
UNIT I

1. What do you mean by quality of information?
Ans: During the past few centuries, great advances have been made in the human capability to record, store, and reproduce information, beginning with the invention of printing from movable type in 1450, followed by the development of photography and telephony, and culminating in the mass production of electronic digital computers in the latter half of the 20th century. New technologies for preserving and transmitting aural and visual information have further enhanced information processing. All these advances have made information a new basic resource, ranking alongside material and energy resources in importance. There are, in fact, people who believe that control of information stores and processing facilities will become more important than natural resources as a source of social and economic power.

In today's scenario, there is no lack of data; people are actually suffering from data overload. Although we are swamped by printout pollution, memo mania, misinformation, and information overload, most of us still lack quality information. By quality information we mean information that is accurate, timely, and relevant. Accuracy, timeliness, and relevancy are the three key attributes of information.

Accuracy: Accuracy means more than "one plus one equals two". It means that the information is free from mistakes and errors, and that it is clear and accurately reflects the meaning of the data on which it is based. It conveys an accurate

picture to the recipient and may require a graphical presentation rather than a table full of numbers. Accuracy also means that the information is free from bias; manipulated or distorted information is worse than no information.

Timeliness: Timeliness means getting the information to the recipients within the needed time frame, so that the recipients have the information when they need it.

Relevancy: Relevancy means the usefulness of a piece of information for a particular person. It is only on very rare occasions that information answers specifically for the recipient the what, why, where, when, who, and how.

2. What is Information Processing? Explain.
Ans: Information processing is the acquisition, storage, organization, retrieval, display, and dissemination of information. A few decades ago, before computers were invented, data processing was done manually. Manual processing was adequate only when the amount of data to be processed was small. As companies grew, the amount of data to be processed increased, and with it the number of information processing staff. Later, various means of mechanization were introduced, and to make efficient use of them the information processing work was split up into batches. For example, several hundred transactions might be grouped into a batch; one function would be carried out on all transactions as a batch, then the next function, and so on. When punched-card accounting was introduced, it became economical to have very large batches: many trays of cards could be fed through one machine before the set-up of the machine was changed for the next function. Similarly, with the use of magnetic tape on computers, large tape files would be processed with one program before the file was

Question Bank


stored and made ready for the next operation. In working this way, the flexibility of the old manual method was lost. Using a database is like having a superman who is incredibly fast and brilliant and keeps data for many applications. He organizes his data so that minimum writing is necessary, and he can search through the database very quickly to answer queries that may come along. People often need information which spans departments; for example, one may need to know the personnel implications of marketing decisions. In a system where each department has its own batch processing operations, the computer is of little value in answering such questions. With a database approach, however, searching, collecting, correlating, and collating data becomes easier. The structure of the stored data is agreed upon centrally so that interdepartmental usage is possible.

3. Information is the most important asset of an enterprise. Discuss.
Ans: "An enterprise is a group of people with a common goal, which has certain resources at its disposal to achieve this goal. The enterprise acts as a single entity." - Mondy, Sharplin, and Premeaux

In the traditional approach, the organization is divided into different units based on the functions they perform. Each of the departments is compartmentalized and has its own goals and objectives, which from its point of view are in line with the organization's objectives. Each of these departments functions in isolation and has its own system of data collection and analysis. So the information generated by the various departments is, in most cases, available only to the top management and not to the other departments. The result is that, instead of taking the organization towards the common goal, the various departments end up pulling it in different directions. This is because one department does not know what the other does.
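The contrast between isolated departmental files and a centrally shared store can be sketched in a few lines of Python. All department names, fields, and figures below are invented for illustration; the point is only that a cross-departmental question is easy to answer from one shared store and hard to answer from separate silos.

```python
# Siloed approach: each department keeps its own private copy of the data.
sales_records = {"E01": {"name": "Asha", "region": "South"}}
payroll_records = {"E01": {"name": "Asha", "salary": 52000}}
# Neither file alone can answer "what is the personnel cost of a region?"

# Shared approach: one centrally agreed structure that every department reads.
central_store = {
    "E01": {"name": "Asha", "region": "South", "salary": 52000},
}

def personnel_cost_of_region(region):
    """Answer a cross-departmental query directly from the shared store."""
    return sum(rec["salary"] for rec in central_store.values()
               if rec["region"] == region)

print(personnel_cost_of_region("South"))  # 52000
```

With the shared store, the query spans what were previously two departmental files without any ad hoc matching of records.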

But in the enterprise way, the entire organization is considered as a system and all the departments are its subsystems. The information about all aspects of the organization is stored centrally and is made available to all departments. This transparency and information access ensure that the departments will no longer be working in isolation, pursuing their own independent goals. Each subsystem knows what the others are doing, why they are doing it, and what should be done to move the company towards the common goal. Thus information integration creates an environment that is conducive to the smooth and seamless flow of information across departmental barriers, automating business processes and functions and thus helping the organization to work and move forward as a single entity.

4. Discuss about Integrated Management Information.
Ans: According to Kroenke, an information system is an open, purposive system that produces information using the input-process-output cycle. The minimal information system consists of three elements: people, procedures, and data. People follow procedures to manipulate data to produce information. Today an information system is an organized combination of people, hardware, software, communications networks, and data resources. According to Cats-Baril and Thompson, a modern information system is an integrated computer-user system for providing undistorted information to support the operations, management, and decision-making functions of an organization.

Management Information Systems (MIS), also called information-reporting systems, were the original type of management support systems, and they are still a major category of information systems. MIS produce information products that support many of the day-to-day decision-making needs of management. Reports, charts, graphs, displays


and responses produced by such systems provide information that managers have specified in advance. Such predefined information satisfies the needs of managers at the operational levels of the organization who are faced with structured decision-making. But the problem with these information systems is that they operate at a departmental level and give only information that has been predefined. So each department will have its own database and information systems, and these systems will produce various reports of varying detail specified when the systems were built.

5. What are the operations performed on Files? Explain.
Ans: There are mainly two kinds of file operations: retrieval and update operations. Retrieval operations do not change the contents of the file; they only locate records in the file matching certain criteria. Update operations, on the other hand, change the file by modifying records, deleting records, and inserting new records. In both update and retrieval operations, one or more records have to be located for retrieval, modification, or deletion based on a selection condition or search criteria. The selected records will be the records that satisfy the search criteria; there may be one or many such records. The actual operations for locating, modifying, deleting, and inserting records vary from system to system, but there are several representative operations that are used in most systems. Some of them are given below:

Find (Locate): The goal of this operation is to locate the record or records that satisfy the search criteria. The block that contains the records is transferred to main memory and the records are searched. The first record that matches the search criteria is located, and the search continues for further matching records until the end of the file is reached.

Read: Read is sometimes referred to as Get. In this operation, the contents of the records are copied from the

memory to a program variable or work area. In some cases this command also advances the pointer to the next record in the result set.

Modify: Also known as Update. This command modifies the field values of the current record and then writes the modified record back to the disk.

Insert: Inserts a new record into the file. The insertion operation involves many processes, such as finding a place to insert, writing the new record to the disk, updating the index entries, and updating the record headers, depending on the file type and the insertion mechanism.

Delete: Deletes the current record and updates the file on the disk to reflect the deletion.

6. Discuss about different types of file organizations.
Ans: Sequential File Organization: In a sequential file, records are stored one after another in an ascending or descending order determined by the key field of the records. In a payroll example, the records of the employee file may be organized sequentially by employee code. Sequentially organized files processed by computer systems are normally stored on media such as magnetic tape, punched paper tape, punched cards, or magnetic disks. To access these records the computer must read the file in sequence from the beginning. The first record is read and processed first, then the second record in the file sequence, and so on. The computer locates each record in sequence and compares its key field to the one that is needed; the retrieval search ends only when the desired key matches the key field of the currently read record. On average, about half the file has to be searched to retrieve the desired record from a sequential file.

Direct File Organization: In a direct file, unlike the sequential form of organization, the data may be organized in such a way that the records are scattered throughout the disk in what may appear to be a random order. But it is this form of organization that supports direct access, in which records can


be accessed nearly instantaneously and in any order. Once accessed, a record can be read or updated, and when this process is completed, the system is free to respond to another request. When using direct access, an application such as an on-line transaction-processing system for inventory control can be designed so that centralized data are not only instantly accessible but also always up-to-date. Processing data using direct access is referred to as direct file processing.

Indexed File Organization: Another common procedure for locating a record in a file is for the system to store records randomly throughout the disk, but to provide one or more indexes to locate a particular record. A primary index associates a primary key with the physical location in which a record is stored. When a user requests a record, the disk operating system first loads the primary index into the computer's memory and then searches the index sequentially for the key. When it finds the entry for the key, it reads the address at which the record is stored. The disk system then proceeds to this address and reads the record contents. Even though using an index is a two-step process, it is certainly more efficient than a sequential search in which the user would start looking from the beginning of the records and continue until finding the required record. The advantage of a primary index is that each entry contains only two pieces of information: the key and the record's address. So the search of a small index followed by direct access to the record is much faster than a sequential search of the data file itself.

7. Discuss about Storage Media.
Ans: The choice of storage media and the file organization are very closely related. The media on which the data is going to be stored should be decided after considering the file organization that is going to be used and the nature of the application for which the data will be used.

Magnetic Tape: Magnetic tape can be used as a primary storage medium and for back-up purposes. Several

factors determine which media should be used for secondary storage.

Tape as Primary Storage: It is less expensive to store a given volume of data on tape than on disk. A disk system can store only a limited amount of data on-line, but there is no limit to the size of a tape library. Tape storage is frequently used to store historical data no longer needed on-line.

Tape as Backup: Because the data stored on a disk are represented only by magnetized spots, it is possible for them to be lost. Therefore, tape is often used to store a backup copy of the data. There are two types of backup tape drives: a) streamers and b) start-stop systems. Streamers rapidly make a mirror image of a disk; they copy an entire disk and cannot copy just a single file. Start-stop tape drives, on the other hand, can copy one or a few disk files, but when used to store the contents of an entire disk, start-stop drives take longer than streamers do. If only a few files need to be routinely backed up, a start-stop system will be more efficient; but if the entire disk must be backed up once or twice a day, then streamers are the better choice.

Magnetic Disk: Magnetic disk, as we have seen, provides nearly instantaneous access to records. Even though disks can be costly to install and maintain, and the data stored on them can sometimes be lost, this is usually overshadowed by their speed. As a result, disks dominate most modern information processing systems, where the competitive environment demands accessibility to data in the process of producing and delivering goods and services to a firm's customers.

8. Discuss about Record Types.
Ans: A collection of field names and their corresponding data types constitutes a record type definition. The data type associated with each field specifies the type of values that the field can take. The common data types include character, numeric, Boolean, date, time, and so on. Files could be made


of records, all of which are of the same length (fixed-length records), or they can contain records of different sizes (variable-length records).

Fixed-Length Records: Consider the following BOOK record format definition, in COBOL notation:

    01 BOOK-RECORD.
       05 ISBN        PIC X(10).
       05 TITLE       PIC X(60).
       05 AUTHOR-NAME PIC X(30).

The above record contains 3 fields and has a total length of 100 (10+60+30) bytes. So the first record will be stored in the first 100 bytes, the second in the next 100 bytes, and so on. But this arrangement has some drawbacks. It is difficult to delete a record from this structure: the space occupied by the deleted record must be filled with some other record, or there must be a way to mark the record as deleted. If the space is left unused, or if the record is not removed but only marked as deleted, the space is wasted.

Variable-Length Records: Files could also be made of records of different sizes, called variable-length records. A file may contain variable-length records in any of the following situations: records having variable-length fields; records having repeating fields; records having optional fields; or a file containing records of different record types.

9. Discuss about problems with File System Data Management. (OR) What are the disadvantages of the Traditional File Processing System? Explain.
Ans: In the early age of computers there were no database systems to store and manage users' data. The file processing

system method of storing and managing data on computer media was an improvement over the manual system, and it was used widely before the invention of the database approach. Traditional file processing data management has the following limitations.

Structural and Data Dependence: The application programs that access a data file are dependent on its structure. Whenever any change is made to a data file, all the application programs that access that file must also be modified. Changes in the characteristics of the data, such as changing the data type of a field or changing the size of a field, cause changes in all the programs that access the file.

Field Definitions and Naming Conventions: While designing a data file, it is important to design every field name of a record. These field names describe the characteristics of the actual data values. For example, the field name Cust_ID describes the value of a customer identification number. Ex: Emp_ID: Employee ID; Cust_City: Customer City; Stud_Course: Student Course.

Data Redundancy: File system data management forces the storage of the same basic data in different locations. As application programs are often developed independently, there is a high chance of duplicated data. This unnecessarily repeated data is called data redundancy. The data stored in different locations needs to be updated consistently; otherwise it leads to the problem of data inconsistency.

Data Inconsistency: Data inconsistency occurs when redundant data is stored as different, conflicting versions of the same data in different places. If we reduce the number of repetitions of a data value, we improve data consistency.
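A toy Python sketch of the last two limitations (the customer data and field names here are invented): two independently developed applications each keep their own copy of the same fact, and an update to one copy silently makes the copies disagree.

```python
# Redundancy: the same customer's city is stored by two separate applications.
billing_file = {"C100": {"name": "Ravi", "city": "Pune"}}
shipping_file = {"C100": {"name": "Ravi", "city": "Pune"}}

# One application updates its copy; the other is never told about the change.
billing_file["C100"]["city"] = "Mumbai"

def is_consistent(cust_id):
    """True only if both copies of the customer's city still agree."""
    return billing_file[cust_id]["city"] == shipping_file[cust_id]["city"]

print(is_consistent("C100"))  # False: two conflicting versions of one fact
```

With a single shared store there would be only one copy of the city to update, so this class of inconsistency could not arise.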


Excessive Program Maintenance: The factors discussed above create heavy program maintenance. Structural data dependence leads to frequent changes in application programs, as every data change requires a change in the application programs that use the data.

Limited Data Sharing: In the traditional file processing approach, each application has its own private files, so users have little opportunity to share data outside their own applications.

10. Explain the role of the DBMS in the Database Approach.
Ans: Databases are used to store, manipulate, and retrieve data in every type of organization, including business, healthcare, education, government, libraries, etc. Database technology is routinely used by individuals on personal computers, by workgroups accessing databases on network servers, and by employees using enterprise-wide distributed applications. Many organizations today are using or building separate databases, called data warehouses, for the decision-making that supports managers. Databases are fundamental to most information systems.

A database is a repository of data. We can define a database as a well-organized collection of logically related data. A database may be of any size and complexity. Because the database is well organized, the data are structured so as to be easily stored, manipulated, and retrieved by users. Because the data are related, they describe a domain of interest to a group of users, and the users can use the data to answer questions concerning that domain. Databases are used to store objects such as documents, photographs, images, audio, video, textual data, etc.

The Database Management System (DBMS) is a set of programs that manages the database structure and controls access to the data stored in the database. The DBMS serves as an interface between the user or application program and the actual database.
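As a rough illustration of the DBMS acting as an interface, the sketch below uses Python's built-in sqlite3 module as a stand-in DBMS (the table and data are invented for the example). The application only issues requests; how the data is physically stored and located is hidden by the DBMS.

```python
import sqlite3

# The DBMS manages the actual storage; here it is an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (emp_id TEXT PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO employee VALUES (?, ?)", ("E01", "Asha"))

# The application states WHAT it wants; the DBMS decides HOW to find it
# in the underlying files, hiding that complexity from the program.
row = conn.execute(
    "SELECT name FROM employee WHERE emp_id = ?", ("E01",)
).fetchone()
print(row[0])  # Asha
conn.close()
```

The same request could be issued from a Java or C++ program through an equivalent DBMS interface; the application never manipulates the database files directly.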

The structure of the database is ultimately stored as a collection of files, and the data is accessed from those files through the DBMS software. The DBMS receives all requests made by end users and processes them to fulfill their requirements. While processing the requests, the DBMS hides all the underlying complexity from the end users and application programs. This transparency encourages wide use of the DBMS. A commercial DBMS can interact with application programs written in programming languages such as Java, Visual Basic, C++, C, etc.

11. Explain the advantages of the Database Approach.
Ans: A database is a shared, integrated computer structure that stores a collection of user data and metadata. User data includes raw facts that could be recorded on computer media.


Metadata includes data about data, through which the user data is integrated and managed. The database approach provides several advantages compared to file processing systems.

Improved Data Sharing: The database approach gives end users better access to more data and better-managed data. This access makes it possible for end users to respond quickly to changes in their environment.

Improved Data Security: When multiple users have access to data, securing the data becomes more critical, and considerable time, effort, and money are spent on it. A DBMS provides better facilities for enforcing data privacy and security policies.

Better Data Integration: As the DBMS provides the advantage of data sharing, users have wider access to well-managed data that gives an integrated view of the organization's operations.

Minimized Data Inconsistency: Data inconsistency exists when different copies of the same data appear in different places. As we eliminate or reduce the redundancy of data, we improve its consistency. The probability of data inconsistency is greatly reduced in a properly designed database.

Improved Data Access: The DBMS makes it possible to produce quick answers to the queries posed by end users.

Improved Decision Making: Better-managed data and improved data access make it possible to generate better-quality information, on which better decisions are based.

Increased End-User Productivity: The data, combined with tools that transform it into usable information, empowers end users to make quick, informed decisions that can make the difference between success and failure.

Program-Data Independence: The separation of data descriptions (metadata) from the application programs is called data independence. Data descriptions are stored in a central location called the repository. This results in the


flexibility to change the data without changing the application programs that process the data.

Minimal Data Redundancy: Data files are integrated into a single, logical structure, and each primary fact is recorded in only one place in the database. The database approach allows the designer to carefully control the type and amount of redundancy.

Improved Data Quality: The database approach provides a number of tools to improve data quality. Database designers can specify integrity constraints; a constraint is a rule that cannot be violated by database users.

Reduced Program Maintenance: The data stored in the database need not be changed frequently, and because in a database approach the data is more independent of the application programs, changes can be accommodated very easily.

12. Explain Different Ranges of Database Applications?
Ans: The range of database applications can be divided into five categories: 1) personal databases, 2) workgroup databases, 3) department databases, 4) enterprise databases, and 5) Internet, intranet, and extranet databases. The following table shows, for each type of database, the typical number of users, the database architecture, and the range of database size.

    Type of Database   Number of Users   Architecture                                     Size of Database
    Personal           1                 Desktop/laptop computer, PDA                     Megabytes
    Workgroup          5-25              Client/server (two-tier)                         Megabytes-Gigabytes
    Department         25-100            Client/server (three-tier)                       Gigabytes
    Enterprise         >100              Client/server (distributed or parallel server)   Gigabytes-Terabytes
    Internet           >1000             Web server and application server                Megabytes-Gigabytes

Personal Databases: Personal databases are mainly designed to support a single user. Personal databases have long resided on personal computers, including laptops. More recently, the introduction of personal digital assistants has incorporated personal databases into handheld devices that function not only as computing devices but also as cellular phones, fax senders, and web browsers. Personal databases are widely used because they can often improve personal productivity. However, they entail a risk: the data cannot easily be shared with other users.

Workgroup Databases: A workgroup is a relatively small team of people who collaborate on the same project or application, or on a group of similar projects or applications. These persons might be engaged in a construction project or in developing a new computer application. A workgroup database is designed to support the collaborative efforts of such a team. Typically, one or more persons work on a given object or component at a given time. The group needs a database that will track each item as it is developed and allow the data to be easily shared by the team members.

Personal Database : Personal Databases are mainly designed to support only one user. Personal databases have long resided on personal computers, including laptops, Recently the introduction of personal digital assistants has incorporated personal databases into handled devices that not only function as computing devices but also as cellular phones, fax senders, and web browsers. Personal databases are widely used because they can often improve personal productivity. However, they entail a risk: the data cannot easily be shared with other users. Workgroup Databases : A workgroup is a relatively small team of people who collaborate on the same project or application or on a group of similar projects or applications. These persons might be engaged with a construction project or with developing a new computer application. A workgroup database is designed to support the collaborative efforts of such a team. Typically one or more persons work on a given object or component at a given time. The group needs a database that will track each item as it is developed and allow the data to be easily shared by the team members.
[Figure: A workgroup database — developer, project manager, and librarian workstations connected by a local area network to a central database server holding the workgroup database.]

In this arrangement, each member of the workgroup has a desktop computer, and the computers are linked by means of a local area network. The database is stored on a central device called the database server, which is also connected to the network. Thus each


member of the workgroup has access to the shared data. The figure above shows this method of sharing data in the workgroup database.

Department Databases: Department databases are designed to support the various functions and activities of a department. A department is a functional unit within an organization; typical examples of departments are personnel, marketing, manufacturing, and accounting. A department is generally larger than a workgroup (typically between 25 and 100 persons) and is responsible for a more diverse range of functions.

Enterprise Databases: An enterprise database is one whose scope is the entire organization, or at least many different departments. Such databases are intended to support organization-wide operations and decision making. Two kinds of enterprise databases are available: 1) enterprise resource planning (ERP) systems and 2) data warehousing implementations.

1. Enterprise Resource Planning (ERP) Systems: A business management system that integrates all functions of the enterprise, such as manufacturing, sales, finance, marketing, inventory, accounting, and human resources. ERP systems are software applications that provide the data necessary for the enterprise to examine and manage its activities.

2. Data Warehouse: An integrated decision support database whose content is derived from the various operational databases.

Internet, Intranet and Extranet Databases: Internet databases are designed to support the worldwide network that connects users on multiple platforms easily. An Internet database may support more than 1000 users throughout the globe. An intranet uses Internet protocols to establish access to company data and information that is limited to the organization, whereas an extranet uses Internet

protocols to establish limited access to company data and information by the company's customers and suppliers.
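The classification in the table above can be expressed as a small, purely illustrative helper. The thresholds below are the indicative figures from the table, not hard rules, and the function name is invented for this sketch.

```python
def database_category(num_users):
    """Suggest a database category from the expected number of users,
    using the indicative ranges in the classification table."""
    if num_users <= 1:
        return "Personal"
    if num_users <= 25:
        return "Workgroup"
    if num_users <= 100:
        return "Department"
    if num_users <= 1000:
        return "Enterprise"
    return "Internet/Intranet/Extranet"

print(database_category(40))  # Department
```

In practice the choice also depends on the architecture and data volume columns of the table, not on user count alone.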

13. Explain Various Components of the Database Environment?
Ans: The major components of a typical database environment and their relationships are as shown below.

1. Computer-Aided Software Engineering (CASE) Tools: Automated tools used to design databases and application programs.
2. Repository: A centralized knowledge base for all data definitions, data relationships, screen and report formats, and other system components. A repository contains an extended set of metadata important for managing databases as well as other components of an information system.
3. Database Management System (DBMS): A commercial software system used to define, create, maintain, and provide controlled access to the database.
4. Database: An organized collection of logically related data, usually designed to meet the information needs of multiple users in an organization. It is important to distinguish between the database and the repository: the repository contains definitions of data, whereas the database contains occurrences of data.
5. Application Programs: Programs used to create and maintain the database and provide information to users.
6. User Interface: Languages, menus, and other facilities by which users interact with the various system components, such as CASE tools, application programs, the DBMS, and the repository.
7. Data Administrators: Persons responsible for the overall information resources of an organization. Data administrators use CASE tools to improve the productivity of database planning and design.


[Figure: Components of the database environment — data administrators, system developers, and end users interact through CASE tools, the user interface, and application programs with the repository, the DBMS, and the database.]

8. System Developers: Persons such as systems analysts and programmers who design new application programs. System developers often use CASE tools for system requirements analysis and program design.
9. End Users: Persons throughout the organization who add, delete, and modify data in the database and who request or receive information from it. All user interactions with the database must be routed through the DBMS.

14. Write about the Software Development Life Cycle (SDLC)?
Ans: There are certain phases that software goes through before, during, and after the development process. The various phases or steps in the Software Development Life Cycle (SDLC) are:
Project Startup
Requirements Analysis
System Analysis
System Design
Coding & Unit Testing

System Testing
Acceptance Testing
Project Wind-up
Project Maintenance

Project Startup: In this phase the project team is formed and the project leader is identified. The project is organized and its modules are identified. The senior members of the project team sit together and prepare the project plan so as to ensure completion of the project within the cost, time, and resource constraints, based on the details available. The main tasks in this phase are: studying the project proposal, contract document, estimation work papers, and other available documents; obtaining clarification on matters such as scope, contractual obligations, and client participation in the project, if required; defining the operational process for the project; deciding on the format and standards for documenting the project plan; and documenting the project plan as per the structure and format decided upon.

Requirements Analysis: The output of this phase is the documentation of the existing system and the Requirements Definition Document (RDD). The requirements analysis and the preparation of the RDD are usually done by the systems analysts in collaboration with the users.

System Analysis: During this phase the project plan and the RDD are refined and updated based on the project's progress and changes in its scope. The output of the system analysis phase is the prototype, the SAD, the usability plan, the updated project plan, and the RDD.

System Design:


All documents produced in this phase are reviewed, and if changes are required to any of the items produced during the earlier phases, those items are updated. The major task in this phase is writing the specification for each program in the system. Writing program specifications is essential for projects involving development in procedural languages. For each program and reusable routine identified in the system, the program logic is determined, the structure chart is prepared, the inputs, outputs, error messages and help messages are finalized, and the program specification is prepared.
Coding & Unit Testing : The output of this phase is the unit-tested programs, containing the source code, the test results, the associated documentation, etc.
System Testing : This is the phase of the software development life cycle where system testing is carried out. The system test is done using the STP, STS and system test data. Many companies do alpha and/or beta testing as well. Alpha testing is done when the system or product has many new, previously untested features, whereas beta testing is required when the development team decides that some level of customer evaluation is needed prior to the final release of the product. Once the project is successfully tested, audits are performed to ensure that the final product is complete and satisfies the specifications.
Acceptance Testing : This phase is carried out only if the system is developed for a particular client. In this phase the project team prepares for the acceptance test by ensuring the availability and completeness of all work items needed for the acceptance test, and by loading the acceptance test data. The project team will assist the client in acceptance testing, recording the errors found and fixing them.
Project Wind-up :

In this phase the project wind-up activities are completed and all the resources acquired for the project are released. The main activities in this phase are:
- Carry out project-end appraisals.
- Release project team members, hardware and software resources.
- Return client-supplied products, if any.
- Ensure availability of project documentation copies in the library.
Project Maintenance : Once the system is developed and tested, it is released to the users; from this point onwards the maintenance phase starts. Once people start using the system, many errors that escaped testing will be found. The users might also ask for new features and enhancements. It is the responsibility of the maintenance team to attend to these requests and to fix the bugs that are found. If the project has not followed any standards and does not have any documentation, then the job of maintaining the system can turn into one of the most difficult assignments that software professionals can have.

15. Write about Database Development under the SDLC? OR Explain briefly the Database Development Life Cycle?
Ans: The database-related activities can be grouped into phases that form the Database Development Life Cycle (DDLC). The different phases of the DDLC are:
- Requirements Analysis
- Database Design
- Evaluation and Selection
- Logical Database Design
- Physical Database Design
- Implementation
- Data Loading
- Testing and Performance Tuning
- Operation


- Maintenance
Requirements Analysis : The first step in implementing a database system is to find out what is required: what kind of database is needed for the organization, what volume of data must be handled on a day-to-day basis, how much data is to be stored in the master files, and so on. The main goals of this phase of the DDLC are:
- Study the existing system.
- Define the problems and constraints of the database environment.
- Define the design objectives.
- Define standards and guidelines.
Database Design : In this phase the database designers will decide on the database model that is ideally suited for the organization's needs. The database designers will study the documents prepared by the analysts in the requirements analysis phase and then go about developing a system that satisfies the requirements. First a conceptual design of the database is created. In the conceptual design stage, data modeling is used to create an abstract database structure that represents the real-world scenario. At this stage neither the hardware nor the database model to be used has been decided; the conceptual design is hardware- and software-independent.
Evaluation and Selection : Once the data model is created, tested and verified, the next step is to evaluate the different database management systems and select the one that is ideally suited for the needs of the organization. A very important point to remember here is that the end users' representatives should be made part of the group that evaluates and selects the database system for the organization. The main factors that influence the selection of the DBMS are:
- Cost of the system
- Features and tools

- Customer support and training
- Underlying data model
- Portability
- Hardware requirements
Logical Database Design : Once the different database management systems are evaluated and the one best suited for the organization is selected, the next step in the DDLC is logical database design. Logical design depends on the choice of database model. In the logical design stage, the conceptual design is translated into the internal model for the selected DBMS. This includes mapping all objects in the model to the specific constructs used by the selected database software. For example, for an RDBMS, the logical design includes the design of tables, indexes, views, transactions, access privileges, etc.
Physical Database Design : Physical database design is the process of selecting the data storage and data access characteristics of the database. The storage characteristics depend on the type of devices supported by the hardware, the types of data access methods supported by the system, and the DBMS. Physical design translates the logical design into a hardware-dependent one.
Implementation : In most databases, a new database implementation requires the creation of special storage-related constructs to house the end-user tables. These constructs usually include storage groups, tablespaces, data files, tables and so on.
Data Loading : After creating the database, the data must be loaded into it. If the data to be loaded is currently stored in a different system, it needs to be converted and then migrated to the new database. Data conversion and migration tools and utilities are available with almost all database management systems in the marketplace.
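As an illustration of how a logical design is mapped to DDL constructs and how data loading then follows, the sketch below uses Python's sqlite3 module, with SQLite standing in for the selected DBMS; the table, index and view names are hypothetical examples, not taken from the text:

```python
import sqlite3

# In-memory database stands in for the selected DBMS.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Logical design mapped to DDL: a table, an index and a view
# (hypothetical EMPLOYEE schema for illustration only).
cur.execute("""
    CREATE TABLE employee (
        empno   INTEGER PRIMARY KEY,
        ename   TEXT NOT NULL,
        dname   TEXT NOT NULL,
        salary  INTEGER CHECK (salary > 0)
    )
""")
cur.execute("CREATE INDEX idx_employee_dname ON employee(dname)")
cur.execute("""
    CREATE VIEW high_paid AS
    SELECT ename, salary FROM employee WHERE salary >= 25000
""")

# Data loading phase: migrate rows from a legacy source (here, a list).
legacy_rows = [(1001, "PULLAMMA .K", "MARKETING", 20000),
               (1005, "PULLAIAH .M", "ACCOUNTING", 30000)]
cur.executemany("INSERT INTO employee VALUES (?, ?, ?, ?)", legacy_rows)
conn.commit()

print(cur.execute("SELECT ename FROM high_paid").fetchall())
```

Access privileges and tablespaces are DBMS-specific and are omitted here; the point is only that each logical-design object becomes a concrete DDL construct.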


Testing and Performance Tuning : Once the data is loaded, the database is tested and fine-tuned for performance, integrity, concurrent access and security constraints. This occurs in parallel with the testing and performance tuning of the application programs.
Operation : During the operational phase, the database is accessed by the users and application programs; new data is added, existing data is modified and obsolete data is deleted. The database administrators perform administrative tasks such as performance tuning, storage space creation, access control, database backup and so on. It is during the operational phase that the database delivers its usefulness as a critical tool in management decision-making and helps in the smooth and efficient functioning of the organization.
Maintenance : Once the database is released into production, it will not remain as it was designed. New business requirements, the need for new information, the acquisition of new data and similar factors will make it necessary to make modifications and enhancements to the existing design. So the database administrators will receive requests for more storage space, changes in the database design, addition of tables, addition of new users, removal of users who have left the organization, changes in the access privileges of users and so on. The main tasks in this phase are:
- Database backup and recovery
- Performance tuning
- Database design modifications
- Database access management
- Database audits
- Usage monitoring
- Hardware maintenance
- DBMS software upgradation

UNIT II
1. Explain Data Modeling Rules of the Organization?
Ans: Business rules are derived from policies, procedures, events, functions and other business objects, and state constraints on the organization. Business rules are important in data modeling because they govern how data are handled and stored.
Overview of Business Rules : A business rule is a statement that defines or constrains some aspect of the business. It is intended to assert business structure or to control or influence the behavior of the business. For example:
- A student may register for a section of a course only if he or she has successfully completed the prerequisites for that course.
- A preferred customer qualifies for a 10% discount, unless he has an overdue account balance.
Most organizations today are guided by thousands of combinations of such rules. In the aggregate, these rules influence behavior and determine how the organization responds to its environment.
The Business Rules Paradigm : The concept of business rules has been used in information systems for some time. However, it has been more common to use the related term "integrity constraint" when referring to such rules.
Scope of Business Rules : The scope here is limited to business rules that impact only an organization's databases. Most organizations have a host of rules and/or policies that fall outside this definition, and some business rules cannot be represented in common data modeling notation.
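A rule like the preferred-customer discount above must ultimately be enforced somewhere, either as a database constraint or in application code. The function below is a minimal hypothetical sketch of that rule; the function name and signature are illustrative assumptions, not from the text:

```python
def discount_rate(is_preferred: bool, overdue_balance: float) -> float:
    """Business rule: a preferred customer qualifies for a 10% discount,
    unless he or she has an overdue account balance."""
    if is_preferred and overdue_balance == 0:
        return 0.10
    return 0.0

print(discount_rate(True, 0.0))    # preferred, nothing overdue: 10% discount
print(discount_rate(True, 150.0))  # preferred but overdue: no discount
```

Writing the rule as one small declarative function mirrors the "atomic" and "precise" characteristics of good business rules discussed below.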

Good Business Rules : The following are the characteristics of a good business rule:
- Declarative : A business rule is a statement of policy; it does not describe a process or implementation, but describes what a process validates.
- Precise : Within the relevant organization, the rule must have only one interpretation among all interested people, and its meaning must be clear.
- Atomic : A business rule makes one statement; no part of the rule can stand on its own as a rule.
- Consistent : A business rule must be internally consistent and must be consistent with other rules.
- Expressible : A business rule must be able to be stated in natural language, but it will be stated in a structured natural language so that there is no misinterpretation.
- Distinct : Business rules are not redundant, but a business rule may refer to other rules.
- Business-oriented : A business rule is stated in terms business people can understand, since it is a statement of business policy.
Gathering Business Rules : Business rules appear in descriptions of business functions, events, policies, units, stakeholders and other objects. These descriptions can be found in interview notes from individual and group information systems requirements collection sessions, organizational documents and other sources. Rules are identified by asking questions about the who, what, when, where, why and how of the organization.
Data Names and Definitions : Fundamental to understanding and modeling data is the naming and defining of data: data objects must be named and

defined before they can be used unambiguously in a model of organizational data.
Data Name : A data name is a name given to data objects such as entities, relationships and attributes. The following are general guidelines for naming any data object:
i) Names should relate to business, not technical, characteristics.
ii) Names should be meaningful.
iii) Names should be unique.
iv) Names should be readable.
v) Names should be taken from the approved list of words.
vi) Names should be repeatable, in the sense that they should be consistent.
Data Definition : A definition is an explanation of a term or a fact. A term is a word or phrase that has a specific meaning for the business. A fact is an association between two or more terms.

2. Write about the E-R Model? Explain the symbols used in E-R Diagrams?
Ans: An E-R model is a detailed logical representation of the data for an organization. The E-R model is expressed in terms of entities in the business environment, the relationships among those entities, and the attributes of both the entities and their relationships. An E-R model is normally expressed as an entity-relationship diagram, which is a graphical representation of an E-R model; the overall logical structure of a database can be expressed graphically by an E-R diagram. E-R modeling uses the following symbols:
Rectangle : The rectangle symbol is used to represent a strong entity.


Double Rectangle : The double rectangle symbol is used to represent a weak entity.
Oval : The oval symbol is used to represent an attribute of a relation in E-R diagrams.
Double-lined Oval : The double-lined oval symbol is used to represent a multivalued attribute of a relation in E-R diagrams.
Dashed Oval : The dashed oval symbol is used to represent a derived attribute of a relation in E-R diagrams.
Diamond : The diamond symbol is used to represent relationships among entity sets.
Line : The line is used to link attributes to entity sets, and entity sets to relationships.

3. Explain the Different Components of an E-R Model?
Ans: The E-R model consists of the following major components:
- Entity
- Attributes
- Relationships
- Key attributes
Entity : An entity is a class of persons, places, objects, events or concepts about which we need to collect and store data. Each entity is distinguishable from the other entities. Categories of entities include:
- Persons : employee, customer, student, supplier, etc.
- Objects : book, machine, vehicle, etc.
- Events : sale, reservation, registration, etc.
Attributes : Each entity can have a number of characteristics; the characteristics of an entity are called attributes. For example, an entity such as Employee can have characteristics like Employee_Code, EName, Address, Phone, etc. An attribute is a descriptive property or characteristic of an entity. Some attributes can be logically grouped into super attributes called compound attributes; a compound attribute is one that consists of other attributes. For example, an employee's name is a compound attribute consisting of First_Name, Middle_Name and Last_Name.
Relationships : An association of several entities in an Entity-Relationship model is called a relationship. When relationships such as "married to", "parent of", "child of", "member of" and "works for" are added, we know that we are talking about a group of related people, a family or a corporation. Three types of relationships exist among entities:
- Unary relationship
- Binary relationship
- Ternary relationship
Key attributes : A key attribute is an attribute that uniquely identifies an entity set. For example, Employee_Code can be the key attribute for the entity set Employee, because it uniquely identifies each employee entity.


4. What is a Relationship? Explain the different types of relationships.
Ans: An association of several entities in an Entity-Relationship model is called a relationship. When relationships such as "married to", "parent of", "child of", "member of" and "works for" are added, we know that we are talking about a group of related people, a family or a corporation. Three types of relationships exist among entities:
- Unary relationship
- Binary relationship
- Ternary relationship
Unary Relationship : A unary relationship is a relationship between the instances of a single entity type (Fig. Unary Relationship).
Binary Relationship : A binary relationship is a relationship between the instances of two entity types and is the most common type of relationship encountered in data modeling (Fig. Binary Relationship). The first example (one-to-one) indicates that an employee is assigned one parking place, and each parking place is assigned to one employee. The second (one-to-many) indicates that a product line may contain several products, and each product belongs to only one product line. The third (many-to-many) shows that a student may register for more than one course, and each course may have many student registrants.
Ternary Relationship : A ternary relationship is a simultaneous relationship among the instances of three entity types. A typical business situation that leads to a ternary relationship is shown in Fig. Ternary Relationship. In this example, vendors can supply various parts to warehouses. The relationship Supplies is used to record the specific parts that are supplied by a given vendor to a particular warehouse. Thus there are three entity types: VENDOR, PART and WAREHOUSE. There are two attributes on the relationship Supplies: Shipping_Mode and Unit_Cost.

5. What are Cardinality Constraints? Explain.
Ans: Suppose there are two entity types, A and B, that are connected by a relationship. A cardinality constraint specifies the number of instances of entity B that can be associated with each instance of entity A. For example, consider a video store that rents videotapes of movies. Since the store may


stock more than one videotape for each movie, this is intuitively a one-to-many relationship.
Minimum Cardinality : The minimum cardinality of a relationship is the minimum number of instances of entity B that may be associated with each instance of entity A. In the video example, the minimum number of videotapes for a movie is zero; when the minimum number of participants is zero, we say that entity type B is an optional participant in the relationship.
Maximum Cardinality : The maximum cardinality of a relationship is the maximum number of instances of entity B that may be associated with each instance of entity A. In the video example, the maximum cardinality for the VIDEOTAPE entity type is many, that is, an unspecified number greater than one.

Fig. Relationship with cardinality constraints

A relationship is, of course, bidirectional, so there is also cardinality notation next to the MOVIE entity. Notice that the minimum and maximum are both one; this is called a mandatory one cardinality. In other words, each videotape must be a copy of exactly one movie. In general, participation in a relationship may be optional or mandatory for the entities involved: if the minimum cardinality is zero, participation is optional; if the minimum cardinality is one, participation is mandatory.

6. What is EER? Explain Supertypes & Subtypes?
Ans: The term Enhanced Entity-Relationship (EER) model is used to identify the model that has resulted from extending the original E-R model with new modeling constructs. The most important new modeling construct incorporated in the EER model is the supertype/subtype relationship. This facility allows us to model a general entity type, called the supertype, and then subdivide it into several specialized entity types called subtypes. For example, the entity type CAR can be modeled as a supertype, with subtypes SEDAN, SPORTS CAR, COUPE and so on. Each subtype inherits attributes from its supertype and, in addition, may have special attributes of its own. Adding new notation for modeling supertype/subtype relationships has greatly improved the flexibility of the basic E-R model. Enhanced E-R diagrams are used to capture important business rules such as constraints in supertype/subtype relationships. However, most organizations use a multitude of business rules to guide behavior, and many of these rules cannot be expressed with basic E-R diagrams, or even with enhanced E-R diagrams.
Supertypes and Subtypes : An entity type is a collection of entities that share common properties or characteristics. While the entity instances that compose an entity type are similar, we do not expect them to be identical. One of the major challenges in data modeling is to recognize and clearly represent entities that are almost the same, that is, entity types that share common properties but also have one or more distinct properties that are of interest to the organization. For this reason, the E-R model has been extended to include supertype/subtype relationships. A subtype is a subgrouping of the entities in an entity type that is meaningful to the organization. For example, STUDENT is an entity type in a university; two subtypes of STUDENT are UNDERGRADUATE STUDENT and GRADUATE STUDENT. In this example, we refer to STUDENT as the supertype. A supertype is a generic entity type that has a relationship with one or more subtypes.
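The videotape/movie cardinalities discussed above can be checked mechanically in code. The sketch below is illustrative only; the movie titles and identifiers are invented for the example:

```python
# Illustrative sketch: each videotape must be a copy of exactly one movie
# (mandatory one cardinality), while a movie may have zero or more tapes
# (optional participation, maximum cardinality "many").
movies = {"M1": "Casablanca", "M2": "Vertigo", "M3": "Rear Window"}
tapes = {"T1": "M1", "T2": "M1", "T3": "M2"}   # tape id -> movie id

# Mandatory-one check: every tape references exactly one existing movie.
valid = all(movie_id in movies for movie_id in tapes.values())

# Optional participation: "Rear Window" has zero tapes, which is allowed.
copies = {mid: sum(1 for m in tapes.values() if m == mid) for mid in movies}

print(valid)
print(copies)
```

Representing the tape-to-movie link as a single mapping makes the mandatory-one constraint structural: a tape cannot reference two movies, and a dangling reference is caught by the check.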


The basic notation that we use for supertype/subtype relationships is shown in the figure below. The supertype is connected by a line to a circle, which in turn is connected by a line to each subtype that has been defined. The U-shaped symbol on each line connecting a subtype to the circle indicates that the subtype is a subset of the supertype; it also indicates the direction of the subtype/supertype relationship.

Fig. Basic notation for supertype/subtype relationships

Attributes that are shared by all entities are associated with the supertype. Attributes that are unique to a particular subtype are associated with that subtype. Other components will be added to this notation to provide additional meaning in supertype/subtype relationships.

7. Explain the processes of Generalization and Specialization, and Supertype/Subtype Hierarchies?
Ans: In developing real-world data models, there are two processes, specialization and generalization, that serve as mental models in developing supertype/subtype relationships.
Generalization : A unique aspect of human intelligence is the ability and propensity to classify objects and experiences and to generalize their properties. In data modeling, generalization is the process of defining a more general entity type from a set of more specialized entity types. Thus generalization is a bottom-up process.

Fig. Three entity types : CAR, TRUCK and MOTORCYCLE

An example of generalization is shown in the figure. Three entity types have been defined: CAR, TRUCK and MOTORCYCLE. At this stage, the data modeler intends to represent these separately on an E-R diagram. However, on closer examination we see that the three entity types have a number of attributes in common: Vehicle_ID, Vehicle_Name, Price and Engine_Number. This fact suggests that each of the three entity types is really a version of a more general entity type.


This more general entity type, together with the resulting supertype/subtype relationships, is shown in the figure below. The entity CAR has the specific attribute No_of_Passengers, while TRUCK has two specific attributes: Capacity and Cab_Type. Thus generalization has allowed us to group entity types along with their common attributes, and at the same time preserve specific attributes that are peculiar to each subtype.

Fig. Generalization to VEHICLE supertype

Specialization : Specialization is the process of defining one or more subtypes of a supertype and forming supertype/subtype relationships. Each subtype is formed based on some distinguishing characteristic, such as attributes or relationships specific to the subtype. An example of specialization is shown in the following figures. The first shows an entity type named PART, together with several of its attributes. The identifier is Part_No, and other attributes include Description, Unit_Price, Location, Qty_on_Hand, Routing_Number and Supplier.

Fig. Entity type PART

In discussions with users, we discover that there are two possible sources for parts: some are manufactured internally, while others are purchased from outside suppliers. Further, we discover that some parts are obtained from both sources; in this case the choice depends on factors such as manufacturing capacity, unit price of the parts and so on. Some of the attributes in the figure apply to all parts, regardless of source; others depend on the source. Thus Routing_Number applies only to manufactured parts, while Supplier_ID and Unit_Price apply only to purchased parts. These factors suggest that PART should be specialized by defining the subtypes MANUFACTURED PART and PURCHASED PART. In the following figure, Routing_Number is associated with MANUFACTURED PART. The data modeler initially planned to associate Supplier_ID and Unit_Price with PURCHASED PART; however, in further discussion with users it was suggested instead to create a new relationship between PURCHASED PART and SUPPLIER. This relationship allows users to more easily associate purchased parts with their suppliers. Notice that the attribute Unit_Price is now associated with the relationship Supplies, so that the unit price for a part may vary from one supplier to another.
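Supertype/subtype inheritance of attributes maps naturally onto class inheritance in programming languages. The sketch below mirrors the VEHICLE generalization example using Python dataclasses; it is an illustration, not part of the original text, and the sample values are invented:

```python
from dataclasses import dataclass

# Shared attributes live on the supertype; each subtype adds its own.
@dataclass
class Vehicle:                 # supertype
    vehicle_id: str
    vehicle_name: str
    price: float
    engine_number: str

@dataclass
class Car(Vehicle):            # subtype with one specific attribute
    no_of_passengers: int

@dataclass
class Truck(Vehicle):          # subtype with two specific attributes
    capacity: float
    cab_type: str

c = Car("V1", "Sedan X", 18000.0, "EN-100", no_of_passengers=5)
print(c.vehicle_name, c.no_of_passengers)   # inherited + specific attribute
```

Just as in the EER notation, a Car instance carries all the supertype attributes (Vehicle_ID, Vehicle_Name, Price, Engine_Number) plus its own No_of_Passengers.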


Fig. Specialization to MANUFACTURED PART and PURCHASED PART

Specialization and generalization are both valuable techniques for developing supertype/subtype relationships. Which technique you use at a particular time depends on several factors, such as the nature of the problem domain.
Supertype/Subtype Hierarchies : A supertype/subtype hierarchy is a hierarchical arrangement of supertypes and subtypes in which each subtype has only one supertype.

8. What is normalization? Explain the different normal forms.
Ans: Normalization is a formal process for deciding which attributes should be grouped together in a relation. Normalization is primarily a tool to validate and improve a logical design, so that it satisfies certain constraints that avoid unnecessary duplication of data. Normalization can be accomplished and understood in stages, each of which corresponds to a normal form. A normal form is a state of a relation that results from applying simple rules regarding functional dependencies to that relation. There are several normal forms:

1. First Normal Form (1NF) : A relation is in first normal form (1NF) if it contains no multivalued attributes. Recall that the first property of a relation is that the value at the intersection of each row and column must be atomic; thus a table that contains multivalued attributes or repeating groups is not a relation.
2. Second Normal Form (2NF) : A relation is in second normal form if it is in first normal form and every nonkey attribute is fully functionally dependent on the primary key. Thus no nonkey attribute is functionally dependent on part of the primary key. A relation that is in first normal form will be in second normal form if any one of the following conditions applies:
a. The primary key consists of only one attribute.
b. No nonkey attributes exist in the relation.
c. Every nonkey attribute is functionally dependent on the full set of primary key attributes.
3. Third Normal Form (3NF) : A relation is in third normal form (3NF) if it is in second normal form and no transitive dependencies exist. A transitive dependency in a relation is a functional dependency between two nonkey attributes.
4. Boyce/Codd Normal Form (BCNF) : A relation is in Boyce/Codd normal form if and only if every determinant in the relation is a candidate key.
5. Fourth Normal Form (4NF) : A relation is in fourth normal form (4NF) if it is in BCNF and contains no multivalued dependencies.
6. Fifth Normal Form (5NF) : Fifth normal form deals with a property called lossless joins.
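Since 2NF and 3NF are both defined in terms of functional dependencies, it helps to see that a dependency can be tested mechanically against sample data. The helper below is an illustrative sketch; the function name and the sample rows are assumptions for the example:

```python
def fd_holds(rows, lhs, rhs):
    """Return True if the functional dependency lhs -> rhs holds in rows:
    any two rows agreeing on all lhs columns must agree on all rhs columns."""
    seen = {}
    for row in rows:
        key = tuple(row[c] for c in lhs)
        val = tuple(row[c] for c in rhs)
        if key in seen and seen[key] != val:
            return False
        seen[key] = val
    return True

sales = [
    {"cust_id": 1001, "salesperson": "Smith",  "region": "South"},
    {"cust_id": 1003, "salesperson": "Smith",  "region": "South"},
    {"cust_id": 1002, "salesperson": "Miller", "region": "West"},
]
print(fd_holds(sales, ["salesperson"], ["region"]))   # holds in this sample
print(fd_holds(sales, ["region"], ["cust_id"]))       # violated by this sample
```

Note that sample data can only refute a dependency, never prove it; whether a dependency truly holds is a statement about the business rules, not about one table instance.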

9. Write about First Normal Form (1NF)?
Ans: A relation is in first normal form (1NF) if it contains no multivalued attributes; that is, the value at the intersection of each row and column must be atomic. Thus a table that contains multivalued attributes or repeating groups is not a relation. The only attribute values permitted by 1NF are single, atomic (scalar) values. 1NF disallows having a set of values; in other words, 1NF disallows relations within relations, or relations as attributes of tuples. 1NF is independent of keys and functional dependencies. A domain is atomic if and only if all elements of the domain are considered to be indivisible units. We say that a relation schema R is in 1NF if the domains of all attributes of R are atomic. Composite attributes, such as Address (DNo, Street, City), are non-atomic.

Relation : EMP

EMPNO | ENAME       | DNAME      | SALARY | COURSE          | DATE COMPLETED
1001  | PULLAMMA .K | MARKETING  | 20,000 | DBMS, JAVA, C++ | 10/02/2004, 08/05/2004, 08/08/2004
1005  | PULLAIAH .M | ACCOUNTING | 30,000 | C, TALLY 6.3    | 15/02/2004, 25/05/2004
1010  | YALLAIAH .J | PERSONAL   | 25,000 | MS-OFFICE       | 22/03/2004
1002  | YALLAMMA .Y | SALES      | 15,000 | C, C++          | 15/02/2004, 15/07/2004

Fig. Table with multivalued (repeating-group) attributes

When a table has no repeating groups, it is said to be in first normal form (1NF). That is, each cell in the table can hold only one value, and this value should be atomic in the sense that it cannot be decomposed into smaller pieces.

Relation : EMP

EMPNO | ENAME       | DNAME      | SALARY | COURSE    | DATE COMPLETED
1001  | PULLAMMA .K | MARKETING  | 20,000 | DBMS      | 10/02/2004
1001  | PULLAMMA .K | MARKETING  | 20,000 | JAVA      | 08/05/2004
1001  | PULLAMMA .K | MARKETING  | 20,000 | C++       | 08/08/2004
1005  | PULLAIAH .M | ACCOUNTING | 30,000 | C         | 15/02/2004
1005  | PULLAIAH .M | ACCOUNTING | 30,000 | TALLY 6.3 | 25/05/2004
1010  | YALLAIAH .J | PERSONAL   | 25,000 | MS-OFFICE | 22/03/2004
1002  | YALLAMMA .Y | SALES      | 15,000 | C         | 15/02/2004
1002  | YALLAMMA .Y | SALES      | 15,000 | C++       | 15/07/2004

Fig. Table without repeating groups is in 1NF because it contains only atomic values

10. Discuss about Second Normal Form (2NF)?
Ans: A relation is in second normal form (2NF) if it is in first normal form and every nonkey attribute is fully functionally dependent on the primary key. Thus, no nonkey attribute is functionally dependent on part of the primary key. A relation that is in first normal form will be in second normal form if any one of the following conditions applies:
1) The primary key consists of only one attribute (such as the attribute EMPNO in EMP).
2) No nonkey attributes exist in the relation; thus all of the attributes in the relation are components of the primary key.
3) Every nonkey attribute is functionally dependent on the full set of primary key attributes.
In other words, a table is in second normal form (2NF) if every nonkey attribute depends on the entire key, not just part of it; this issue arises only for composite keys.

Relation : EMP

EMPNO | ENAME       | DNAME      | SALARY | COURSE    | DATE COMPLETED
1001  | PULLAMMA .K | MARKETING  | 20,000 | DBMS      | 10/02/2004
1001  | PULLAMMA .K | MARKETING  | 20,000 | JAVA      | 08/05/2004
1001  | PULLAMMA .K | MARKETING  | 20,000 | C++       | 08/08/2004
1005  | PULLAIAH .M | ACCOUNTING | 30,000 | C         | 15/02/2004
1005  | PULLAIAH .M | ACCOUNTING | 30,000 | TALLY 6.3 | 25/05/2004
1010  | YALLAIAH .J | PERSONAL   | 25,000 | MS-OFFICE | 22/03/2004
1002  | YALLAMMA .Y | SALES      | 15,000 | C         | 15/02/2004
1002  | YALLAMMA .Y | SALES      | 15,000 | C++       | 15/07/2004


In the above fig. EMP is not in 2NF. The primary key for EMP is the composite primary key (EMPNO, COURSE). Therefore the Nonkey attributes ENAME, DNAME and SALARY are functionally dependent on part of the primary key, namely, EMPNO but not on COURSE. These dependencies are shown graphically in the following figure.
EMPNO COURSE ENAME DNAME SALARY DATE COMPLETED

CUST_ID NAME SALESPERSON REGION 1001 Rama Smith South 1002 Sudha Miller West 1003 Suma Smith South 1004 Ramana Scott East 1005 Radha Miller West 1006 KSR Adams North Fig. Sales relation with sample data
CUST_ID NAME SALESPERSON REGION

A Partial functional dependency is a functional dependency in which one or more nonkey attributes (such as ENAME) are functionally dependant on part of the primary key. The partial functional dependency in EMP creates redundancy in that relation, which results in anomalies when the table is updated. To convert a relation to second normal form, we decompose the relation into new relations that satisfy one or more of the conditions described above. The relation EMP is decomposed into the following two relations: EMP(EMPNO, ENAME, DNAME, SALARY) and EMP_COURSE(EMPNO,COURSE, DATE_COMPLETED) Out of 3 conditions for 2NF the EMP table satisfies condition1, hence it is in second normal form. Whereas the EMP_COURSE relation satisfies condition3 and hence it is in Second Normal Form. 11. Write about Third Normal Form (3NF)? Ans: A relation is in third normal form (3NF) if it is in second normal form and no transitive dependencies exist. A transitive dependency in a relation is a functional dependency between two or more nonkey attributes. For example, consider the relation. SALES(CUST_ID, NAME, SALESPERSON, REGION)

Fig. Transitive dependency in SALES relation

The functional dependencies in the SALES relation are shown graphically above. CUST_ID is the primary key, so all of the remaining attributes are functionally dependent on it. However, there is a transitive dependency: REGION is functionally dependent on SALESPERSON, and SALESPERSON is functionally dependent on CUST_ID. As a result, there are update anomalies in SALES.
1. Insertion anomaly : A new salesperson (Bani) assigned to the North region cannot be entered until a customer has been assigned to that salesperson, since a value for CUST_ID must be provided to insert a row in the table.
2. Deletion anomaly : If customer number 1004 is deleted from the table, we lose the information that salesperson Scott is assigned to the East region.
3. Modification anomaly : If salesperson Smith is reassigned to the East region, several rows (here, two) must be changed to reflect that fact.
SALES1
CUST_ID  NAME    SALESPERSON
1001     Rama    Smith
1002     Sudha   Miller
1003     Suma    Smith
1004     Ramana  Scott
1005     Radha   Miller
1006     KSR     Adams

SPERSON
SALESPERSON  REGION
Smith        South
Miller       West
Scott        East
Adams        North
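As an illustrative sketch (SQLite via Python's sqlite3; not from the original text), the decomposed relations can be built and the modification anomaly checked directly.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# SALES1 keeps CUST_ID -> NAME, SALESPERSON; SPERSON keeps SALESPERSON -> REGION.
cur.execute("""CREATE TABLE SALES1 (CUST_ID INTEGER PRIMARY KEY,
               NAME VARCHAR(20), SALESPERSON VARCHAR(20))""")
cur.execute("""CREATE TABLE SPERSON (SALESPERSON VARCHAR(20) PRIMARY KEY,
               REGION VARCHAR(10))""")

cur.executemany("INSERT INTO SALES1 VALUES (?, ?, ?)",
    [(1001, 'Rama', 'Smith'), (1002, 'Sudha', 'Miller'), (1003, 'Suma', 'Smith'),
     (1004, 'Ramana', 'Scott'), (1005, 'Radha', 'Miller'), (1006, 'KSR', 'Adams')])
cur.executemany("INSERT INTO SPERSON VALUES (?, ?)",
    [('Smith', 'South'), ('Miller', 'West'), ('Scott', 'East'), ('Adams', 'North')])

# Reassigning Smith to the East region now touches exactly one row,
# so the modification anomaly of the original SALES relation is gone.
cur.execute("UPDATE SPERSON SET REGION = 'East' WHERE SALESPERSON = 'Smith'")
regions = cur.execute("""SELECT S.CUST_ID, P.REGION FROM SALES1 S
                         JOIN SPERSON P ON S.SALESPERSON = P.SALESPERSON
                         WHERE S.SALESPERSON = 'Smith' ORDER BY S.CUST_ID""").fetchall()
print(regions)
```

Both of Smith's customers now show the East region even though only the single SPERSON row was updated.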


SALES1: CUST_ID → NAME, SALESPERSON

SPERSON: SALESPERSON → REGION
Fig. Removing a transitive dependency

These anomalies arise as a result of the transitive dependency. The transitive dependency can be removed by decomposing SALES into two relations, as shown above. Note that SALESPERSON, which is the determinant in the transitive dependency in SALES, becomes the primary key in SPERSON, and a foreign key in SALES1.

Often the normalized relations of the above fig. will also use less storage space, because the dependent data (REGION and any other salesperson data) do not have to repeat for each customer. Thus normalization, besides solving anomalies, also often reduces redundant data storage. The new relations are now in third normal form, since no transitive dependencies exist. You should verify that the anomalies that exist in SALES are not present in SALES1 and SPERSON.

12. Write about Boyce-Codd Normal Form (BCNF)?
Ans: When a relation has more than one candidate key, anomalies may result even though that relation is in 3NF. For example, consider the STUDENT_LECTURER relation, in 3NF but not BCNF, shown in the following fig.

SID  SUBJECT  LECTURER  SUB_GPA
101  Physics  Adams     4.0
101  Music    Allen     3.3
104  Maths    Scott     3.2
107  Music    Miller    3.7
108  Physics  Adams     3.5
Fig. Relation with Sample Data

This relation has the following attributes : SID (Student ID), SUBJECT, LECTURER, and SUB_GPA. Sample data for this relation are shown in the above fig., and the functional dependencies are shown as follows:

(SID, SUBJECT) → LECTURER, SUB_GPA;  LECTURER → SUBJECT

Fig. Functional dependencies in STUDENT_LECTURER

As shown in the above fig., the primary key for this relation is the composite key consisting of SID and SUBJECT. Thus the two attributes LECTURER and SUB_GPA are functionally dependent on this key. This reflects the constraint that although a given student may have more than one SUBJECT, for each SUBJECT a student has exactly one LECTURER and one GPA.

There is a second functional dependency in this relation: SUBJECT is functionally dependent on LECTURER. That is, each lecturer teaches exactly one SUBJECT. Notice that this is not a transitive dependency. In contrast, in this example a key attribute (SUBJECT) is functionally dependent on a nonkey attribute (LECTURER).

Anomalies in STUDENT_LECTURER : The STUDENT_LECTURER relation is clearly in 3NF, since there are no partial functional dependencies and no transitive dependencies. Nevertheless, because of the functional dependency between SUBJECT and LECTURER, there are anomalies in this relation. Consider the following examples:
1. Insertion anomaly : Suppose we want to insert a row with the information that Babbage teaches Computer Science. This, of course, cannot be done until at least one student taking Computer Science is assigned Babbage as a LECTURER.
2. Update anomaly : Suppose that in Physics the lecturer Adams is replaced by Einstein. This change must be made in two or more rows in the table.
3. Deletion anomaly : If student number 107 withdraws from school, we lose the information that Miller teaches Music.


Definition of BCNF : The anomalies in STUDENT_LECTURER result from the fact that there is a determinant (LECTURER) that is not a candidate key in the relation. R.F. Boyce and E.F. Codd identified this deficiency and proposed a stronger definition of 3NF that remedies the problem. We say a relation is in Boyce-Codd Normal Form (BCNF) if and only if every determinant in the relation is a candidate key. STUDENT_LECTURER is not in BCNF because the attribute LECTURER is a determinant but not a candidate key; only SUBJECT is functionally dependent on LECTURER.

Converting a Relation to BCNF : A relation that is in 3NF can be converted to relations in BCNF using a simple two-step process. This process is shown in the following figures.
SID  LECTURER  SUBJECT  SUB_GPA
Fig. (a) Revised STUDENT_LECTURER relation (1NF)

SID LECTURER SUB_GPA

LECTURER SUBJECT

Fig. (b) Two Relations in BCNF
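A sketch of the Fig. (b) relations, again using SQLite through Python's sqlite3 (the data and names follow the figures; the in-memory harness is an assumption for illustration), showing that the earlier anomalies are gone.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# STUDENT: (SID, LECTURER) -> SUB_GPA;  LECTURER: LECTURER -> SUBJECT.
cur.execute("""CREATE TABLE STUDENT (SID INTEGER, LECTURER VARCHAR(20),
               SUB_GPA NUMERIC(2,1), PRIMARY KEY (SID, LECTURER))""")
cur.execute("""CREATE TABLE LECTURER (LECTURER VARCHAR(20) PRIMARY KEY,
               SUBJECT VARCHAR(20))""")

cur.executemany("INSERT INTO STUDENT VALUES (?, ?, ?)",
    [(101, 'Adams', 4.0), (101, 'Allen', 3.3), (104, 'Scott', 3.2),
     (107, 'Miller', 3.7), (108, 'Adams', 3.5)])
cur.executemany("INSERT INTO LECTURER VALUES (?, ?)",
    [('Adams', 'Physics'), ('Allen', 'Music'), ('Scott', 'Maths'), ('Miller', 'Music')])

# Insertion anomaly gone: Babbage can be recorded with no enrolled student.
cur.execute("INSERT INTO LECTURER VALUES ('Babbage', 'Computer Science')")

# Deletion anomaly gone: removing student 107 keeps the Miller/Music fact.
cur.execute("DELETE FROM STUDENT WHERE SID = 107")
miller = cur.execute("SELECT SUBJECT FROM LECTURER WHERE LECTURER = 'Miller'").fetchone()
print(miller)
```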

In the first step, the relation is modified so that the determinant that is not a candidate key becomes a component of the primary key of the revised relation. The attribute that is functionally dependent on that determinant becomes a nonkey attribute. This is a legitimate restructuring of the original relation because of the functional dependency. The result of applying this rule to STUDENT_LECTURER is shown in Fig. (a): the determinant LECTURER becomes part of the composite primary key, and the attribute SUBJECT, which is functionally dependent on LECTURER, becomes a nonkey attribute. If you examine Fig. (a), you will discover that in the new relation the attribute SUBJECT is functionally dependent on LECTURER, which is just one component of the primary key. Thus the new relation is in first normal form.

The second step in the conversion process is to decompose the relation to eliminate the partial functional dependency. This results in the two relations shown in Fig. (b). These relations are in 3NF; in fact, they are also in BCNF, since there is only one candidate key in each relation. Thus we see that if a relation has only one candidate key, 3NF and BCNF are equivalent. The two relations, STUDENT and LECTURER, are shown with sample data in Fig. (c). You should verify that these relations are free of the anomalies. Another common situation in which BCNF is violated is when there are two or more overlapping candidate keys in the relation.

STUDENT                     LECTURER
SID  LECTURER  SUB_GPA      LECTURER  SUBJECT
101  Adams     4.0          Adams     Physics
101  Allen     3.3          Allen     Music
104  Scott     3.2          Scott     Maths
107  Miller    3.7          Miller    Music
108  Adams     3.5
Fig. (c) Relations with Sample data

13. Distinguish Between BCNF and 3NF?
Ans:
1. BCNF: Every BCNF relation is also in 3NF.
   3NF: Every 3NF relation is not necessarily in BCNF.
2. BCNF: We cannot ensure that every relation schema can be decomposed into a collection of BCNF relations using only decompositions that have certain desirable properties.
   3NF: We can ensure that every relation schema can be decomposed into a collection of 3NF relations using only decompositions that have certain desirable properties.
3. BCNF: BCNF ensures that no redundancy can be detected using functional dependency (FD) information.
   3NF: Unlike BCNF, some redundancy is possible with 3NF.
4. BCNF: BCNF is a more desirable normal form than 3NF.
   3NF: 3NF is a less desirable normal form than BCNF.
5. BCNF: BCNF is a more restrictive constraint than 3NF.
   3NF: 3NF is a less restrictive constraint than BCNF.
6. BCNF: BCNF requires that all non-trivial dependencies be of the form X → Y, where X is a candidate key.
   3NF: 3NF allows non-trivial functional dependencies whose left side is not a candidate key.
7. BCNF: A relation is said to be in BCNF if and only if each determinant is a candidate key.
   3NF: A relation is said to be in 3NF if and only if no nonkey attribute is functionally dependent on other nonkey attributes.
8. BCNF: The main disadvantage of BCNF is that it is not always possible to obtain a BCNF database design without sacrificing dependency preservation.
   3NF: The main advantage of 3NF over BCNF is that it is always possible to obtain a 3NF design without sacrificing lossless-join or dependency preservation.
9. BCNF: It is free from redundancy with respect to functional dependencies.
   3NF: It is not free from redundancy with respect to functional dependencies.
10. BCNF: It does not allow any transitive dependencies.
    3NF: It allows transitive dependencies.

14. Discuss about Fourth Normal Form (4NF)?
Ans:
COURSE      INSTRUCTOR  TEXTBOOK
Management  White       Drucker
            Green       Peters
            Black
Finance     Gray        Jones
                        Chang
Fig. 1a - Table of courses, instructors and textbooks

OFFERING
COURSE      INSTRUCTOR  TEXTBOOK
Management  White       Drucker
Management  White       Peters
Management  Green       Drucker
Management  Green       Peters
Management  Black       Drucker
Management  Black       Peters
Finance     Gray        Jones
Finance     Gray        Chang
Fig. 1b - Relation in BCNF

Fig 1. Data with Multivalued Dependencies

Consider the table shown in Fig. 1a. This user view shows, for each course, the instructors who teach that course and the textbooks that are used. In this table the following assumptions hold:
1. Each course has a well-defined set of instructors. For example, Management has three instructors.
2. Each course has a well-defined set of textbooks that are used. For example, Finance has two textbooks.
3. The textbooks that are used for a given course are independent of the instructor for that course. For example, the same two textbooks are used for Management regardless of which of the three instructors is teaching it.

In Fig. 1b this table has been converted to a relation by filling in all of the empty cells. This relation, named OFFERING, is in first normal form. Thus, for each course, all possible combinations of instructor and text appear in OFFERING. Notice that the primary key of this relation consists of all three attributes. Since there are no determinants other than the primary key, the relation is actually in BCNF. Yet it does contain much redundant data that can easily lead to update anomalies. For example, suppose that we want to add a third textbook (author: Middleton) to the Management course. This change would require the addition of three new rows to the relation in Fig. 1b, one for each instructor.

Multivalued Dependencies : The type of dependency shown in this example is called a multivalued dependency, which exists when there are at least three attributes (for example A, B and C) in a relation, and for each value of A there is a well-defined set of values of B and a well-defined set of values of C. However, the set of values of B is independent of the set of values of C, and vice versa. To remove the multivalued dependency from a relation, we divide the relation into two new relations. Each of these tables contains two attributes that have a multivalued relationship in the original relation. Fig. 2 shows the result of this decomposition for the OFFERING relation of Fig. 1b.
Notice that the relation called TEACHER


contains the Course and Instructor attributes, since for each course there is a well-defined set of instructors. Also, for the same reason, TEXT contains the attributes Course and Textbook. However, there is no relation containing the attributes Instructor and Textbook, since these attributes are independent.
TEACHER
COURSE      INSTRUCTOR
Management  White
Management  Green
Management  Black
Finance     Gray

TEXT
COURSE      TEXTBOOK
Management  Drucker
Management  Peters
Finance     Jones
Finance     Chang

Fig. 2 - Relations in 4NF

A relation is in fourth normal form (4NF) if it is in BCNF and contains no multivalued dependencies. You can easily verify that the two relations in Fig. 2 above are in 4NF and are free of the anomalies described earlier. Also, you can verify that you can reconstruct the original relation (OFFERING) by joining these two relations. In addition, notice that there are less data in Fig. 2 than in Fig. 1b. For simplicity, assume that Course, Instructor, and Textbook are all of equal length. Because there are 24 cells of data in Fig. 1b and 16 cells of data in Fig. 2, there is a space saving of 25 percent for the 4NF tables.

15. Write a short note on Fifth Normal Form (5NF)?
Ans: The fourth normal form is by no means the ultimate normal form. As we saw earlier, multivalued dependencies help us understand and tackle some forms of repetition of information that cannot be understood in terms of functional dependencies. There are types of constraints called join dependencies that generalize multivalued dependencies and lead to another normal form called project-join normal form (PJNF). PJNF is also called fifth normal form; the concept is an extension of the fourth normal form definition when join dependencies are also considered. There is a class of even more general constraints, which leads to a normal form called domain-key normal form. A practical problem with the use of these generalized constraints is that they are not only hard to reason with, but there is also no set of sound and complete inference rules for reasoning about them. Hence PJNF and domain-key normal form are used very rarely.

16. What is Denormalization? Explain
Ans: In some exceptional cases, database designers use redundancy to improve performance for specific applications. They select a schema that has redundant information, that is, one that is not normalized. For example, suppose that the name of an account holder has to be displayed along with the account number and cash balance every time the account is accessed. In our normalized schema, this requires a join of account with depositor. One alternative is to create a relation containing all the attributes of account and depositor. This makes displaying the account information faster. However, the balance information for an account is repeated for every person who owns the account, and all copies must be updated by the application whenever the account balance is updated. This process of taking a normalized schema and making it non-normalized is called denormalization. Database designers use it to tune the performance of systems that require time-critical operations.

A better alternative is to use views. A materialized view is a view whose result is stored in the database and reflected in the data when the relations used in the view are updated. Like denormalization, using materialized views has space and time overheads, but it has the advantage that keeping the view up to date is the job of the database system, not the application programmer.
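The account/depositor example can be sketched with an ordinary view. SQLite (used here via Python's sqlite3) does not support materialized views, so this sketch shows only the consistency benefit the text describes; the table and column names are assumed for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE account (account_no INTEGER PRIMARY KEY, balance NUMERIC(10,2))")
cur.execute("CREATE TABLE depositor (customer_name VARCHAR(30), account_no INTEGER)")
cur.execute("INSERT INTO account VALUES (101, 5000)")
cur.execute("INSERT INTO depositor VALUES ('Rama', 101)")

# Instead of a denormalized table, define a view over the join; the DBMS
# keeps its result consistent automatically when account is updated.
cur.execute("""CREATE VIEW account_info AS
               SELECT d.customer_name, a.account_no, a.balance
               FROM depositor d JOIN account a ON d.account_no = a.account_no""")

cur.execute("UPDATE account SET balance = 6000 WHERE account_no = 101")
row = cur.execute("SELECT * FROM account_info").fetchone()
print(row)
```

The view reflects the new balance with no extra application code, which is exactly the maintenance burden a denormalized copy would impose on the programmer.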


UNIT III
1. What is SQL? What are the characteristics of SQL?
Ans: Structured Query Language (SQL) is the standard command set used to communicate with relational database management systems. All tasks related to relational data management (creating tables, querying the database for information, modifying the data in the database, deleting it, granting access to users, and so on) can be done using SQL.

Characteristics of SQL : SQL usage is by its very nature extremely flexible. It uses a free-form syntax that gives the user the ability to structure SQL statements in the way best suited to him. Each SQL request is parsed by the RDBMS before execution, to check for proper syntax and to optimize the request. Unlike certain programming languages, there is no need to start SQL statements in a particular column or finish them on a single line. The same SQL request can be written in a variety of ways.

The origins of SQL are based on the felt need for such a flexible query language. The fact that SQL was developed after the need for it was specified is evident from the relatively few commands it has. Throughout its life cycle, SQL received natural extensions to its functional capabilities, and what was originally intended as a query language has now become a complete database language.

2. What are the advantages of SQL?
Ans: SQL stands for Structured Query Language, which is used to communicate with the RDBMS. SQL offers many advantages, some of which are listed below :

SQL is a high-level language that provides a greater degree of abstraction than procedural languages. It is so fashioned that the programmer can specify what data is needed but need not specify how to retrieve it.

SQL enables end users and systems personnel to deal with a number of database management systems where it is available. Increased acceptance and availability of SQL are also in its favor. Applications written in SQL can be easily ported across systems. Such porting could be required when the underlying database management system needs to be upgraded or changed.

SQL as a language is independent of the way it is implemented internally. A query returns the same result regardless of whether optimizing has been done with indexes or not. This is because SQL specifies what is required and not how it should be done.

The language, while being simple and easy to learn, can handle complex situations. The results to be expected are well defined. The language has a sound theoretical base and there is no ambiguity about the way a query will interpret the data and produce the result.

SQL is not merely a query language. The same language can be used to define data structures, control access to the data, and delete, insert and modify occurrences of the data.

All SQL operations are performed at a set level. One SELECT statement can retrieve multiple rows, one modify statement can modify multiple rows, etc.

3. Discuss the data types and literals supported by SQL?
Ans: Data types are a classification of a particular type of information. It is easy for humans to distinguish between different types of data, but a computer uses special symbols to keep track of the different types of data it processes.


SQL Data Types :- Most programming languages require the programmer to declare the data type of every data object, and most database systems require the user to specify the type of each data field. The available data types vary from one programming language to another, and from one database application to another. SQL supports the following scalar data types :
i) CHARACTER(n)
ii) CHARACTER VARYING(n)
iii) BIT(n) and BIT VARYING(n)
iv) NUMERIC(p,q) and DECIMAL(p,q)
v) INTEGER
vi) SMALLINT
vii) FLOAT(p)

i) CHARACTER(n) :- This data type represents a fixed-length string of exactly n characters, where n is an integer greater than zero. CHARACTER is an abbreviation for CHARACTER(1), and CHAR is an abbreviation for CHARACTER.
ii) CHARACTER VARYING(n) :- This data type represents a varying-length string whose maximum length is n characters. Here n is a positive integer. VARCHAR is an abbreviation for CHARACTER VARYING or CHAR VARYING.
iii) BIT(n) and BIT VARYING(n) :- BIT(n) represents a fixed-length string of exactly n bits, and BIT VARYING(n) represents a varying-length string whose maximum length can be n bits. In both cases n must be a positive integer.
iv) NUMERIC(p,q) and DECIMAL(p,q) :- These data types represent a decimal number of p digits and sign, with an assumed decimal point q digits from the right. Both p and q are integers; p should be greater than zero, and q can be zero but should be less than or equal to p.
v) INTEGER :- INTEGER represents a signed integer, decimal or binary. INT is an abbreviation for INTEGER.

Whether the INTEGER is decimal or binary is implementation-defined.
vi) SMALLINT :- This data type represents a signed integer, decimal or binary. Whether it is decimal or binary is implementation-defined, but it must be the same as that of INTEGER. Also, the actual precision of INT and SMALLINT is implementation-defined, but the precision of SMALLINT should not exceed that of INT.
vii) FLOAT(p) :- FLOAT(p) represents a floating-point number. FLOAT is an abbreviation for FLOAT(p), where p is implementation-defined. REAL is an alternative term for FLOAT(s), where s is implementation-defined.

Literals : There are four kinds of literal values supported in Structured Query Language (SQL):
i) Character strings
ii) Bit strings
iii) Exact numeric
iv) Approximate numeric

i) Character String :- Character strings are written as a sequence of characters enclosed in single quotes.
ii) Bit String :- A bit string is written either as a sequence of 0s and 1s enclosed in single quotes and preceded by the letter B, or as a sequence of hexadecimal digits enclosed in single quotes and preceded by the letter X.
iii) Exact Numeric :- These literals are written as a signed or unsigned decimal number, possibly with a decimal point.
iv) Approximate Numeric :- Approximate numeric literals are written as exact numeric literals followed by the letter E, followed by a signed or unsigned integer.

4. Write about SQL Environment in detail.
Ans: The figure below is a simplified SQL environment, consistent with the SQL:2003 standard. An SQL environment includes an instance of an SQL database management system along with the databases accessible by that DBMS and the users and programs that may use that DBMS to access the


databases. Each database is contained in a catalog, which describes any object that is a part of the database, regardless of which user created that object. Figure shows two catalogs: DEV_C and PROD_C. Most companies keep at least two versions of any database they are using. The production version, PROD_C here, is the live version, which captures real business data and thus must be very tightly controlled and monitored.

The development version, DEV_C here, is used when the database is being built and continues to serve as a development tool where enhancements and maintenance efforts can be thoroughly tested before being applied to the production database. Typically this database is not as tightly controlled or monitored, because it does not contain live business data.

Each database will have named schema(s) associated with a catalog. The schema is a collection of related objects, including but not limited to base tables and views, domains, constraints, character sets, triggers, roles, and so forth. If more than one user has created objects in the database, combining information about all users' schemas will yield information for the entire database.

Each catalog must also contain an information schema, which contains descriptions of all schemas in the catalog: tables, views, attributes, privileges, constraints, and domains, along with other information relevant to the database. The information contained in the catalog is maintained by the DBMS as a result of the SQL commands issued by the users and does not require conscious action by the user to build it. It is part of the power of the SQL language that the issuance of syntactically simple SQL commands may result in complex data management activities being carried out by the DBMS software. Users can browse the catalog contents by using SQL SELECT statements.

5. Discuss about different types of SQL Commands?
Ans: SQL is a very powerful language that benefits all types of users of the RDBMS. SQL provides a comprehensive set of commands for a variety of tasks. The SQL commands are divided into the following categories :
i) Data Definition Language (DDL)
ii) Data Manipulation Language (DML)
iii) Data Query Language (DQL)
iv) Data Control Language (DCL)
v) Data Administration Statements (DAS)
vi) Transaction Control Statements (TCS)

i) Data Definition Language (DDL) :- Data Definition Language is used to create, alter and delete database objects. The commands used are CREATE, ALTER and DROP. The principal logical data definition statements are CREATE TABLE, CREATE VIEW, CREATE INDEX, ALTER TABLE, DROP TABLE, DROP VIEW and DROP INDEX.
ii) Data Manipulation Language (DML) :- Data Manipulation Language commands let users insert, modify and delete the data in the database. SQL provides three data manipulation statements: INSERT, UPDATE and DELETE.
iii) Data Query Language (DQL) :- This is one of the most commonly used SQL statements. This SQL statement enables the users to query one or more tables to get the


information they want. SQL has only one data query statement: SELECT.
iv) Data Control Language (DCL) :- The Data Control Language consists of commands that control user access to database objects. Thus DCL is mainly related to security issues: determining who has access to the database objects and what operations they can perform on them. The task of the DCL is to prevent unauthorized access to data. The database administrator (DBA) has the power to grant and revoke privileges for a specific user, thus giving or revoking access to the data. The DCL commands are GRANT and REVOKE.
v) Data Administration Statements (DAS) :- Data administration commands allow the user to perform audits and analysis on operations within the database. They are also used to analyze the performance of the system. Two data administration commands are START AUDIT and STOP AUDIT. One thing to be remembered here is that data administration is totally different from database administration: database administration is the overall administration of the database, and data administration is only a subset of that.
vi) Transaction Control Statements (TCS) :- Transaction control statements manage all the changes made by the DML statements. For example, transaction statements commit data. Some of the transaction control statements are COMMIT, ROLLBACK, SAVEPOINT, and SET TRANSACTION.

6. What is operator ? Explain different types of operators supported by SQL?
Ans: An operator is a symbol which represents some particular action. Operators operate on operands, and an operand may be a variable or a constant. There are two types of operators: unary and binary. The unary operator operates on only one operand, whereas the binary operator operates on two operands.
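A minimal sketch of the command categories in action (SQLite via Python's sqlite3; SQLite has no GRANT/REVOKE or audit statements, so the DCL and DAS categories are omitted, and the BOOK table is invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define a base table.
cur.execute("CREATE TABLE book (book_id INTEGER PRIMARY KEY, title VARCHAR(50), price NUMERIC(8,2))")

# DML: insert, update and delete rows.
cur.execute("INSERT INTO book VALUES (1, 'DBMS Basics', 300)")
cur.execute("INSERT INTO book VALUES (2, 'SQL in Depth', 450)")
cur.execute("UPDATE book SET price = 350 WHERE book_id = 1")
cur.execute("DELETE FROM book WHERE book_id = 2")

# TCS: make the changes permanent.
conn.commit()

# DQL: the single query statement, SELECT.
rows = cur.execute("SELECT book_id, title, price FROM book").fetchall()
print(rows)
```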

a) Arithmetic Operators :- Arithmetic operators are used in SQL expressions to add, subtract, multiply, divide and negate data values. The result of such an expression is a numeric value.

Operator    Description
+, -        Unary operators; denote unary positive or negative
*, /, +, -  Binary operators; multiplication, division, addition, subtraction respectively

b) Comparison Operators :- These operators are used to compare one expression with another. The result of a comparison is true, false or unknown. The comparison operators are
Operator         Description
=                Equality
!=               Not equal to
>                Greater than
>=               Greater than or equal to
<                Less than
<=               Less than or equal to
IN (list)        Equal to any member of list
NOT IN           Not equal to any member of list
IS NULL          Tests for nulls
IS NOT NULL      Tests for anything other than null
LIKE             Returns true when the first expression matches the pattern of the second expression; wildcards like % are allowed, and matching is case sensitive
ALL              Compares a value to every value in a list
ANY, SOME        Compares a value to each value in a list
EXISTS           True, if sub-query returns at least one row
BETWEEN x AND y  >= x and <= y
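A few of these operators combined in a WHERE clause, as an illustrative sketch (SQLite via Python's sqlite3; the emp table and its data are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emp (ename VARCHAR(20), salary INTEGER, dname VARCHAR(20))")
cur.executemany("INSERT INTO emp VALUES (?, ?, ?)",
    [('Rama', 20000, 'MARKETING'), ('Sudha', 35000, 'SALES'),
     ('Suma', 28000, 'SALES'), ('Ramana', 15000, 'HR')])

# BETWEEN, IN and LIKE used together in one predicate.
rows = cur.execute("""SELECT ename FROM emp
                      WHERE salary BETWEEN 20000 AND 30000
                        AND dname IN ('SALES', 'MARKETING')
                        AND ename LIKE 'S%'
                      ORDER BY ename""").fetchall()
print(rows)
```

Only Suma satisfies all three conditions: her salary lies in the range, her department is in the list, and her name matches the pattern.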

c) Logical Operators :- A logical operator is used to produce a single result from combining the two separate conditions. The following table shows the logical operators and their definitions.
Operator  Definition
AND       Returns true if both component conditions are true; otherwise returns false
OR        Returns true if either component condition is true; otherwise returns false
NOT       Returns true if the condition is false; otherwise returns false

d) Set Operators :- Set operators combine the results of two separate queries into a single result. Not all implementations support INTERSECT and MINUS, so check whether your implementation supports these features before using them. The set operators and their definitions are listed below :

Operator   Definition
UNION      Returns all distinct rows from both queries
UNION ALL  Returns all rows from both queries
INTERSECT  Returns all rows selected by both queries
MINUS      Returns all distinct rows that are in the first query but not in the second one

7. Discuss the operator Precedence in SQL?
Ans: Precedence defines the order that the DBMS uses when evaluating the different operators in the same expression. Every operator has a pre-defined precedence. The DBMS evaluates operators with the highest precedence first before evaluating the operators of lower precedence. Operators of equal precedence are evaluated from left to right. The order of precedence is given below.

Operator   Definition
:          Prefix for host variable
,          Variable separator
( )        Surrounds subqueries
' '        Surrounds a literal
" "        Surrounds a table or column alias or literal text
( )        Overrides the normal operator precedence
+, -       Unary operators
*, /       Multiplication and division
+, -       Addition and subtraction
||         Character concatenation
NOT        Reverses the result of an expression
AND        True, if both conditions are true
OR         True, if either condition is true
UNION      Returns all data from both queries
INTERSECT  Returns only rows that match both queries
MINUS      Returns only the rows that do not match both queries
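A small sketch of precedence at work (SQLite via Python's sqlite3; the one-row table t is invented so the WHERE clauses have something to filter): multiplication is evaluated before addition, and AND before OR, unless parentheses override them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")
cur.execute("INSERT INTO t VALUES (1)")

# Multiplication binds before addition: 2 + (3 * 4).
num = cur.execute("SELECT 2 + 3 * 4 FROM t").fetchone()[0]

# AND binds before OR, so this condition reads as 1=1 OR (1=0 AND 1=0): true.
val1 = cur.execute("SELECT x FROM t WHERE 1=1 OR 1=0 AND 1=0").fetchall()

# Parentheses override the normal precedence: (1=1 OR 1=0) AND 1=0 is false.
val2 = cur.execute("SELECT x FROM t WHERE (1=1 OR 1=0) AND 1=0").fetchall()
print(num, val1, val2)
```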

8. What is Table? Explain maintenance of tables in SQL?
Ans: Tables are the basic building blocks in any relational database management system. They contain the rows and columns of your data. You can create, modify and delete tables using the data definition language (DDL) commands.

Creating a Table :- We can create a table using the CREATE TABLE statement, which creates a new base table. The CREATE TABLE statement has two formats.

Format 1 :
CREATE TABLE base-table-name
(col1 datatype(width),
 col2 datatype(width),
 ...
 coln datatype(width));

The second format of the CREATE TABLE statement allows the user to create a base table from an existing table.

Format 2 :
CREATE TABLE base-table-name (col1, col2, ..., coln)
AS SELECT col1, col2, ..., coln FROM table-name;

Modifying a Table :- An existing base table can be modified by using the ALTER TABLE statement. The format of the ALTER TABLE statement is as follows :

ALTER TABLE base-table-name
ADD column datatype(width) [NULL | NOT NULL]
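Both CREATE TABLE formats, plus ALTER TABLE and DROP TABLE, can be sketched against SQLite through Python's sqlite3 (the BOOK table is invented for illustration, and NUMERIC stands in for the vendor-specific NUMBER type used later in this answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Format 1: define the base table column by column.
cur.execute("CREATE TABLE book (book_id INTEGER, title VARCHAR(50), price NUMERIC(8,2))")
cur.execute("INSERT INTO book VALUES (1, 'DBMS Basics', 300)")

# Format 2: create a table from an existing one.
cur.execute("CREATE TABLE cheap_book AS SELECT book_id, title FROM book WHERE price < 500")

# ALTER TABLE: add a new column to an existing base table.
cur.execute("ALTER TABLE book ADD discount NUMERIC(8,2)")
cols = [c[1] for c in cur.execute("PRAGMA table_info(book)")]
print(cols)

# DROP TABLE removes the table together with its contents.
cur.execute("DROP TABLE cheap_book")
```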


In the following example another column, DISCOUNT, with data type NUMBER, is added to the BOOK table.

ALTER TABLE book ADD discount NUMBER(8,2);

With the ALTER TABLE statement, new columns can be added and primary and foreign key specifications can be added or removed. The important thing to remember is that the ALTER TABLE statement supports neither changes to the width or data type of an existing column nor the deletion of an existing column.

Deleting a Table :- An existing base table can be deleted at any time by using the DROP TABLE statement. The syntax of this statement is

DROP TABLE base-table-name;

The specified base table is removed from the system. All indexes and views defined for the table are also automatically dropped. For example, the command DROP TABLE book will delete the table named BOOK along with its contents, indexes and any views defined for that table.

9. Write about Views? What are the advantages of Views?
Ans: A view is a named table that is represented not by its own physically separate stored data, but by its definition in terms of other named tables. In other words, views and base tables are analogous to the telescope and the stars. When users see a view, they see the same data that is in the database tables, but perhaps with a different perspective. And just as the telescope does not contain any stars, views don't contain any data. Instead, a view is a virtual table, deriving its data from base tables.

Creating a View :- A view is created or defined using the CREATE VIEW statement. The general syntax of a view definition is given below :

CREATE VIEW view-name [(col1, col2, ..., coln)] AS

Subquery [WITH CHECK OPTION];

The subquery cannot include either UNION or ORDER BY. The clause WITH CHECK OPTION indicates that UPDATE and INSERT operations against the view are to be checked to ensure that the UPDATEd or INSERTed row satisfies the view-defining condition.

Dropping a View :- We can delete existing views by using the DROP VIEW statement. The syntax of the DROP VIEW statement is:

DROP VIEW view-name;

Some of the major advantages of using views are listed below:
Data security: views allow you to set up different security levels for the same base table, so the same data can be seen by different users in different ways at the same time.
A view can be used to present additional information, like derived columns.
Views can be used to hide complex queries. Developers can hide a complex query behind a view; users can then issue simple queries against the view and the view will take care of all the complicated work. For example, a developer might hide a join query by creating a view, and the user who uses the view will not feel any difference.
To provide row and column level security.
To ensure efficient access paths.
To ensure proper data derivation.
To mask complexity from the user.
To provide domain support.
To rename columns.
To provide solutions that cannot be accomplished without views.
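The points above can be demonstrated concretely. The sketch below uses SQLite through Python's sqlite3 module as a stand-in RDBMS; the emp table, its rows, and the view name emp_public are invented for illustration.

```python
import sqlite3

# In-memory database with a hypothetical EMP table (the column names
# follow the examples in this unit; the data itself is made up).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT, sal REAL, deptno INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?, ?)", [
    (7369, "SMITH", 800, 20),
    (7499, "ALLEN", 1600, 30),
    (7788, "SCOTT", 3000, 20),
])

# A view that hides the salary column: users querying it see only the
# non-sensitive columns, illustrating column-level security.
conn.execute("CREATE VIEW emp_public AS SELECT empno, ename, deptno FROM emp")

rows = conn.execute("SELECT * FROM emp_public ORDER BY empno").fetchall()
cols = [d[0] for d in conn.execute("SELECT * FROM emp_public").description]

# The view carries no data of its own: a new row in the base table is
# visible through the view immediately.
conn.execute("INSERT INTO emp VALUES (7934, 'MILLER', 1300, 10)")
count = conn.execute("SELECT COUNT(*) FROM emp_public").fetchone()[0]
```

Because the view is only a stored query, the INSERT into the base table shows up through the view without any refresh step.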


10. What is an index? Explain the different types of indexes. Ans: An index is a structure that provides faster access to the rows of a table based on the values of one or more columns. The index stores data values and the addresses of the rows where those data values occur. In the index the data values are sorted in ascending or descending order, so the RDBMS can quickly search the index to find a particular data value and hence the row associated with it.

Creating an Index :- Indexes are created using the CREATE INDEX statement. The general form of the CREATE INDEX statement is:

CREATE [UNIQUE] INDEX index-name
ON base-table(col1 [order] [, col2 [order]] ...);

Each order specification is ASC (ascending) or DESC (descending), ASC being the default. The left-to-right sequence of naming columns in the CREATE INDEX statement corresponds to major-to-minor ordering in the usual way. Once created, the index is automatically managed by the RDBMS to reflect updates on the base table, till the index is dropped. The UNIQUE option in the CREATE INDEX statement specifies that two rows in the indexed base table will not be allowed to take the same value for the indexed column or column combination at the same time. Indexes, like base tables, can be created and dropped at any time. Any number of indexes can be built on a single base table.

Dropping an Index :- Indexes can be dropped explicitly using the DROP INDEX command. Whenever the base table is dropped, the indexes for that table are automatically dropped. The syntax of the DROP INDEX command is

DROP INDEX index-name;

Once the command is executed, the index is destroyed.

Types of Indexes: There are different types of indexes. Most systems allow indexes involving more than one

column (composite indexes) and indexes that prevent duplication of data (unique indexes). Another option is the clustered index, where the index order is reflected in both the logical and the physical storage of the rows.

i) Composite Indexes :- When an index is made up of more than one column it is called a composite index. Composite indexes are used when two or more columns are best searched as a unit because of their logical relationship. Composite index columns do not have to be specified in the same order as in the CREATE INDEX statement; you can use any order you want. For better performance, it is a good idea to start with the column that you use most often in searches.

ii) Unique Indexes :- A unique index is one in which no two rows are permitted to have the same value for the indexed column or columns. Unique indexes are usually created on the primary key of a table. You specify the index as a unique index by using the keyword UNIQUE. For unique indexes, the RDBMS checks for duplicate values when the index is created (if data already exists) and each time new data is added. A unique index should be created only if the column on which the index is being built is expected to be unique. For example, creating a unique index on the first or last names would not be a good idea, as it does not make any sense and will create problems if there is more than one person with the same last name.

iii) Clustered Indexes :- Many RDBMSs offer you the choice of making your index clustered or non-clustered. When you create a clustered index, the system sorts the rows of the table whenever a change is made to the index. Since a clustered index controls the physical location of data, there can be only one clustered index per table, most often created on the primary key. In a non-clustered index, the physical order of the rows is not the same as their indexed order. There can be as


many non-clustered indexes per table as you wish. Clustered indexes are much faster than non-clustered ones. A clustered index is usually very advantageous when many rows with contiguous values are being retrieved. But clustered indexes make data addition and modification slower, as the data in the table also has to be sorted along with that in the index.

11. Describe NULLs in action. Ans: SQL provides a special construct represented by the keyword NULL, which can be thought of as a literal representation of null. However, this construct cannot appear in all contexts in which a literal can appear. According to the standard, "There is no literal for a null value, although the keyword NULL is used in some places to indicate that a null value is desired." The literal NULL can appear only in the following contexts:
As a default specification within a column or domain definition.
As an insert/update item specifying a value to be placed in a column position on INSERT/UPDATE.
As a CAST source operand.
As a CASE result.
As part of a referential specification.

As a default specification within a column or domain definition :- A column definition can have the default definition as:

DEFAULT {literal | expression | NULL}

So, if the column has an explicit DEFAULT clause, the value specified in that clause is the default. It can be a literal, an expression or NULL. If neither the column nor its domain has an explicit DEFAULT clause, then the default is null.

INSERT and UPDATE Operations :- The syntax of the INSERT statement is as follows:
INSERT INTO table [(column-list)]
VALUES(val1, val2, ..., valn);

If the statement contains an explicit column list that omits one or more columns of the target table, then every row inserted into the target table by the INSERT statement will contain the appropriate default or null in each omitted column. If a column that is omitted from the list does not have a default value, then it will result in an error. Similarly, the syntax of the UPDATE statement is given below:
UPDATE table SET column=value WHERE condition

Here the value is either a scalar expression or one of the keywords DEFAULT or NULL. The rows of the table satisfying the WHERE clause will be updated with null values if the value in the SET clause is specified as NULL.

CAST and CASE :- CAST converts a given scalar value to a given scalar data type. The syntax of CAST is

CAST (scalar-expression AS {data-type | domain})

The value of the scalar expression is converted either to the data type or to the data type underlying the domain. If any operand in the scalar expression evaluates to null, then the result of the operation evaluates to null. A CASE operation returns one of a specified set of scalar values depending on some condition. The format of CASE is given below:

CASE WHEN conditional-expression THEN scalar-expression ELSE scalar-expression END

Referential specification :- The ON DELETE clause is used for enforcing referential integrity. It defines the delete rule for the referenced table with respect to a foreign key, that is, what happens if an attempt is made to delete row(s) from the referenced table. One of the possible values for the ON DELETE clause is SET NULL. When ON DELETE SET NULL is specified, the target row is deleted and each component of the foreign key is set to null in all matching rows in the referencing table.
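The ON DELETE SET NULL rule can be exercised end to end. The sketch below uses SQLite through Python's sqlite3 module; note that SQLite enforces foreign keys only when the foreign_keys pragma is switched on, and the dept/emp rows are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite checks foreign keys only when this pragma is enabled.
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE dept (deptno INTEGER PRIMARY KEY, dname TEXT)")
conn.execute("""CREATE TABLE emp (
    empno  INTEGER PRIMARY KEY,
    ename  TEXT,
    deptno INTEGER REFERENCES dept(deptno) ON DELETE SET NULL)""")
conn.execute("INSERT INTO dept VALUES (10, 'ACCOUNTING')")
conn.execute("INSERT INTO emp VALUES (7782, 'CLARK', 10)")

# Deleting the referenced row sets the foreign key to null in every
# matching row of the referencing table, instead of failing or cascading.
conn.execute("DELETE FROM dept WHERE deptno = 10")
deptno = conn.execute("SELECT deptno FROM emp WHERE empno = 7782").fetchone()[0]
```

After the DELETE, CLARK's row survives but its deptno has been set to null by the delete rule.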


12. What are the effects of NULLs? Ans: As a general rule, it is always better to avoid nulls. But there are certain cases when you cannot prevent having nulls in your tables. In order to use them properly, it is very important to know how nulls will behave in different situations. Given below is a summary of the behavior of nulls:
Null values in aggregate functions are ignored.
Conditional statements are extended from the Boolean two-valued True/False logic to a three-valued True/False/Unknown logic.
All operators except || will return null if any of the operands are null.
To test for null, the comparison operators IS NULL and IS NOT NULL must be used.
A conversion function with null as an argument returns null.

All aggregate functions except COUNT(*) ignore nulls in their calculations. The functions which have a significant effect due to the presence of nulls in the data are AVG( ) and COUNT( ). If the argument to COUNT is a constant or a column without nulls, COUNT will return the number of rows to which the conditions in the WHERE clause or GROUP BY clause apply. If the argument is a column that includes nulls, then COUNT will return the number of non-null rows to which the conditions in the WHERE clause or grouping apply. The AVG function returns the average of a set of numbers. The average is obtained by dividing the sum of the numbers by the total number of elements in the set. But if one or more values of the set are null, that is, if their values are unknown, then dividing by the total number of elements would produce an incorrect result; so AVG divides by the number of non-null values.
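The null behaviors summarized above can be verified directly. The sketch below uses SQLite via Python's sqlite3 module; the comm (commission) column and its values are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, comm REAL)")
# Two rows have a commission, two are null.
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(1, 300.0), (2, 500.0), (3, None), (4, None)])

# COUNT(*) counts rows; COUNT(comm) skips nulls; AVG divides the sum by
# the number of non-null values (800 / 2, not 800 / 4).
total_rows = conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
non_null   = conn.execute("SELECT COUNT(comm) FROM emp").fetchone()[0]
avg_comm   = conn.execute("SELECT AVG(comm) FROM emp").fetchone()[0]

# comm = NULL evaluates to Unknown and so never matches;
# the IS NULL operator must be used instead.
eq_null = conn.execute("SELECT COUNT(*) FROM emp WHERE comm = NULL").fetchone()[0]
is_null = conn.execute("SELECT COUNT(*) FROM emp WHERE comm IS NULL").fetchone()[0]
```

The contrast between eq_null and is_null is the three-valued logic in action: the equality test against NULL selects no rows at all.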

13. Write about NULL indicators. Ans: All aggregate functions, with the exception of COUNT(*), return a null if there are no rows matching the WHERE clause or if all the columns on which the function is operating contain nulls. Also imagine a situation where you are using the SELECT ... INTO SQL statement for retrieving some values into host variables and one of the columns specified in the SELECT clause contains a null. In both these cases, if the result is moved INTO a host variable, then the program will end abnormally. So if it is possible that a value to be retrieved might be a null, the user should specify an indicator parameter in addition to the host variable into which the output value will be moved. This indicator variable is known as the null indicator. Consider the following example to see how the null indicator is specified:

EXEC SQL
SELECT SUM(SAL)
INTO :SUM_SAL :NULL_INDICATOR
FROM EMP
WHERE SAL<9000;
END-EXEC

In the above example, we have defined a null indicator variable (NULL_INDICATOR) in addition to the host variable SUM_SAL. If the value retrieved is null, and the null indicator is specified, then the null indicator parameter is set to -1. If a non-null value is retrieved, then that value is moved to the regular host variable (SUM_SAL) and the value of the null indicator is set to zero. Null indicators are specified as shown in the above example, following the regular host variable and optionally separated from the regular host variable by the keyword INDICATOR. The null indicators must be of data type exact numeric with a scale of zero. For example, in the case of VS COBOL II the PICTURE clause of the null indicator is S9(4) COMP. So in embedded SQL, indicator variables can and must be used to prevent applications from terminating


abnormally. You can check for the presence of null by checking the value of the null indicator and take the necessary actions, as shown in the following example:

EXEC SQL
SELECT SAL
INTO :SAL :SAL_INDICATOR
FROM EMP
WHERE EMPNO=7788;
END-EXEC
IF SAL_INDICATOR<0 THEN
PERFORM EH-PARA THRU EH-PARA-EXIT
ELSE
CONTINUE
END-IF

In the above example, if the value of the null indicator SAL_INDICATOR is less than zero, it means that the SELECT statement has returned a null, and the appropriate error-handling procedures are performed. If the null indicator is not negative, it means that a non-null value has been moved to the host variable SAL and processing continues.

14. Explain how null values are used with comparison operators. Ans: When nulls are involved in a comparison, the rules of the game change dramatically. Consider the basic comparison operators shown below, and let us assume that x and y are compatible for comparison purposes. If x is null, y is null, or both x and y are null, then each of the following comparisons evaluates to the unknown truth-value. Remember that in SQL we use the three-valued True/False/Unknown logic. So the following comparisons evaluate to the Unknown truth-value: x=y; x<>y; x<y; x<=y; x>y; x>=y. Since in the above expressions either x or y or both x and y are null, the result of the comparison cannot be true and cannot be false; it has to be unknown. For example, if

the value of x is unknown, the result of the comparisons x=5, x>5 or x<>5 is also unknown. It should also be noted that two nulls are not considered to be equal; the comparison null=null does not evaluate to true. The comparison x=x, where x is null, evaluates to the unknown truth-value. The same is the case with x>x, x<x, x>=x and so on.

15. Explain the following: (a) BETWEEN ... AND Operator (b) IN Operator (c) ALL, ANY and SOME Conditions (d) ORDER BY clause.
Ans: a) BETWEEN ... AND Operator : The BETWEEN keyword allows you to define a predicate in the form of a range. If a column value for a row falls within this range, then the predicate is true and the row will be added to the result table. The BETWEEN range test consists of two keywords, BETWEEN and AND. It must be supplied with the upper and the lower range values; the first value must be the lower bound and the second value the upper bound.
Syntax:
SELECT column_name(s) FROM table_name
WHERE column_name BETWEEN value1 AND value2;
Example: To list all employees who earn between 1000 and 2000.
SELECT empno, ename, sal FROM emp
WHERE sal BETWEEN 1000 AND 2000;
b) IN Operator : The SQL IN operator helps you to specify multiple values in a WHERE clause.
Syntax:
SELECT column_name(s) FROM table_name
WHERE column_name IN (value1, value2, ...);
Example:


To list all employees who work under the managers 7698, 7788 and 7839.
SELECT empno, ename, job, sal FROM emp
WHERE mgr IN (7698, 7788, 7839);
c) ALL, ANY and SOME Conditions : Use the ALL, ANY, and SOME keywords to specify what makes the condition TRUE or FALSE.
ALL Condition : The ALL comparison condition is used to compare a value to a list or subquery. It must be preceded by =, !=, >, <, <=, or >= and followed by a list or subquery.
Example: To list all employees who earn more than all of the specified values.
SELECT empno, ename, sal FROM emp
WHERE sal > ALL(2000, 3000, 4000);
The above query can be transformed into an equivalent statement without ALL:
SELECT empno, ename, sal FROM emp
WHERE sal > 2000 AND sal > 3000 AND sal > 4000;
ANY Condition : The ANY comparison condition is used to compare a value to a list or subquery. It must be preceded by =, !=, >, <, <=, or >= and followed by a list or subquery.
Example: To list all employees who earn more than any one of the specified values.
SELECT empno, ename, sal FROM emp
WHERE sal > ANY(2000, 3000, 4000);
The above query can be transformed into an equivalent statement without ANY:
SELECT empno, ename, sal FROM emp
WHERE sal > 2000 OR sal > 3000 OR sal > 4000;
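The ALL/ANY rewrites above can be checked by running them. SQLite, used here as a test bed through Python's sqlite3 module, does not accept ALL/ANY against a plain value list, so the sketch runs the equivalent AND and OR forms directly; the emp rows are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT, sal REAL)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)", [
    (7369, "SMITH", 800), (7499, "ALLEN", 1600),
    (7788, "SCOTT", 3500), (7839, "KING", 5000),
])

# sal > ALL(2000, 3000, 4000): greater than every value, i.e. the AND form.
above_all = [r[0] for r in conn.execute(
    "SELECT ename FROM emp "
    "WHERE sal > 2000 AND sal > 3000 AND sal > 4000 ORDER BY empno")]

# sal > ANY(2000, 3000, 4000): greater than at least one value, the OR form.
above_any = [r[0] for r in conn.execute(
    "SELECT ename FROM emp "
    "WHERE sal > 2000 OR sal > 3000 OR sal > 4000 ORDER BY empno")]
```

Only KING clears every bound, while both SCOTT and KING clear the lowest one, which is exactly the ALL-versus-ANY distinction.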

SOME : The SOME and ANY comparison conditions do exactly the same thing and are completely interchangeable.
(d) ORDER BY clause : By default, the rows in the result table are not ordered in any way; SQL just retrieves the rows in the order in which it finds them in the table. Often, however, we need to list the output in a particular order. This could be in ascending order, in descending order, and could be based on either numerical value or text value. In such cases, we can use the ORDER BY clause to impose an order on the query results. The ORDER BY keyword can only be used in SELECT statements.
Examples: To list all employees in ascending order of name.
SELECT empno, ename, sal FROM emp
ORDER BY ename;
To list all employees in ascending order of department and descending order of salary.
SELECT deptno, empno, ename, sal FROM emp
ORDER BY deptno, sal DESC;

16. What is a Query? Describe the different ways of creating queries. Ans: A query is one which enables you to extract a subset of data from a single table or from a group of related tables, using criteria you define. To query data from tables in a database, we use the SELECT statement. The SELECT statement has many different options that one can use to retrieve the data you want.
Selecting All Columns (SELECT *) : * is used to get all the columns of a particular table. For example the SQL
SELECT * FROM EMP;
will give an entire copy of the table EMP. The star or * is shorthand for the list of all column names in the


table(s), in the left-to-right order in which the columns appear in the table(s).
Qualified Retrieval : Consider the query "Get the empno, ename, and sal of all employees hired in department 10." The SQL will be something like this:
SELECT empno, ename, sal FROM emp
WHERE deptno=10;
Eliminating Duplicates - SELECT using DISTINCT : When you give a SELECT statement, the RDBMS does not eliminate duplicates from the result. To remove duplicates from the result set, use the DISTINCT clause immediately after SELECT. For example, to select all job titles in the company:
SELECT DISTINCT job FROM emp;
SELECT using the IN Operator : If you want to get the rows which contain certain values, the best way to do it is to use the IN operator in the conditional expression. For example, to get the employees in departments 10 and 20:
SELECT * FROM emp WHERE deptno IN (10, 20);
SELECT using the BETWEEN Operator : BETWEEN can be used to get those items that fall within a range. For example, to list all employees who earn between 1000 and 2000:
SELECT * FROM emp WHERE sal BETWEEN 1000 AND 2000;
SELECT using the LIKE Operator : LIKE is a very powerful and useful operator. For example, to get all the employees whose name starts with S:
SELECT * FROM emp WHERE ename LIKE 'S%';
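The retrieval styles above (DISTINCT, IN, BETWEEN, LIKE, computed values) can all be exercised against a small table. The sketch below uses SQLite via Python's sqlite3 module; the emp rows are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp "
             "(empno INTEGER, ename TEXT, job TEXT, sal REAL, deptno INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?, ?, ?)", [
    (7369, "SMITH", "CLERK",    800,  20),
    (7499, "ALLEN", "SALESMAN", 1600, 30),
    (7521, "WARD",  "SALESMAN", 1250, 30),
    (7788, "SCOTT", "ANALYST",  3000, 20),
])

# DISTINCT collapses the two SALESMAN rows into one job title.
jobs = [r[0] for r in conn.execute("SELECT DISTINCT job FROM emp ORDER BY job")]
# IN: rows whose deptno is any of the listed values.
in_dept = conn.execute(
    "SELECT COUNT(*) FROM emp WHERE deptno IN (10, 20)").fetchone()[0]
# BETWEEN: an inclusive range test on salary.
mid_sal = conn.execute(
    "SELECT COUNT(*) FROM emp WHERE sal BETWEEN 1000 AND 2000").fetchone()[0]
# LIKE: names starting with S.
s_names = [r[0] for r in conn.execute(
    "SELECT ename FROM emp WHERE ename LIKE 'S%' ORDER BY ename")]
# Computed value: annual salary derived from the monthly figure.
annual = conn.execute(
    "SELECT sal * 12 FROM emp WHERE empno = 7369").fetchone()[0]
```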

Selecting Computed Values : SQL statements can be used for retrieving computed values without any problems. For example, to list empno, ename, sal, and annual salary from the emp table:
SELECT empno, ename, sal, sal*12 FROM emp;

17. Give the syntax of the SELECT statement and explain the various options. Ans: The syntax of the SELECT statement with most of the options is given below:

SELECT [DISTINCT] column-or-expression-list
FROM table(s)
[WHERE conditional-expression]
[GROUP BY column(s)]
[HAVING conditional-expression]
[ORDER BY column(s)]

In the above syntax only the SELECT statement and the FROM clause are required. The other four clauses, WHERE, GROUP BY, HAVING and ORDER BY, are optional. You include them in the SELECT statement only when you require the functions they provide. The SELECT statement lists the column names, computed values, aggregate functions, etc. to be retrieved. The FROM clause specifies the table or tables from which the data has to be retrieved. The WHERE clause tells SQL to include only certain rows of data in the result set; it is in the WHERE clause that you specify the search criteria. The GROUP BY clause specifies a summary query and is usually used with aggregate functions like SUM, AVG, MAX and MIN. The HAVING clause tells SQL to include only certain groups produced by the GROUP BY clause in the query result set. HAVING is the group-level equivalent of the WHERE clause and is used to specify the search condition when a GROUP BY clause is specified.
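The interplay of GROUP BY, HAVING and ORDER BY described above can be sketched as follows, again using SQLite through Python's sqlite3 module with invented emp rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, sal REAL, deptno INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)", [
    (1, 800, 20), (2, 3000, 20), (3, 1250, 30), (4, 1600, 30), (5, 1300, 10),
])

# GROUP BY builds one summary row per department; HAVING then keeps only
# the groups whose total salary exceeds 2500 (a WHERE clause could not do
# this, since the total exists only after grouping); ORDER BY sorts the
# surviving groups.
rows = conn.execute("""
    SELECT deptno, SUM(sal)
    FROM emp
    GROUP BY deptno
    HAVING SUM(sal) > 2500
    ORDER BY deptno
""").fetchall()
```

Department 10 (total 1300) is filtered out by HAVING, leaving departments 20 and 30.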


The ORDER BY clause sorts or orders the results based on the data in one or more columns, in ascending or descending order. If nothing is specified, the result set will be sorted in ascending order, which is the default. If you want the results sorted in descending order, then you will have to specify the keyword DESC.

18. What is a Subquery? Explain how subqueries are executed. Ans: A subquery is an inner query placed within the WHERE or HAVING clause of an outer query. The inner query provides values for the search condition of the outer query. Such queries are referred to as subqueries or nested subqueries, and may be nested multiple times. Sometimes either the joining or the subquery technique may be used to accomplish the same result, and different people will have different preferences about which technique to use. Other times, only a join or only a subquery will work. The joining technique is useful when data from several relations are to be retrieved and displayed, and the relationships are not necessarily nested. Let's compare two queries that return the same results.
What is the name and address of the customer who placed order number 1008?
SELECT Cust_Name, Cust_Adrs, City, State
FROM Customer C, Order O
WHERE C.Cust_Id=O.Cust_Id AND Order_Id=1008;
Now, look at the equivalent query using the subquery technique:
What is the name and address of the customer who placed order number 1008?
SELECT Cust_Name, Cust_Adrs, City, State
FROM Customer
WHERE Cust_Id=(SELECT Cust_Id FROM Order WHERE Order_Id=1008);

Note that the subquery, enclosed in parentheses, follows the form of SQL queries, and this one could stand on its own as an independent query.

19. What is a correlated subquery? Explain with examples. Ans: A correlated subquery is a nested subquery which is executed once for each candidate row considered by the main query and which, on execution, uses a value from a column in the outer query. This causes the correlated subquery to be processed in a different way from the ordinary nested subquery. A correlated subquery is identified by the use of an outer query's column in the inner query's predicate clause. With a normal nested subquery, the inner select runs first and executes once, returning a value to be used by the main query. A correlated subquery, on the other hand, executes once for each row (candidate row) considered by the outer query; the inner query is driven by the outer query.
Steps to execute a correlated subquery:
1. Get a candidate row (fetched by the outer query).
2. Execute the inner query using the candidate row's value.
3. Use the value(s) resulting from the inner query to qualify or disqualify the candidate.
4. Repeat until no candidate row remains.
Although the correlated subquery executes repeatedly, once for each row in the main query, there is no suggestion that correlated subqueries are less efficient than ordinary non-correlated subqueries. We will return to efficiency considerations later in this unit. We can use a correlated subquery to find employees who earn a salary greater than the average salary for their department.
SELECT EMPNO, ENAME, SAL, DEPTNO
FROM EMP E
WHERE SAL > (SELECT AVG(SAL)


FROM EMP WHERE DEPTNO = E.DEPTNO)
ORDER BY DEPTNO;
Let us analyse the above example using the EMP table.
The Main Query
1. Select the first candidate row - SMITH in department 20 earning $800.
2. EMP in the FROM clause has the alias E, which qualifies the DEPTNO column reference in the inner query's WHERE clause.
3. The WHERE clause compares 800 against the value returned by the inner query.
The Inner Query
4. Computes AVG(SAL) for the employee's department.
5. The WHERE clause's department value is the candidate's department (E.DEPTNO) - the value passed into the inner query from the outer query's DEPTNO column.
6. AVG(SAL) for SMITH's department, 20, is $2175.
7. The candidate row does not meet the condition, so it is discarded.
8. Repeat from step 1 for the next candidate row: ALLEN in department 30 earning $1600.

20. What are the rules to be followed when using aggregate functions? Ans: SQL provides six aggregate functions. These are powerful tools and can improve the data retrieval power considerably. There are some rules which must be followed while using these functions. They are:
For SUM and AVG the argument must be of numeric type.
Except for the special case COUNT(*), the argument may be preceded by the keyword DISTINCT to eliminate duplicate rows before the function is applied to a column. The alternative to DISTINCT is ALL, which is the default. DISTINCT is legal for MAX and MIN but meaningless.

The special function COUNT(*) is used to count all rows without any duplicate elimination, so the keyword DISTINCT is not allowed for this function.
The argument cannot involve any aggregate function references or table expressions at any level of nesting. For example, the SQL SELECT AVG(MIN(QTY)) AS Average is illegal.
Any NULL in the column is eliminated before the function is applied, regardless of whether DISTINCT is specified or not, except in the case of COUNT(*), where nulls are handled like normal values.
When using MIN and MAX with string data, the comparison of the strings depends on the character set that is being used. In computers using the ASCII character set, digits come before letters in the sorting sequence and all uppercase characters come before the lowercase characters. On machines that use the EBCDIC character set, the order is lowercase characters, uppercase characters and then digits. Because of this difference in collating sequence, a query using the ORDER BY clause can produce different results on the two systems; hence there will also be differences in the results of the MIN and MAX functions.

21. What are aggregate functions? Explain with examples. Ans: The aggregate functions greatly enhance the power of SQL statements. They let you summarize the data from the tables. An aggregate function takes an entire column of data as its argument and produces a single data item that summarizes the column. The aggregate functions provided by SQL are:
COUNT( ) and COUNT(*)
SUM( )
AVG( )
MAX( )


MIN( )
COUNT( ) and COUNT(*) : COUNT( ) is used to count the number of values in a column. COUNT(*) is used to count the number of rows in the query results. Consider the following example:
Get the number of rows in the EMP table.
SELECT COUNT(*) FROM emp;
SUM( ) : The SUM( ) function is used to find the sum of the values in a column. The following examples will illustrate its usage:
Find the total salary for all the employees in the organization.
SELECT SUM(sal) FROM emp;
Find the total salary for all employees in department 10.
SELECT SUM(sal) FROM emp WHERE deptno=10;
AVG( ) : The AVG( ) function is used to find the average of the values in a column. Consider the following queries:
Find the average salary of all employees.
SELECT AVG(sal) FROM emp;
Find the average salary of employees, department-wise.
SELECT deptno, AVG(sal) FROM emp GROUP BY deptno;
MAX( ) and MIN( ) : The MAX( ) function is used to find the maximum value in a column, whereas the MIN( ) function is used to find the minimum value in a column.
Find the maximum and minimum salary earned in the company.
SELECT MAX(sal), MIN(sal) FROM emp;
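The aggregate functions above can be run together against one small table. The sketch uses SQLite through Python's sqlite3 module; the emp rows are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, sal REAL, deptno INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [(1, 800, 20), (2, 3000, 20), (3, 1600, 30)])

# COUNT(*): number of rows; SUM: column total, optionally restricted
# by a WHERE clause; MAX/MIN: extremes of the column.
n      = conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
total  = conn.execute("SELECT SUM(sal) FROM emp").fetchone()[0]
dept20 = conn.execute("SELECT SUM(sal) FROM emp WHERE deptno = 20").fetchone()[0]
hi, lo = conn.execute("SELECT MAX(sal), MIN(sal) FROM emp").fetchone()

# AVG combined with GROUP BY gives one average per department.
by_dept = conn.execute(
    "SELECT deptno, AVG(sal) FROM emp GROUP BY deptno ORDER BY deptno").fetchall()
```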

22. Write about the INSERT, UPDATE and DELETE commands with examples. Ans: INSERT Statement :- This statement, as the name suggests, is used for inserting rows into a table. The general syntax of the INSERT statement is as follows:

INSERT INTO table-name [(column[,column]...)]
VALUES(literal[,literal]...);

or

INSERT INTO table-name [(column[,column]...)]
subquery

In the first format a single row is inserted into the table, having the specified values for the specified columns. The first literal corresponds to the first column, the second literal corresponds to the second column, and so on. In the second format the subquery is evaluated and a copy of the result is inserted into the table. Here also the one-to-one correspondence between the literals and column names holds. In both cases, omitting the list of columns is equivalent to specifying all columns of the target table in their left-to-right order within that table.

1. INSERT INTO dept VALUES(40,'ADMIN','TIRUPATI');
2. INSERT INTO dept1 SELECT * FROM dept;

Bulk Inserts of Data : Many times, it will be required to insert thousands of rows into a table. These data might be stored in sequential files or might have to be compiled from different sources, and then inserted into the table. One way of doing it is to write a program which reads the data from the source, performs the necessary formatting (if required) and then inserts it into the table using the INSERT statement. But this can be a very


time-consuming process, especially if the number of rows to be inserted runs into thousands. In order to circumvent this problem, most database management systems provide utilities which perform the bulk loading of data. This is usually given as a separate utility and not as a part of SQL, as the standard does not address this function. Some examples are the SQL*Loader utility of Oracle and the LOAD utility of DB2.

UPDATE Statement :- The UPDATE statement is used to modify or update an already existing row or rows of a table. The syntax of the UPDATE statement is:

UPDATE table-name
SET column=scalar-expression[,column=scalar-expression]
[WHERE condition]

All rows in the table which satisfy the condition will be updated in accordance with the assignments in the SET clause. If the WHERE clause is omitted, all rows in the table will be updated.
Examples:
1. Change the price of the book B01 to 600.
UPDATE book SET price=600 WHERE book_id='B01';
2. Increase the price of all books which were published before 1997 by 20%.
UPDATE book SET price=price*1.2 WHERE year<1997;
3. Change the publisher name for all KSRP books to KSR PUBLISHERS and reduce Rs.15 from the price.
UPDATE book SET publisher='KSR PUBLISHERS', price=price-15 WHERE publisher='KSRP';

DELETE Statement :- The DELETE statement is used to delete an already existing row or rows from a table. The syntax of the DELETE statement is:

DELETE FROM table-name

[WHERE condition]

All rows in the table which satisfy the condition will be deleted. If the WHERE clause is omitted, all rows will be deleted. Consider the following examples:
1. Delete all the books published by PAN BOOKS.
DELETE FROM book WHERE publisher='PAN BOOKS';
2. Delete all rows from the BOOK table.
DELETE FROM book;
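The INSERT, UPDATE and DELETE examples above can be combined into one runnable sketch, using SQLite via Python's sqlite3 module; the book table and its rows are invented to match the examples.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book "
             "(book_id TEXT, title TEXT, publisher TEXT, price REAL, year INTEGER)")
# INSERT: one row per VALUES list, columns in left-to-right order.
conn.executemany("INSERT INTO book VALUES (?, ?, ?, ?, ?)", [
    ("B01", "SQL Basics", "KSRP", 500, 1995),
    ("B02", "DB Design",  "PAN BOOKS", 300, 1999),
])

# UPDATE: a 20% raise for books published before 1997 touches only B01.
conn.execute("UPDATE book SET price = price * 1.2 WHERE year < 1997")
b01_price = conn.execute(
    "SELECT price FROM book WHERE book_id = 'B01'").fetchone()[0]

# DELETE with a WHERE clause removes only the matching rows;
# without one it would empty the table.
conn.execute("DELETE FROM book WHERE publisher = 'PAN BOOKS'")
remaining = conn.execute("SELECT COUNT(*) FROM book").fetchone()[0]
```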


UNIT IV
1. What is a Join? Explain the different types of joins. Ans: A join is a query in which data is retrieved from more than one table. A join matches data from two or more tables, based on the values of one or more columns in each table. All matches are combined, creating a resulting row that is the concatenation of the columns from each table where the specified columns match.
Theta Joins, Equi-joins and Non-Equi-joins : The theta-join between two relations R and S defines a relation that contains the rows from the Cartesian product of R and S satisfying a predicate of the form R.ai θ S.bi, where θ may be one of the comparison operators =, <>, <, <=, >, >=. If the comparison operator is equality, then the join is called an equi-join. If the comparison operator is not the equal sign, then it is a non-equi-join.
1. List empno, ename and dname from the emp and dept relations (equi-join):
SELECT empno, ename, dname FROM emp, dept
WHERE emp.deptno=dept.deptno;
2. List empno, ename, sal and grade from the emp and salgrade relations (non-equi-join):
SELECT empno, ename, sal, grade FROM emp e, salgrade s
WHERE e.sal BETWEEN s.losal AND s.hisal;
Self-Join - Joining a Table with Itself :- In a self-join, a table is joined to itself; values in one column are compared with values in a column of the same table. For example, consider the following query.
To list employees who earn less than their manager:

SELECT e.ename EMP_NAME, e.sal EMP_SAL, m.ename MGR_NAME, m.sal MGR_SAL
FROM emp e, emp m
WHERE e.mgr=m.empno AND e.sal<m.sal;

Outer Joins :- When tables are joined, rows that contain matching values in the join predicates are returned. Sometimes, you may want both matching and non-matching rows returned for the tables being joined. This kind of operation is known as an outer join. The missing row(s) can be returned if an outer-join operator is used in the join condition. In Oracle the operator is a plus sign enclosed in parentheses, (+), and is placed on the side of the join (table) which is deficient in information. The operator has the effect of creating one or more NULL rows, to which one or more rows from the non-deficient table can be joined. One NULL row is created for every additional row in the non-deficient table. Example:

SELECT e.ename, d.deptno, d.dname FROM emp e, dept d WHERE e.deptno (+) = d.deptno;

The outer-join operator can appear on only one side of the expression, the side that has information missing. It returns those rows from one table which have no direct match in the other table.

2. What is union operation? Explain Ans: The union operation combines two sets of rows into a single set composed of all the rows in either or both of the two original sets, provided the two sets are union compatible. For union compatibility: the two sets must contain the same number of columns, and each column of the first set must be either of the same data type as the corresponding column of the second set or convertible to the same data type as the corresponding column of the second set. The syntax for UNION is : SELECT statement


UNION [ALL] SELECT statement Consider the following query. Get the details of all jobs in the company. The SQL is given below SELECT job FROM emp WHERE deptno=10 UNION SELECT job FROM emp WHERE deptno=20; Duplicate Elimination :- In theory, the union of two sets cannot contain duplicates. But most RDBMSs provide the provision of retaining or eliminating the duplicates. The UNION verb eliminates duplicates but UNION ALL retains them. Duplicates are always eliminated from the results table of a UNION unless the UNION ALL operator is specified. But in some cases it improves performance to use UNION ALL instead of UNION. Example: SELECT job FROM emp WHERE deptno=10 UNION ALL SELECT job FROM emp WHERE deptno=20; Ordering the Results :- Any number of SELECTs can be UNIONed together as long as they are UNION compatible. Although each SELECT statement can have its own WHERE clause, the query as a whole takes only one ORDER BY clause. Any ORDER BY clause in the query must appear as part of the final SELECT statement and must identify ordering columns by their ordinal position or number and not by name. Example: SELECT job,deptno

FROM emp WHERE deptno=10 UNION ALL SELECT job, deptno FROM emp WHERE deptno=20 ORDER BY 1;

3. Write a short note on PL/SQL. Ans: PL/SQL stands for Procedural Language/SQL. PL/SQL extends SQL by adding control structures found in other procedural languages. PL/SQL combines the flexibility of SQL with the powerful features of a third-generation language. Both procedural constructs and database access are present in PL/SQL. PL/SQL can be used both in the Oracle database server and in client-side application development tools. PL/SQL is closely integrated with the SQL language, and also implements basic exception handling. Advantages of PL/SQL : support for SQL, support for object-oriented programming, better performance, portability, higher productivity, and integration with Oracle. a) Supports the declaration and manipulation of object types and collections. b) Allows the calling of external functions and procedures. c) Contains new libraries of built-in packages. d) With PL/SQL, multiple SQL statements can be processed in a single command-line statement. 4. Discuss about PL/SQL Block in detail Ans: Each PL/SQL program consists of SQL and PL/SQL statements which form a PL/SQL block. A PL/SQL block consists of three sections: The Declaration section (optional). The Execution section (mandatory). The Exception (or Error) Handling section (optional).
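Returning to the UNION examples above, the difference between UNION and UNION ALL is easy to observe with a small sketch (SQLite via Python; the EMP rows are invented sample data, but the behaviour shown is standard SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (job TEXT, deptno INT)")
conn.executemany("INSERT INTO emp VALUES (?,?)",
                 [("CLERK", 10), ("MANAGER", 10), ("CLERK", 20)])

# UNION eliminates duplicate rows from the combined result
u = conn.execute(
    "SELECT job FROM emp WHERE deptno=10 "
    "UNION SELECT job FROM emp WHERE deptno=20 ORDER BY 1"
).fetchall()

# UNION ALL retains duplicates (and can be faster: no duplicate-elimination step)
ua = conn.execute(
    "SELECT job FROM emp WHERE deptno=10 "
    "UNION ALL SELECT job FROM emp WHERE deptno=20 ORDER BY 1"
).fetchall()

print(u)   # [('CLERK',), ('MANAGER',)]
print(ua)  # [('CLERK',), ('CLERK',), ('MANAGER',)]
```

Note also that the single ORDER BY applies to the whole union and orders by column position, as described above.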


Declaration Section: The Declaration section of a PL/SQL block starts with the reserved keyword DECLARE. This section is optional and is used to declare any placeholders like variables, constants, records and cursors, which are used to manipulate data in the execution section. Placeholders may be variables, constants or records, which store data temporarily. Cursors are also declared in this section. Execution Section: The Execution section of a PL/SQL block starts with the reserved keyword BEGIN and ends with END. This is a mandatory section, and it is where the program logic is written to perform any task. Programmatic constructs like loops, conditional statements and SQL statements form the body of the execution section. Exception Section: The Exception section of a PL/SQL block starts with the reserved keyword EXCEPTION. This section is optional. Any errors in the program can be handled in this section, so that the PL/SQL block terminates gracefully. If the PL/SQL block raises exceptions that are not handled, the block terminates abruptly with errors. Every statement in the above three sections must end with a semicolon (;). PL/SQL blocks can be nested within other PL/SQL blocks, and comments can be used to document code. This is how a sample PL/SQL block looks:

DECLARE
   Variable declaration
BEGIN
   Program Execution
EXCEPTION
   Exception handling
END;

5. Discuss about PL/SQL Control Statements. Ans: As the name implies, PL/SQL supports programming language features like conditional statements and iterative statements. If-Then-Else Statement : This is a selection statement; it is used to select the statements to be executed based on a specified condition. If the specified condition is true, the statements following THEN will be executed; otherwise the statements following ELSE will be executed. Its syntax is

IF condition THEN
   statement 1;
ELSE
   statement 2;
END IF;

Iterative Statements : Iterative control statements are used when we want to repeat the execution of one or more statements a specified number of times. There are three types of loops in PL/SQL: Simple Loop, While Loop and For Loop. 1) Simple Loop : A simple loop is used when a set of statements is to be executed at least once before the loop terminates. An EXIT condition must be specified in the loop, otherwise the loop will iterate infinitely. When the EXIT condition is satisfied, control exits from the loop. The general syntax to write a simple loop is:

LOOP
   statements;
   EXIT; {or EXIT WHEN condition;}
END LOOP;

These are the important steps to be followed while using a simple loop.


1) Initialise a variable before the loop body. 2) Increment the variable in the loop. 3) Use an EXIT WHEN statement to exit from the loop. If you use an EXIT statement without a WHEN condition, the statements in the loop are executed only once. 2) While Loop : A WHILE loop is used when a set of statements has to be executed as long as a condition is true. The condition is evaluated at the beginning of each iteration, and iteration continues until the condition becomes false. The general syntax to write a WHILE loop is:

WHILE <condition>
LOOP
   statements;
END LOOP;

Important steps to follow when using a while loop: 1) Initialise a variable before the loop body. 2) Increment the variable in the loop. 3) EXIT WHEN and EXIT statements can be used in while loops, but this is not done often. 3) FOR Loop : A FOR loop is used to execute a set of statements a predetermined number of times. Iteration occurs between the start and end integer values given. The counter is always incremented by 1, and the loop exits when the counter reaches the value of the end integer. The general syntax to write a FOR loop is:

FOR counter IN val1..val2
LOOP
   statements;
END LOOP;

val1 - start integer value. val2 - end integer value. Important steps to follow when using a for loop: 1) The counter variable is implicitly declared in the declaration section, so it is not necessary to declare it explicitly. 2) The counter variable is incremented by 1 and does not need to be incremented explicitly.

3) EXIT WHEN and EXIT statements can be used in FOR loops, but this is not done often. 6. What is cursor? Explain what are the operations performed on Cursors. Ans: A cursor is an SQL object that is associated with a specific table expression. The RDBMS uses cursors to navigate through a set of rows returned by an embedded SQL SELECT statement. A cursor can be compared to a pointer. The programmer declares a cursor and defines the SQL statement for the cursor. After that, the cursor can be used like a sequential file: the cursor is opened, rows are fetched from the cursor one row at a time, and at the end of processing the cursor is closed. Cursor Operations :- The four operations that must be performed for the successful working of a cursor are : DECLARE This statement defines the cursor, gives it a name and assigns an SQL statement to it. The DECLARE statement does not execute the SQL statement but merely defines it. OPEN This makes the cursor ready for row retrieval. OPEN is an executable statement. It reads the SQL search fields, executes the SQL statement and sometimes builds the result table. FETCH This statement returns data from the result table, one row at a time, to the host variables. If the result table is not built at OPEN time, it is built during FETCH. CLOSE Releases all resources used by the cursor. When cursors are used to process multiple rows, the cursor is DECLAREd and OPENed, and the FETCH statement is coded in a loop that reads and processes each row. At the end of the processing, that is, when there are no more rows to be fetched, the cursor is CLOSEd. You can modify or delete a row by using the SQL statements UPDATE and DELETE. But if you want to read a row and, depending upon the values in the row, you want to


modify, delete or do nothing, you can do that with a cursor. This is accomplished with a cursor and a special clause of UPDATE and DELETE statements usable only by embedded SQL statements, namely WHERE CURRENT OF. The cursor is declared with a special FOR UPDATE OF clause. 7. What do you mean by cursor positions? Ans: When a cursor is open, it designates a certain collection of rows and a certain ordering for that collection. It also designates a certain position with respect to that ordering. The possible positions are : On some specific row (ON State) Before some specific row (BEFORE State) After some specific row (AFTER State) Cursor state is affected by a variety of operations. OPEN positions the cursor before the first row. FETCH NEXT positions the cursor on the next row or if there is no next row, after the last row. FETCH PRIOR positions the cursor on the prior row or before the first row, if there is no prior row. There are other FETCH formats like FIRST, LAST, ABSOLUTE n, RELATIVE n, etc. If the cursor is on some row and that row is deleted by that cursor, the cursor is positioned before the next row or after the last row. The ABSOLUTE n refers to the nth row in the ordered table that the cursor is associated with. A negative value for n means n rows backward from the end of the table. RELATIVE n refers to the nth row in the table relative to the row on which the cursor is currently positioned. The n in ABSOLUTE and RELATIVE row-selectors can be a literal, parameter or host variable of exact numeric data type with a scale of zero. All cursors are in the closed state at transaction initiation and are forced into the closed state at transaction termination. While the transaction is being executed, the same cursor can be opened and closed any number of times.
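The DECLARE/OPEN/FETCH/CLOSE cycle described above maps closely onto the cursor object of Python's DB-API, which can serve as a runnable sketch of the same row-at-a-time processing (the EMP data below is made up; an embedded-SQL host program is only analogous, not identical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INT, ename TEXT)")
conn.executemany("INSERT INTO emp VALUES (?,?)", [(1, "SMITH"), (2, "ALLEN")])

cur = conn.cursor()                     # DECLARE: obtain and name a cursor
cur.execute("SELECT ename FROM emp")    # OPEN: run the query, build the result set
names = []
while True:
    row = cur.fetchone()                # FETCH: one row at a time
    if row is None:                     # no more rows -> leave the loop
        break
    names.append(row[0])
cur.close()                             # CLOSE: release the cursor's resources
print(names)  # ['SMITH', 'ALLEN']
```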

8. What are the various guidelines that should be followed while coding cursors? Ans: When coding embedded SQL statements using cursors, the following guidelines will improve the performance and maintainability of the program : Declare as many cursors as needed. There is no limit on the number of cursors that can be used in a program. Avoid using certain cursors for modification. A cursor cannot be used for updates or deletes if the DECLARE CURSOR statement includes a UNION, DISTINCT, GROUP BY, ORDER BY or HAVING clause, joins, subqueries, correlated subqueries, or tables in read-only mode. Include only the columns that are being updated. Always use FOR UPDATE OF when updating with a cursor. Although it is not necessary, it is always good practice to code the FOR UPDATE OF clause in a DECLARE CURSOR statement used for deleting rows. This clause will lock the row before it is deleted, preventing other users from accessing it and thus ensuring data integrity. Use WHERE CURRENT OF to delete single rows using a cursor. Use the WHERE CURRENT OF clause on UPDATE and DELETE statements that are meant to modify only a single row. Failure to do this will result in the modification or deletion of all the rows being processed. Avoid the use of the FOR UPDATE OF clause on non-updateable cursors. Do not code the FOR UPDATE OF clause on cursors that access read-only data. Open cursors before fetching. Initialize host variables before opening the cursor. Explicitly close the cursors. Even though the RDBMS closes all open cursors at the end of the program, explicitly close cursors using the CLOSE statement; otherwise you will be holding resources, which will affect performance.


9. Discuss about Sub Procedures and Function in PL/SQL. Ans: A stored procedure, or simply a proc, is a named PL/SQL block which performs one or more specific tasks. This is similar to a procedure in other programming languages. A procedure has a header and a body. The header consists of the name of the procedure and the parameters or variables passed to the procedure. The body consists of a declaration section, an execution section and an exception section, similar to a general PL/SQL block. A procedure is similar to an anonymous PL/SQL block, but it is named for repeated usage. We can pass parameters to procedures in three ways: 1) IN parameters 2) OUT parameters 3) IN OUT parameters. The example below creates a procedure employer_details which gives the details of the employees.
CREATE OR REPLACE PROCEDURE employer_details
IS
   CURSOR emp_cur IS
      SELECT first_name, last_name, salary FROM emp_tbl;
   emp_rec emp_cur%rowtype;
BEGIN
   FOR emp_rec IN emp_cur
   LOOP
      dbms_output.put_line(emp_rec.first_name || ' ' || emp_rec.last_name
                           || ' ' || emp_rec.salary);
   END LOOP;
END;

A function is a named PL/SQL Block which is similar to a procedure. The major difference between a procedure and a function is, a function must always return a value, but a procedure may or may not return a value. The General Syntax to create a function is:
CREATE [OR REPLACE] FUNCTION function_name [parameters]
RETURN return_datatype
IS
   Declaration_section
BEGIN
   Execution_section
   RETURN return_variable;
EXCEPTION
   Exception_section
   RETURN return_variable;
END;

1) Return Type: The header section defines the return type of the function. The return datatype can be any Oracle datatype like VARCHAR, NUMBER etc. 2) The execution and exception sections should both return a value which is of the datatype defined in the header section. For example, let's create a function called 'employer_details_func' similar to the one created in the stored procedure above.
1> CREATE OR REPLACE FUNCTION employer_details_func
2> RETURN VARCHAR2
3> IS
5>    emp_name VARCHAR(20);
6> BEGIN
7>    SELECT first_name INTO emp_name
8>    FROM emp_tbl WHERE empID = '100';
9>    RETURN emp_name;
10> END;
11> /

There are two ways to execute a procedure. 1) From the SQL prompt: EXECUTE [or EXEC] procedure_name; 2) Within another procedure, simply use the procedure name: procedure_name;

In the example we are retrieving the first_name of the employee with empID 100 into the variable emp_name. The return type of the function is VARCHAR, which is declared in line no 2.


The function returns 'emp_name', which is of type VARCHAR, as the return value in line no 9. A function can be executed in the following ways. 1) Since a function returns a value, we can assign it to a variable: employee_name := employer_details_func; If employee_name is of datatype VARCHAR, we can store the name of the employee by assigning the return value of the function to it. 2) As part of a SELECT statement: SELECT employer_details_func FROM dual; 3) In a PL/SQL statement like dbms_output.put_line(employer_details_func); This line displays the value returned by the function.

10. What is Package? Explain Ans: A package is a collection of PL/SQL elements that are "packaged" or grouped together into a single named unit. Here is a partial list of the kinds of elements you can place in a package:

Cursors
Variables (scalars, records, tables, etc.) and constants
Exception names and pragmas for associating an error number with an exception
PL/SQL table and record TYPE statements
Procedures and functions

Packages provide a structure to organize your modules and other PL/SQL elements. They encourage proper structured programming techniques in an environment that often befuddles the implementation of structured programming. When you place a program unit into a package, you automatically create a "context" for that program by collecting related PL/SQL elements together. The PL/SQL package is a deceptively simple, yet powerful construct. It consists of up to two distinct parts: the specification and the body.

The package specification, which defines the public interface (API) of the package: those elements that can be referenced outside of the package. The package body, which contains the implementation of the package and the elements of the package you want to keep hidden from view. In the example below, you package a record type, a cursor, and two employment procedures. Notice that the procedure hire_employee uses the database sequence empno_seq and the function SYSDATE to insert a new employee number and hire date, respectively.
CREATE OR REPLACE PACKAGE emp_actions AS  -- spec
   TYPE EmpRecTyp IS RECORD (emp_id INT, salary REAL);
   CURSOR desc_salary RETURN EmpRecTyp;
   PROCEDURE hire_employee (
      ename  VARCHAR2,
      job    VARCHAR2,
      mgr    NUMBER,
      sal    NUMBER,
      comm   NUMBER,
      deptno NUMBER);
   PROCEDURE fire_employee (emp_id NUMBER);
END emp_actions;

CREATE OR REPLACE PACKAGE BODY emp_actions AS  -- body
   CURSOR desc_salary RETURN EmpRecTyp IS
      SELECT empno, sal FROM emp ORDER BY sal DESC;
   PROCEDURE hire_employee (
      ename  VARCHAR2,
      job    VARCHAR2,
      mgr    NUMBER,
      sal    NUMBER,
      comm   NUMBER,
      deptno NUMBER) IS
   BEGIN
      INSERT INTO emp VALUES (empno_seq.NEXTVAL, ename, job,
         mgr, SYSDATE, sal, comm, deptno);
   END hire_employee;

   PROCEDURE fire_employee (emp_id NUMBER) IS
   BEGIN
      DELETE FROM emp WHERE empno = emp_id;
   END fire_employee;
END emp_actions;


Only the declarations in the package spec are visible and accessible to applications. Implementation details in the package body are hidden and inaccessible. So, you can change the body (implementation) without having to recompile calling programs. Advantages of Packages : Packages offer several advantages: modularity, easier application design, information hiding, added functionality, and better performance. Modularity : Packages let you encapsulate logically related types, items, and subprograms in a named PL/SQL module. Each package is easy to understand, and the interfaces between packages are simple, clear, and well defined. This aids application development. Easier Application Design : When designing an application, all you need initially is the interface information in the package specs. You can code and compile a spec without its body. Then, stored subprograms that reference the package can be compiled as well. You need not define the package bodies fully until you are ready to complete the application. Information Hiding : With packages, you can specify which types, items, and subprograms are public (visible and accessible) or private (hidden and inaccessible). For example, if a package contains four subprograms, three might be public and one private. The package hides the implementation of the private subprogram so that only the package (not your application) is affected if the implementation changes. This simplifies maintenance and enhancement. Also, by hiding implementation details from users, you protect the integrity of the package.

Added Functionality : Packaged public variables and cursors persist for the duration of a session, so they can be shared by all subprograms that execute in the environment. They also allow you to maintain data across transactions without having to store it in the database. Better Performance : When you call a packaged subprogram for the first time, the whole package is loaded into memory, so later calls to related subprograms in the package require no disk I/O. Packages also stop cascading dependencies and thereby avoid unnecessary recompiling. For example, if you change the implementation of a packaged function, Oracle need not recompile the calling subprograms because they do not depend on the package body. 11. What is Trigger? Explain Different types of Triggers? Ans: A trigger defines an action the database should take when some database-related event occurs, for any event that causes a change in the contents of a table. Triggers enhance the ability of database users to make the database respond actively to changes within or outside the database. Types of Triggers : There are twelve basic types of triggers. A trigger's type is defined by the type of triggering transaction and by the level at which the trigger is executed. The following describe these classifications: Row-Level Triggers, Statement-Level Triggers, BEFORE and AFTER Triggers. Row-Level Triggers :- Row-level triggers fire once for each row in a transaction. These types of triggers are very useful in cases like audit trails, where you want to track the modifications made to the data in a table. Different RDBMSs have implemented row-level triggers in different ways. For example, in Oracle's PL/SQL, you can create a row-level trigger by using the FOR EACH ROW clause in the CREATE TRIGGER command.


Statement-Level Triggers :- Statement-level triggers execute once for each transaction. For example, if a single transaction inserted 700 rows into a table, then a statement-level trigger on that table would be executed only once. Statement-level triggers therefore are not often used for data-related activities. They are normally used to enforce additional security measures on the types of transactions that may be performed on a table. Statement-level triggers are the default type of trigger created using the CREATE TRIGGER command. BEFORE and AFTER Triggers :- Since triggers occur because of events, they may be set to occur immediately before or after those events. Since the events that execute triggers are database transactions, triggers can be executed immediately before or after INSERTs, UPDATEs and DELETEs. Within a trigger, you will be able to reference the old and new values involved in the transaction. The access required for the old and new data may determine which type of trigger you need. OLD refers to the data as it existed prior to the transaction; updates and deletes usually reference old values. New values are the data values that the transaction creates, and they are referred to by the keyword NEW. If you need to set a column value in an inserted row via your trigger, then you will need to use a BEFORE INSERT trigger in order to access the NEW values. Using an AFTER INSERT trigger would not allow you to set the inserted value, since the row will already have been inserted into the table. AFTER row-level triggers are frequently used in auditing applications, since they do not fire until the row has been modified. Since the row has been successfully modified, this implies that it has successfully passed the referential integrity constraints defined for that table.
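A row-level AFTER trigger of the auditing kind described above can be demonstrated in SQLite, whose triggers fire per row when FOR EACH ROW is given (the LEDGER table and sample data are made up for the sketch; note that SQLite writes OLD/NEW without the colons Oracle uses inside trigger bodies):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ledger (item TEXT, amount REAL);
CREATE TABLE ledger_audit (item TEXT, old_amount REAL, new_amount REAL);

-- AFTER UPDATE, row-level: one audit row per modified ledger row
CREATE TRIGGER ledger_aud AFTER UPDATE ON ledger
FOR EACH ROW
BEGIN
    INSERT INTO ledger_audit VALUES (OLD.item, OLD.amount, NEW.amount);
END;
""")

conn.execute("INSERT INTO ledger VALUES ('rent', 100)")
conn.execute("UPDATE ledger SET amount = 150 WHERE item = 'rent'")

audit = conn.execute("SELECT * FROM ledger_audit").fetchall()
print(audit)  # [('rent', 100.0, 150.0)]
```

Because the trigger is AFTER UPDATE, it sees both the OLD and NEW amounts, which is exactly the access an audit trail needs.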

12. Discuss about Trigger Syntax? Ans: The syntax for the CREATE TRIGGER command is shown below :

CREATE [OR REPLACE] TRIGGER trigger_name
[BEFORE | AFTER]
[DELETE | INSERT | UPDATE [OF column_name]]
ON [user.]table_name
[FOR EACH ROW]
[WHEN condition]
[PL/SQL block];

Clearly there is a great deal of flexibility in the design of a trigger. The BEFORE and AFTER keywords indicate whether the trigger should be executed before or after the triggering transaction. The DELETE, INSERT and UPDATE keywords indicate the type of data manipulation that will constitute the triggering event. When the FOR EACH ROW clause is used, the trigger will be a row-level trigger; otherwise, it will be a statement-level trigger. The WHEN clause is used to further restrict when the trigger is executed. The restrictions enforced in the WHEN clause may include checks of old and new data values. 13. How do you create, replace and drop the triggers? Ans: The CREATE TRIGGER command is used to create or replace database triggers. For example, suppose we want to monitor any change that increases the amount by more than 40%. The following row-level BEFORE UPDATE trigger will be executed only if the new value of the Amount column is more than 40% above its old value.

CREATE TRIGGER ledger_bef_upd_row
BEFORE UPDATE ON ledger
FOR EACH ROW
WHEN (NEW.amount/OLD.amount > 1.4)
BEGIN
   INSERT INTO Ledger_audit
   VALUES(:OLD.action_date, :OLD.action, :OLD.amount, :OLD.item);
END;


CREATE TRIGGER ledger_bef_upd_row The trigger is created and named. BEFORE UPDATE ON ledger This trigger applies to the Ledger table. It will be executed before an update transaction is committed to the database. FOR EACH ROW Because the FOR EACH ROW clause is used, the trigger will apply to each row in the transaction. If the clause were not used, the trigger would execute only at the statement level. WHEN (NEW.amount/OLD.amount>1.4) The WHEN clause adds further restrictions to the triggering condition. The triggering event must not only be an update of the Ledger table, but must also reflect an increase of over 40% in the value of the amount column. BEGIN Marks the beginning of the block. INSERT INTO Ledger_audit VALUES(:OLD.action_date, :OLD.action, :OLD.amount, :OLD.item); The PL/SQL code is the trigger body. The commands shown here are executed for every update of the Ledger table that passes the WHEN condition. In order for this to succeed, the Ledger_audit table must exist, and the owner of the trigger must have been granted privileges on that table. This particular example inserts the old values of the Ledger table into the Ledger_audit table before the Ledger record is updated. When the NEW and OLD keywords are referenced in the PL/SQL block, colons (:) precede them. END; Marks the end of the PL/SQL block. Replacing Triggers : The body of a trigger cannot be altered; only its status can be altered. To alter the body, the trigger must be re-created or replaced. When replacing a trigger, you should use the CREATE OR REPLACE TRIGGER command. Using the OR REPLACE option will maintain any grants made for the original version of the trigger. The alternative solution is dropping and re-creating the trigger, but that will drop all the grants made for the trigger.

Dropping Triggers :- Triggers may be dropped using the DROP TRIGGER command. 14. Write about Combining Trigger Types and Setting Inserted values? Ans: Combining Trigger Types :- Triggers for multiple INSERT, UPDATE, and DELETE commands on a table can be combined into a single trigger, provided they are all at the same level, i.e. row or statement level. Consider the following example.

CREATE TRIGGER ledger_bef_ins_row
BEFORE INSERT OR UPDATE OF amount ON ledger
FOR EACH ROW
BEGIN
   IF INSERTING THEN
      INSERT INTO Ledger_audit
      VALUES(:NEW.action_date, :NEW.action, :NEW.amount, :NEW.item);
   ELSE
      INSERT INTO Ledger_audit
      VALUES(:OLD.action_date, :OLD.action, :OLD.amount, :OLD.item);
   END IF;
END;

The above example shows a trigger that is executed whenever an INSERT or an UPDATE occurs. The UPDATE portion of the trigger occurs only when the Amount column is updated, and an IF clause is used within the PL/SQL block to determine which of the two commands executed the trigger. Combining trigger types in this manner may help to coordinate trigger development among multiple developers, since it helps to consolidate all of the database events on a single table. Setting Inserted Values :- We may use triggers to set values during inserts and updates. For example, we may have partially denormalized our table to include derived data. Sometimes the derived columns may not be in synchronization with the base table columns. For instance, we may have a column CONTACTNAME and a derived column


UPPERCONTACTNAME which is nothing but UPPER(CONTACTNAME). Since UPPERCONTACTNAME is derived data, it may get out of synchronization with the CONTACTNAME column. That is, there may be times immediately after transactions during which UPPERCONTACTNAME is not equal to UPPER(CONTACTNAME). Consider an insert into the table. Unless our application supplies a value for UPPERCONTACTNAME during inserts, that column's value will be NULL. To avoid this synchronization problem, we may use a database trigger. Put a BEFORE INSERT and BEFORE UPDATE trigger on the table. They will act at the row level, as shown in the following listing, and will set the value for UPPERCONTACTNAME every time CONTACTNAME is changed.

CREATE TRIGGER contacts_bef_upd_ins_row
BEFORE INSERT OR UPDATE OF contactname ON contacts
FOR EACH ROW
BEGIN
   :NEW.uppercontactname := UPPER(:NEW.contactname);
END;

In the above example, the trigger body determines the value for UPPERCONTACTNAME by applying the UPPER function to the CONTACTNAME column. This trigger will be executed every time a row is inserted into the CONTACTS table and every time the CONTACTNAME column is updated. The two columns will thus be kept in synchronization.
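SQLite does not allow assigning to :NEW the way the Oracle trigger above does, so this sketch keeps the derived column synchronized with an AFTER INSERT trigger that updates the just-inserted row instead (same CONTACTS column names as the text; the row data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contacts (contactname TEXT, uppercontactname TEXT);

-- After each insert, derive UPPERCONTACTNAME from CONTACTNAME so the
-- two columns never stay out of synchronization.
CREATE TRIGGER contacts_sync AFTER INSERT ON contacts
FOR EACH ROW
BEGIN
    UPDATE contacts SET uppercontactname = UPPER(NEW.contactname)
    WHERE rowid = NEW.rowid;
END;
""")

conn.execute("INSERT INTO contacts (contactname) VALUES ('Smith')")
row = conn.execute("SELECT contactname, uppercontactname FROM contacts").fetchone()
print(row)  # ('Smith', 'SMITH')
```

The application inserted only CONTACTNAME; the trigger supplied the derived value that would otherwise have been NULL, which is the synchronization problem the section describes.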

manually perform the data manipulation that the trigger would have performed had it been enabled during the data load. The second data-load-related reason for disabling a trigger occurs when a data load fails and has to be performed a second time. In such a case, the failed load may have partially succeeded, so during the subsequent load some of the same records would be inserted again. Thus it is possible that the same trigger will be fired twice for the same transaction. Depending on the nature of the transaction and the triggers, this may not be desirable. If the trigger was enabled during the failed load, you may then have to manually undo or repeat the data manipulation that the trigger performed. To enable a trigger, use the ALTER TRIGGER command with the ENABLE clause; to enable every trigger on a table at once, use the ALTER TABLE command with the ENABLE ALL TRIGGERS clause. A trigger can be disabled in the same way, using the DISABLE or DISABLE ALL TRIGGERS clauses. 16. What are the Advantages and Limitations of Triggers? Ans: Triggers can be used to make the RDBMS take some action when a database-related event occurs. The following are the advantages and limitations of triggers. The most important advantage of triggers is that business rules can be stored in the database and enforced consistently with each and every update operation. This can dramatically reduce the complexity of the application programs that access the database. But triggers have some disadvantages also. When the business rules are moved into the database with the help of triggers, setting up the database becomes a more complex task. Also, with triggers, the rules are hidden in the database, and the application programs, which appear deceptively simple and straightforward, can cause an enormous amount of

UNIT IV


database activity. The programmer no longer has total control over what happens to the database, because a program-initiated database action may cause many other actions.
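The enabling and disabling commands discussed in Question 15 can be sketched in SQL. The trigger name reuses the CONTACTS example above; this is an illustrative sketch of the Oracle-style syntax, not text from the original:

```sql
-- Disable a single trigger before a large data load (name from the CONTACTS example)
ALTER TRIGGER contacts_bef_upd_ins_row DISABLE;

-- Or disable every trigger on the table at once
ALTER TABLE contacts DISABLE ALL TRIGGERS;

-- ... perform the data load here ...

-- Re-enable afterwards, then manually perform any data manipulation
-- the triggers would have done during the load
ALTER TRIGGER contacts_bef_upd_ins_row ENABLE;
ALTER TABLE contacts ENABLE ALL TRIGGERS;
```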

UNIT V
1. Explain Client/Server Architecture? Ans: Client/Server environments use a local area network to support a network of personal computers, each with its own storage, that are also able to share devices (such as a hard disk or printer) and software (such as a DBMS) attached to the LAN. The several client/server architectures that have evolved can be distinguished by the distribution of application logic components across clients and servers. There are three components of application logic. a. The first is the input/output logic component. This component is responsible for formatting and presenting data on the user's screen and for managing user input from a keyboard or other input device. b. The second component is the processing component. It handles data processing logic, business rules logic, and data management logic. Data processing logic includes such activities as data validation and identification of processing errors. Business rules that have not been coded at the DBMS level may be coded in the processing component. Data management logic identifies the data necessary for processing the transaction or query. c. The third component is storage, the component responsible for data storage and retrieval from the physical storage device associated with the application. Activities of a DBMS occur in the storage component logic.
Question Bank


File Server Architectures : The first client/server architectures developed were file servers. In a basic file server environment, all data manipulation occurs at the workstations where data are requested. The client handles the presentation logic, the processing logic, and much of the storage logic. One or more file servers are attached to the LAN. A file server is a device that manages file operations and is shared by each of the client PCs attached to the LAN. Each of these file servers acts as an additional hard disk for each of the client PCs. Limitations of File Servers : Considerable data movement is generated across the network. Each client workstation must devote memory to a full version of the DBMS. This means that there is less room in memory for application programs on the client PC. The DBMS copy in each workstation must manage the shared database integrity. In addition, each application program must recognize and handle record locking. Database Server Architectures : This is a two-tiered approach to client/server architecture. In this system the client workstation is responsible for managing the user interface, including presentation logic, data processing logic and business rules logic, and the database server is responsible for database storage, access, and processing. With the DBMS placed on the database server, LAN traffic is reduced, because only those records that match the requested criteria are transmitted to the client station, rather than entire data files. Some people refer to the central DBMS functions as the back-end functions, whereas they call the application programs on the client PCs front-end programs. Moving the DBMS to the database server has several advantages :

Only the database server requires processing power adequate to handle the database. The database is stored on the server, not on the client. Network traffic is reduced. User authorization, integrity checking, data dictionary maintenance, and query and update processing are all performed in one location, on the database server.
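One way such server-side processing is packaged is as a stored procedure, a module of code kept on the database server so that only the call and its result cross the network. A minimal hypothetical PL/SQL sketch; the procedure, table and column names are illustrative, not from the text:

```sql
-- Hypothetical stored procedure: the business rule and the update
-- both run on the database server, not on the client.
CREATE OR REPLACE PROCEDURE raise_credit_limit (
    p_customer_id IN NUMBER,
    p_increase    IN NUMBER
) AS
BEGIN
    -- data validation (a business rule) enforced centrally
    IF p_increase <= 0 THEN
        RAISE_APPLICATION_ERROR(-20001, 'Increase must be positive');
    END IF;
    UPDATE customers
       SET credit_limit = credit_limit + p_increase
     WHERE customer_id = p_customer_id;
    COMMIT;
END;
```

Because the procedure is compiled and stored on the server, every client application that calls it gets the same rule applied, which is the consistency advantage stored procedures are credited with below.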
Fig. Database Server Architectures (2-tier)
The use of stored procedures, modules of code that implement application logic, which are included on the database server, pushed the database server architecture toward being able to handle more critical business applications. Stored procedures have the following advantages. Performance improves for compiled SQL statements. Network traffic decreases as processing moves from the client to the server.


Security improves if the stored procedure is accessed rather than the data, and code is moved to the server, away from direct end-user access. Data integrity improves as multiple applications access the same stored procedure. Stored procedures result in a thinner client and a fatter database server. 2. Discuss about File Server Architecture? Ans: The first client/server architectures developed were file servers. In a basic file server environment, all data manipulation occurs at the workstations where data are requested. The client handles the presentation logic, the processing logic, and much of the storage logic. One or more file servers are attached to the LAN. A file server is a device that manages file operations and is shared by each of the client PCs attached to the LAN. Each of these file servers acts as an additional hard disk for each of the client PCs. For example, a file server on the LAN appears to each client PC as an additional hard disk drive; programs on your PC refer to files on this drive by the typical path specification involving the drive and any directories, as well as the file name. With a file server, each client PC may be called a fat client, one where most processing occurs on the client rather than on a server. In a file server environment, each client PC is authorized to use the DBMS when a database application program runs on that PC. Thus, there is one database but many concurrently running copies of the DBMS, one on each of the active PCs. The primary characteristic of file server architecture is that all data manipulation is performed at the client PCs, not at the file server. The file server acts simply as a shared data storage device. Software at the file server queues access requests, but it is up to the application program at each client PC, working with the copy of the DBMS on that PC, to handle all data management functions. For example, data security checks and file and record locking are initiated at the client PCs in this environment.
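Because the file server performs no data management itself, each client's own DBMS copy must request locks on shared data before changing it. The kind of explicit statement involved can be sketched as follows; the table and values are illustrative, not from the text:

```sql
-- Hypothetical: the client-side DBMS copy locks the shared table itself,
-- since the file server is only a shared disk and cannot coordinate access.
LOCK TABLE orders IN EXCLUSIVE MODE;

UPDATE orders
   SET status = 'SHIPPED'
 WHERE order_id = 1001;

COMMIT;  -- ending the transaction releases the lock
```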

Limitations of File Servers : Considerable data movement is generated across the network. Each client workstation must devote memory to a full version of the DBMS. This means that there is less room in memory for application programs on the client PC. The DBMS copy in each workstation must manage the shared database integrity. 3. Discuss about Database Server Architecture? Ans: In Database Server Architecture, the client workstation is responsible for managing the user interface, including presentation logic, data processing logic, and business rules logic, and the database server is responsible for database storage, access, and processing. Fig. shows a typical database server architecture. With the DBMS placed on the database server, LAN traffic is reduced, because only those records that match the requested criteria are transmitted to the client station, rather than entire data files. Some people refer to the central DBMS functions as the back-end functions, whereas


they call the application programs on the client PCs the front-end programs.

Advantages : Only the database server requires processing power sufficient to handle the database, and the database is stored on the server, not on the clients. The database server can be tuned to optimize database processing performance. Since less data are sent across the LAN, the communication load is reduced. User authorization, integrity checking, data dictionary maintenance, and query and update processing are all performed at one location, on the database server. The use of stored procedures, modules of code that implement application logic, which are included on the database server, pushed the database server architecture toward being able to handle more critical business applications. Stored procedures have the following advantages: Performance improves for compiled SQL statements. Network traffic decreases as processing moves from the client to the server. Security improves if the stored procedure is accessed rather than the data, and code is moved to the server, away from direct end-user access. Data integrity improves as multiple applications access the same stored procedure. Stored procedures result in a thinner client and a fatter database server. However, writing stored procedures takes more time than using Visual Basic or PowerBuilder to create an application. Also, the proprietary nature of stored procedures reduces their portability and may make it difficult to change DBMSs without having to rewrite the stored procedures. 4. Discuss about 3-Schema Architecture in detail? Ans : Different views or models of a database are developed during a system development project. They are : 1) Conceptual Schema (during the analysis phase) 2) External Schema (during the analysis and logical design phases) 3) Physical Schema (during the physical design phase) These are all views or models of the same organizational database, i.e. each organizational database has one physical schema, one conceptual schema and one or more user views. Thus, the three-schema architecture defines one database with multiple ways to look at the same set of data. The following figure shows the relationship between the three views of a database. 1) Conceptual Schema : i) A Conceptual Schema is a detailed specification of the overall structure of organizational data that is independent of any database management technology. ii) A conceptual schema defines the whole database without reference to how data are stored in a computer's secondary memory. iii) Usually a conceptual schema is shown in graphical format using entity-relationship (E-R) or object


modeling notations. We have called this type of conceptual schema a data model.
Fig. Three Schema Database Architecture
iv) The specifications for the conceptual schema are stored as metadata in a repository or data dictionary. 2) External Schema : i) An External Schema is a logical description of some portion of the database that is required by a user to perform some task. ii) A user view or external schema is independent of database technology but typically contains a subset of the associated conceptual schema, relevant to a particular user or group of users. iii) A user view is defined in both logical terms and programming language terms. iv) A logical version of a user view can be represented as an E-R or object diagram or as relations. 3) Physical Schema : i) A Physical Schema contains the specifications for how data from a conceptual schema are stored in a computer's secondary storage. ii) Database analysts and designers provide the definitions of the physical database, which provide all specifications to the database technology to allocate and manage physical secondary storage space where data are to be stored and accessed. 5. Write about Three-tier architecture for database development? Ans: A three-tier architecture includes another server layer in addition to the client and database server layers. The additional server in a three-tier architecture may be used for different purposes. Often application programs reside on the additional server, in which case it is referred to as an application server. Or the additional server may hold a local database while another server holds the enterprise database. Each of these configurations is likely to be referred to as a three-tier architecture, but the functionality of each differs, and each is appropriate for a different situation. Advantages of three-tier architectures are : 1) Scalability : Three-tier architectures are more scalable than two-tier architectures. For example, the middle tier can be used to reduce the load on a database server by using a transaction processing monitor to reduce the number of connections to a server, and additional application servers can be added to distribute application processing. 2) Technological flexibility : It is easier to change DBMS engines, though triggers and stored procedures will need to be rewritten, with a three-tier architecture. The middle tier can even be moved to a different platform. Simplified presentation services make it easier to implement various desired interfaces such as web browsers. 3) Lower long-term cost : Use of off-the-shelf components or services in the middle tier can reduce costs, as can substitution of modules within an application rather than an entire application.

Fig : 3-Tier Architecture
4) Better match of systems to business needs : New modules can be built to support specific business needs rather than building more general, complete applications. 5) Improved customer service : Multiple interfaces on different clients can access the same business processes. 6) Competitive advantage : The ability to react to business changes quickly by changing small modules of code rather than entire applications can be used to gain a competitive advantage. 7) Reduced risk : Again, the ability to implement small modules of code quickly and combine them with code purchased from vendors limits the risk assumed with a large-scale development project. Three-tier and n-tier architectures are the most recent developments in client/server approaches. Challenges associated with moving to a three-tier or more complicated environment include : 1) High short-term costs : Implementing a three-tier architecture requires that the presentation component be split from the process component. 2) Tools and Training : Because three-tier architectures are relatively new, tools for implementing them are not yet well developed. Also, because training programs are not yet widely available, companies must develop skills for their implementation in-house. 3) Experience : Similarly, few people have as yet had experience building three-tier systems. 4) Incompatible standards : Few standards have as yet been proposed for transaction processing monitors. It is not yet clear which of the several competing standards proposed for distributed objects will prevail. 5) Lack of end-user tools that work with middle-tier services : Widely available generic tools such as end-user spreadsheets and reporting tools do not yet operate through middle-tier services. 6. What are the Data security risks? Explain Ans: In database management, security is very important. Information stored in a database is very valuable and sensitive, so the database management system should be protected from unauthorized access and updates. In the case of a single user, it is necessary that the machine is kept secure from other users. Commercial database management systems have many users. Many people will be accessing the database at the same time to get information. Some may delete the information they don't need while others may add more information. The security mechanism of the database management system (DBMS) distinguishes between authorized and unauthorized users. All DBMSs provide comprehensive discretionary access control. This control regulates all user access to named objects through privileges. A privilege is nothing but a permission granted to access a named object. Because privileges are granted to users at the discretion of other users, this is called discretionary security. The following are the data security risks.


The risks adversely affect the smooth and efficient functioning of the organization. The risks may be caused by persons, circumstances, unauthorized users, external parties listening in on the network, and internal users giving away the store. 1. Data Tampering :- The data communicated should not be modified or viewed in transit. A malicious third party may tamper with the data while it is moving between the sites. An unauthorized party on the network may intercept the data in transit and change parts of that data before retransmitting it. An example of this is changing the amount of a banking transaction from Rs. 100/- to Rs. 10000/-. 2. Eavesdropping and Data Theft :- Data must be stored and transmitted securely. Steps should be taken so that credit card numbers cannot be stolen. Both public and private network owners often route portions of their network through insecure landlines, extremely vulnerable microwave and satellite links, or a number of servers. This situation makes it possible for valuable data to be open to view by any interested party. Within a building or campus, insiders with access can potentially view data not meant for them. Network sniffers can easily be installed to eavesdrop on network traffic. 3. Falsifying User Identities :- One must know one's users, otherwise it will be very hard to identify the culprits. Criminals attempt to steal users' credit card numbers and use them as their own. They may steal other personal data like bank account numbers and driving license numbers, and open bogus credit accounts. There is also the possibility of the digital signatures of users being stolen. 4. Password-Related Threats :- In large systems users have to remember a number of passwords for the different applications and services they use. Some passwords are vulnerable to dictionary attacks. Standardized passwords can be slightly changed and can be derived from known passwords. People who use complex passwords often write them down, and the attackers find them easily.
It is also likely that the users may forget them, and that needs costly administration and support efforts. 5. Unauthorized access to tables, columns and data rows :- The database contains many confidential tables, or confidential columns in a table, which should not be available to all authorized users indiscriminately; they should be protected at the column level. Some data rows may contain confidential information that should not be available indiscriminately to all authorized users. 6. Lack of accountability :- The system operator should be able to track the users' activities; otherwise the users cannot be held responsible for their actions. There must be some reliable way to know who is performing what operations on the data. Conclusion : Systems often support thousands or even lakhs of users, so they must be scalable. In such large-scale environments, the problem of managing user accounts and passwords makes the system vulnerable to error and attack. Administration of such a large number of users is difficult for a single system. This burden increases when security is to be maintained on multiple systems. To meet the challenges of scale in security administration, one must be able to centrally manage users and privileges using a directory based on industry standards. This can reduce the costs of system management and increase business efficiency. 7. What are the Data Security Requirements? Explain Ans: Every organization requires technology that ensures a secure computing environment. Technology cannot solve all problems, but most security issues can be solved using proper technology. The basic security standards which technology can surely provide are confidentiality, integrity and availability. 1. Confidentiality :- Confidentiality involves privacy of communications, safe storage of important data, authenticated users, and granular access control. A secure


system allows individuals to see only the data they are expected to see. 2. Privacy of communications :- The DBMS must control the spread of confidential personal matters such as health, employment, and credit records. Corporate data like trade secrets, proprietary information about products and processes, competitive analyses, sales plans, etc. should be kept away from the reach of unauthorized users. 3. Secure storage of sensitive data :- Once the confidential data has been entered, its integrity and privacy must be protected on the databases and servers wherein it is stored. 4. Authenticated users :- Authentication is a way of implementing decisions as to whom to trust and whom not to trust. By this, impostors can be eliminated. 5. Granular access control :- How much data should a particular user be allowed to see? Access control is the ability to hide portions of the database. There is a difference between authentication and authorization. Authentication is the process by which the identity of the user is verified. Authorization is the process by which the user is given certain privileges. Access control is the process by which the user's access to the data in the application is limited based on his privileges. 6. Integrity :- In a secure system the data contained is valid. Data integrity means that data is protected from deletion and corruption, both when it is stored in the database and when it is transmitted over the network. Integrity has many aspects : 1) Only authorized users can alter the data. 2) The database must be protected against viruses that have been designed to corrupt the data. 3) Network traffic must be protected from deletion, corruption and eavesdropping. 7. Availability :- A secure system should make data available to authorized users without interruption. A secure system must be designed to ward off situations which might put the system out of commission. System performance must remain adequate irrespective of the number of users.
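The authorization and granular access control described above are commonly expressed in SQL with privileges and views. A hypothetical sketch; the user, table and column names are illustrative, not from the text:

```sql
-- Grant named privileges on a named object to a user (discretionary security)
GRANT SELECT, UPDATE ON employees TO clerk;

-- Column-level protection: expose only non-confidential columns through a view
CREATE VIEW employees_public AS
    SELECT employee_id, name, department
      FROM employees;          -- salary and other sensitive columns are hidden

GRANT SELECT ON employees_public TO clerk;

-- Withdraw a privilege when it is no longer appropriate
REVOKE UPDATE ON employees FROM clerk;
```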

Conclusion : The administrators of systems must have adequate means of managing the user population. They may use a directory to enable this. The security implementation must not diminish the ability of valid users to get their work done. 8. Explain the dimensions of Database Security? Ans: For the protection of all the elements of complex computer systems, security measures should be taken in many dimensions. 1. Hardware or physical infrastructure :- The hardware could be damaged for a number of reasons, like variations in the voltage flow of power, natural calamities, sabotage by antisocial elements, etc. The computer and the other equipment should be beyond the reach of unauthorized users. The equipment can be damaged due to electronic interference or radiation, so it must be kept in a safe physical environment. 2. DBMS and Application software :- The DBMS can be damaged by unauthorized users. They can corrupt or delete the data in the databases. The application programs can be altered or damaged. The programs and data can be stolen and can be used against the user. The security mechanism may not be good enough, or it may fail and give access to unauthorized users. 3. Personnel :- There are a number of people involved with the DBMS and the databases, like database administrators, application administrators, application developers etc. If they want, any of them can damage the system. The wrong people can be given DBA privileges. If the security officers are not good, the security measures and policies they design and implement may not be good enough to prevent attacks on the database. Application programmers can create trap doors in their programs, which could be used for getting unauthorized entry to the database. They can alter programs or develop programs that are not secure. Users can give away their IDs and


passwords either intentionally or unintentionally. They could access, see and copy confidential data. The people working on system and data security at the site must be reliable. Before making hiring decisions, one may have to do background checks on DBAs. 4. Procedural :- The procedures adopted in operating the system must ensure the reliability of the data. There should be good security policies and efficient people to implement them. The company's employees should be given enough training in database security matters. There should be auditing from time to time to find out whether the employees are implementing the company's database security measures. Specific security risks should be identified clearly and the solutions should exactly suit the problems. A physical problem cannot be solved by a technical solution. The work environment must be made secure. 5. Technical :- Storage, access, manipulation and transmission of data must be safeguarded by technology that implements the particular information control policies. Conclusion : There can be some more factors that interfere with security; they must be attended to in time and efficiently enough. 9. Explain the different types of database users. Ans: There is a hierarchy of users of the database. The ordinary user is at the bottom; the super user is at the topmost level. The number of privileges each user has depends on his position in the hierarchy. Types of Database Users :- The types of database users and their duties change from one environment to another. Small corporations have a single database administrator who manages the database for application developers and users. In very large corporations the duties of the database administrator are divided among many people and among several areas of specialization. The main users of a database are database administrators, security officers, network administrators,

application developers, application administrators and database users. Database Administrators :- Each database needs at least one administrator. If the management system is very large, it may have many administrators who share the responsibilities. The following are the tasks of DBAs : 1. Installation and upgradation of the DBMS and application tools. 2. Evaluating present and future storage requirements for the database. 3. Creation of database storage structures. 4. Creation of primary objects (tables, views, indexes) once the application developers have designed an application. 5. Modification of the database structure. 6. Enrollment of users and maintenance of security. 7. Controlling and monitoring user access. 8. Planning for backup and recovery of database information. 9. Maintaining archived data on tape. 10. Contacting the DBMS vendor for technical support. Security Officers :- These people enroll users, control and monitor user access to the database, and maintain security. Network Administrators :- They perform the network-related tasks, manage the distributed databases and administer the networking products. Application Developers :- They design and implement database applications. They design the database structure and estimate the storage requirements for an application. They propose modifications to the database structure and inform the DBAs. They tune the application during development and establish an application's security measures. Application Administrators :- A database system has many applications, and each application can have an administrator. Database Users :- They query the database for decision making and other related activities. They perform


data entry, and modifications and deletions of existing data. They generate reports and charts from the data. Conclusion :- These are the activities of the personnel employed. The working of the whole system depends on their reliability and efficiency. 10. What are the Transaction States? Explain Ans: Transactions are said to be completed successfully when there are no failures. A transaction that completes its execution successfully is said to be committed. A committed transaction that has performed updates transforms the database into a new consistent state. This state must persist even if there is a system failure. A transaction may not always be successful; unsuccessful transactions are aborted transactions. Changes made in the database by an aborted transaction must be reversed or undone. When all such changes are undone, the transaction is said to have rolled back. Once a transaction is committed, the changes it has made cannot be undone by rolling back the transaction. The only way to undo the effects of a committed transaction is to execute a compensating transaction. Writing a compensating transaction can be a complex problem; it is not handled by the DBMS, and the user has to attend to it. A transaction must be in one of the following states : 1. Active :- This is the initial state. The transaction stays in this state while it is executing. 2. Partially committed :- A transaction is in this state when it has executed its final statement. 3. Failed :- A transaction is in this state once it is discovered that normal execution can no longer proceed. 4. Aborted :- A transaction is said to be aborted when it has rolled back and the database has been restored to the consistent state prior to the start of the transaction. The different transaction states are shown in the figure below :

Fig. Transaction states : an Active transaction becomes Partially Committed and then Committed (the updates are applied to the database), or it becomes Failed and then Aborted (the database is restored to its initial state).

5. Committed :- A transaction is in the committed state once it has been successfully executed and the database has been transformed into a new consistent state. A transaction that is either committed or aborted is said to be terminated. A transaction starts in the active state. A transaction contains a group of statements which form a logical unit of work. When the transaction has finished executing the last statement, it enters the partially committed state. Though the transaction is complete at this point, it may still have to be aborted, because the actual output may still be in main memory and a hardware failure can prevent successful completion. The database then writes enough information to disk, updating the database or writing enough information into the log files. When the last of this information is written, the transaction enters the committed state. Hardware failures or logical errors may prevent the transaction from completing; such a transaction must be rolled back, and then the transaction enters the aborted state.


When the transaction aborts, the system has only two options : (1) restart the transaction, or (2) kill the transaction.
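The commit and rollback behaviour described in this answer corresponds to the SQL COMMIT and ROLLBACK statements. A hypothetical sketch of a transfer transaction; the table, columns and values are illustrative, not from the text:

```sql
-- A transaction as a logical unit of work: both updates succeed or neither does.
UPDATE accounts SET balance = balance - 100 WHERE account_no = 'A';
UPDATE accounts SET balance = balance + 100 WHERE account_no = 'B';

-- Normal path: the transaction moves from partially committed to committed.
COMMIT;

-- Alternative path, used instead of COMMIT if a statement fails or an error
-- is detected: all changes are undone and the transaction is aborted.
-- ROLLBACK;
```

Once COMMIT has executed, ROLLBACK can no longer undo the changes; as the answer notes, only a compensating transaction written by the user can reverse a committed transaction.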

11. What are transactions? Explain the transaction properties? OR Explain what are the ACID Properties Ans: A collection of operations that forms a single logical unit of work is called a transaction. A transaction is a unit of database activity that accesses and possibly updates various data items. A transaction is usually initiated by a user program written in a data manipulation language or a programming language. Statements or function calls of the form begin transaction ... transaction statements ... end transaction usually delimit a transaction; the transaction consists of all the operations executed between the beginning and end of the transaction. Transaction Properties :- The DBMS should maintain the following transaction properties to ensure data integrity. They are atomicity, consistency, isolation and durability, often referred to as the ACID properties. Atomicity : Atomicity means that either all operations of the transaction are reflected properly in the database, or none are. This is also known as the all-or-nothing property. A transaction is considered an indivisible unit and is performed either in its entirety or not at all. Consistency : Consistency means that execution of a transaction in isolation preserves the consistency of the database. A transaction must transform the database from one consistent state to another consistent state. Isolation : In reality multiple transactions are executed concurrently by the DBMS, but the system guarantees that each transaction is unaware of the other transactions executing at the same time in the system. Durability : After a transaction is successfully completed, the changes it has made to the database persist even if there are system failures. The effects of a successfully completed transaction are permanently recorded in the database and must not be lost because of a subsequent failure. 12. Explain what are the causes of Failures of a database? Ans: Database failures are caused by hardware failures, software failures, failures due to human intervention, and environmental conditions like earthquakes, power surges and abnormal temperature conditions. Some failures may cause the database to go down; some other failures may be trivial. Some common causes of database failures are the following : 1. System crashes :- These can be due to software or hardware errors that cause the loss of main memory. 2. User Error :- A user may unknowingly delete a row or drop a table. 3. Carelessness :- The operators may cause the destruction of data or facilities because they are not concentrating on the work at hand. 4. Sabotage :- This is the intentional corruption or destruction of the data, hardware and software facilities. 5. Statement failure :- This can be defined as the inability of the database to execute a SQL statement. While running a user program, a transaction may have multiple statements, and one of the statements may fail for various reasons. In such cases the user may simply re-execute the statement after correcting the problem. 6. Application software errors :- These include logical errors in the program that is accessing the database, which cause one or more transactions to fail. 7. Network Failure :- This failure can occur while using a client/server or distributed database system where multiple database servers are connected by communication networks. Network failures will interrupt the normal operation of the database system. In this case also, rectification of the failure varies from product to product.

UNIT V


Question Bank


8. Media failure :- These are the most dangerous failures. They can cause loss of data, and it usually takes more time to recover from them than from other kinds of failures. The DBA's experience is very important in determining the kind of media recovery procedure to use to bring up the database quickly with little or no data loss. Every DBA must plan appropriate backup procedures to protect against media failures.
9. Natural and physical disasters :- Natural disasters like earthquakes, floods, fires and power failures cause damage to both hardware and software.
Conclusion :- As there are many causes of database failure, backup programs are highly necessary to keep the system going.

13. What are recovery facilities? Explain.
Ans: Database recovery is the process by which the database is restored to the correct state in the event of a failure. Database recovery services are provided by the DBMS to ensure that the database is reliable and remains in a consistent state in case of a failure. The following are the recovery facilities:
1. Backup mechanism :- The DBMS should provide a mechanism to create backup copies of the database and the log files at regular intervals, without first having to stop the system. The backup copy of the database can be used to recover the database in the event of damage or destruction. Backups are stored on offline storage like magnetic tape.
2. Logging :- To keep track of transactions, special files called log files or journals record information about all updates to the database: the transaction identifier, the type of the log record, the identifier of the data item affected by the database action, the before-image of the data item, the after-image of the data item, log management information, checkpoint records, etc. As the information contained in the log files is critical for database recovery, two or three separate copies are maintained. Hitherto log files were stored on magnetic tapes

but now they are stored online on fast direct-access storage devices (DASD).
3. Checkpointing :- To limit the amount of search time and subsequent processing, checkpointing is used. A checkpoint is a point of synchronization between the database and the transaction log file; checkpoints are also called savepoints. All buffers are force-written to secondary storage at the checkpoint. The checkpoint record contains the identifiers of all transactions that are active at the time of the checkpoint. If transactions are executed serially, when a failure occurs we check the log file to find the transactions that started after the last checkpoint. If transactions are performed concurrently, we redo all transactions that have committed since the checkpoint and undo all transactions that were active at the time of the failure.

14. Write about the different recovery techniques.
Ans: The recovery technique to be used depends on the extent of damage caused to the database. If the database has been heavily damaged, the last backup copy will have to be restored, and the update operations performed on the database since the last backup have to be reapplied using the log file. If the database has not been physically damaged but has become inconsistent, it is enough to undo the changes that caused the inconsistency. It may also be necessary to redo some transactions to ensure that the updates they performed have reached the database. Here we do not need the backup copy of the database; we can restore the database to a consistent state by using the before- and after-images held in the log file. There are many techniques to bring a database back to a consistent state.
1. Immediate update :- This technique is also known as the undo/redo algorithm. (A variation of the algorithm, where updates are recorded in the database only before a transaction commits, requires only undo.) In this technique,


the database may be updated by some operations of a transaction before the transaction reaches its commit point. These operations are recorded in the log on disk by force-writing before they are applied to the database. If a transaction fails after recording changes in the database but before reaching the commit point, the effect of the transaction on the database must be undone. So in the immediate update technique both undo and redo operations are required during recovery.
2. Deferred update :- The deferred update techniques do not physically update the database on disk until after a transaction reaches its commit point; only then are the updates recorded in the database. Before commit, all transaction updates are recorded in buffers. During commit, the updates are first recorded in the log and then written to the database. If a transaction fails before reaching its commit point, it will not have changed the database, so undo is not needed. It may be necessary to redo the operations of a committed transaction from the log, because their effect may not yet have been recorded in the database. This technique is therefore also known as the no-undo/redo algorithm.
3. Shadow paging :- Transaction logs are not necessary in this technique. Two directories for the database pages are maintained during the life of a transaction: the current directory and the shadow directory. When the transaction starts, both directories are the same. The shadow directory is never changed during the transaction, and the current directory is updated when the transaction performs a write operation. When a transaction commits, the shadow directory is discarded and the current directory becomes the database page directory. If the transaction aborts, the current directory is discarded.
4. Detached transaction actions :- Detached transaction actions do not affect the database. If the transaction fails, the batch jobs are cancelled.

5. Multi-database systems :- Recovery in a multi-database environment uses a protocol called the two-phase commit (2PC) protocol. The 2PC protocol ensures that either all the participating databases commit the effect of the transaction or none of them do. If any of the participants or the coordinator fails, it is possible to recover to a state where the transaction is either committed or rolled back.
6. Catastrophic failures :- There can be catastrophic database failures like disk crashes, where the secondary storage holding the database and the log files is damaged. The main technique used to handle such crashes is the database backup: the backup copy is restored and the system can be restarted.
Conclusion :- The above are the recovery mechanisms used to restore the database to its usual state.

15. Write about concurrency control schemes.
Ans: Concurrency control is the process of managing simultaneous operations (queries, updates, inserts, deletes, etc.) on the database without having them interfere with one another. Data stored in the database can be shared by different users and applications, and users can access shared data concurrently. If two or more users access the database at the same time and at least one of them is updating the data, there can be interference that causes inconsistencies. The objective of concurrency control is like that of multi-user computer systems, where many users perform different operations at the same time. Such systems allow different users to act at the same time through a concept called multiprogramming, which allows two or more programs or transactions to execute at the same time. By running more than one transaction concurrently we can improve system performance, but when we perform concurrent operations on a database there can be


serious problems, such as loss of data integrity and consistency, if the execution is not properly managed, scheduled and organized. Some problems arising from concurrent execution of transactions include the multiple update problem, the uncommitted dependency problem and the incorrect analysis problem.
1. Multiple update problem :- Data written by one transaction may be overwritten by another update transaction, and the overwriting results in wrong calculations. This can be avoided by preventing transaction II from reading the value of the account balance until the update by transaction I has been completed.
2. Uncommitted dependency problem :- This problem occurs when one transaction is allowed to see the intermediate results of another transaction before it is committed. It can be avoided by preventing transaction II from reading the account balance until transaction I has completed, i.e. either committed or rolled back.
3. Incorrect analysis problem :- The above two problems occur when concurrent transactions are updating the database, but problems can also arise when a transaction is not updating the database. Transactions that only read the database can produce wrong results if they are allowed to read while the database is in an inconsistent state. This problem is often called dirty read or unrepeatable read; a dirty read occurs when a transaction reads values from the database while other transactions are updating those values. It is solved by preventing transaction I from reading the account balances until all transactions that update the accounts are completed.
Conclusion :- The objective of concurrency control is to schedule transactions in such a way as to avoid any interference. One way of avoiding interference is to execute transactions one after the other, but in a multi-user environment the serial execution of transactions is not a viable option.
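The multiple (lost) update problem described above can be reproduced with two concurrent threads performing an unprotected read-modify-write on a shared balance. A minimal Python sketch (the function and variable names are invented for illustration):

```python
import threading

balance = 0
lock = threading.Lock()

def deposit_unsafe(n):
    """Read-modify-write with no concurrency control."""
    global balance
    for _ in range(n):
        tmp = balance        # read
        balance = tmp + 1    # write: another thread's update may be lost in between

def deposit_safe(n):
    """The same update serialized by a lock: no interference."""
    global balance
    for _ in range(n):
        with lock:
            balance += 1

def run(worker, n=100_000):
    """Run two concurrent workers and return the final balance."""
    global balance
    balance = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return balance

print(run(deposit_safe))    # always 200000
print(run(deposit_unsafe))  # often less than 200000: updates were lost
```

The locked version always totals 200000; the unlocked version can lose increments whenever a thread switch lands between the read and the write, which is exactly the multiple update problem.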

16. What are the concurrency control techniques? Explain.
Ans: In practice, a DBMS does not test a schedule for serializability; operations from concurrent transactions are interleaved by the operating system. Instead, the DBMS uses protocols that are known to produce serializable schedules. Two of the most popular protocols are locking and timestamping.
Locking :- Locking is one of the most popular mechanisms used to ensure serializability. It is a procedure used to control concurrent access to data, and it works in the following way: a transaction must obtain a read or write lock on a data item before it can perform a read or write operation. (A read lock is also called a shared lock; a write lock is also known as an exclusive lock.) The fundamental rules for locking are:
1. If a transaction has a read lock on a data item, it can only read the item but not update it.
2. If a transaction has a read lock on a data item, other transactions can obtain a read lock on the data item, but no write locks.
3. If a transaction has a write lock on a data item, it can both read and update the data item.
4. If a transaction has a write lock on a data item, other transactions cannot obtain either a read lock or a write lock on the data item.
This means that when a transaction acquires a write lock on a data item, it gains exclusive control over it. For example, if a transaction has obtained a read lock on a data item, another transaction can also obtain a read lock on the same data item; but if the transaction has obtained a write lock, no other transaction can access the data item until the lock is released. Almost any database object, from a field, row, page or table up to the entire database, can be locked, depending on the type of lock obtained by the transaction.
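The four rules above describe a classic shared/exclusive (readers-writer) lock. A minimal sketch of such a lock in Python follows; the class and method names are invented for illustration, and a real DBMS implements this inside its lock manager with deadlock handling as well.

```python
import threading

class SharedExclusiveLock:
    """Many readers OR one writer, per the four locking rules."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0       # transactions holding a read (shared) lock
        self._writer = False    # True while a write (exclusive) lock is held

    def acquire_read(self):
        with self._cond:
            while self._writer:             # rule 4: no read lock while write-locked
                self._cond.wait()
            self._readers += 1              # rule 2: read locks may coexist

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers:  # rules 2 and 4: must be exclusive
                self._cond.wait()
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

lock = SharedExclusiveLock()
lock.acquire_read(); lock.acquire_read()   # two concurrent readers are allowed
lock.release_read(); lock.release_read()
lock.acquire_write()                        # one writer now has exclusive control
lock.release_write()
```

A writer waits until every reader has released its shared lock, and readers wait while a writer holds the exclusive lock, which is exactly the behaviour the four rules require.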


Time stamping :- This is another method used for concurrency control. There are no locks, and hence no deadlocks; the method does not make transactions wait. Transactions involved in a conflict are simply rolled back and restarted. A time stamp is a unique identifier created by the DBMS that indicates the starting time of a transaction. Time stamps are generated either using the system clock or by incrementing a logical counter every time a new transaction starts. Time stamping is a concurrency control protocol whose fundamental goal is to order transactions globally in such a way that older transactions get priority in the event of a conflict.
Conclusion :- Concurrency control is necessary when many users are allowed to access the database at the same time. Without concurrency control there would be many problems, like multiple update, uncommitted dependency and incorrect analysis. Serialization means executing the transactions one after the other, like people queuing up at a water tap, a bus stop or a payment counter.

17. Write about the COMMIT, ROLLBACK and SAVEPOINT commands.
Ans: COMMIT :- This command is used to save the changes made by a transaction to the database. The syntax of the COMMIT command is:
COMMIT [WORK];
The keyword COMMIT is the only mandatory part of the syntax. The word WORK is optional and has no effect on the execution, but it is accepted by the SQL standard.
ROLLBACK :- This is a transaction control command used to undo changes that have not yet been committed. It can be issued to undo the changes since the last COMMIT or ROLLBACK. The syntax of ROLLBACK is:

ROLLBACK [WORK];
Here also ROLLBACK is the only mandatory part of the syntax; WORK is optional and has no effect, but it is accepted by the SQL standard.
SAVEPOINT :- A savepoint (also known as a syncpoint or checkpoint) is a point in the transaction that you can roll back to without rolling back the whole transaction. Savepoints can be established inside the transaction; while the transaction is being executed, you can issue a COMMIT or ROLLBACK command to commit or discard the changes up to a particular savepoint. The syntax of the SAVEPOINT command is:
SAVEPOINT savepoint-name;
Example: Suppose you have a transaction with five delete operations, and before each deletion you establish a savepoint, named SP1 to SP5, as shown below:
SAVEPOINT SP1;
DELETE FROM emp WHERE empno=100;
SAVEPOINT SP2;
DELETE FROM emp WHERE empno=110;
SAVEPOINT SP3;
DELETE FROM emp WHERE empno=121;
SAVEPOINT SP4;
DELETE FROM emp WHERE empno=101;
SAVEPOINT SP5;
DELETE FROM emp WHERE empno=111;
Now suppose that after the five deletions you change your mind and do not want the last two. You can roll back to savepoint SP4 with the command ROLLBACK TO SP4. If you then issue the COMMIT command, only the first three deletions will be committed. You find employees with


numbers 101 and 111 still exist, if they have not been deleted by somebody else.
Conclusion :- The SQL commands COMMIT, ROLLBACK and SAVEPOINT help in managing transactions.

18. Discuss data encryption.
Ans: Encryption is a technique of encoding data so that only authorized users can understand it. Encryption alone is not sufficient to secure your data; protecting data in the database includes access control, data integrity, encryption and auditing. For certain applications, you may decide to encrypt data as an additional measure of security. Most issues of data security can be handled by appropriate authentication and access control, ensuring that only properly identified and authorized users can access data. Data in the database cannot normally be secured against the database administrator's access, since a DBA has all privileges. Similarly, organizations may have concerns about securing sensitive data stored offline, such as backup files stored with a third party, and may want to guard against intruders accessing the data where it is physically stored. Although encryption is not a substitute for effective access control, you can obtain an additional measure of security by selectively encrypting sensitive data before it is stored in the database. Information that may be especially sensitive and warrant encryption includes credit card numbers and trade secrets such as industrial formulas. A number of industry-standard encryption algorithms are useful for the encryption and decryption of data on the server. Two of the most popular are the Data Encryption Standard (DES) and Triple DES (3DES). DES provides standards-based encryption for data privacy, while 3DES encrypts message data with three passes of the DES algorithm.
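The essential property of symmetric encryption, that the same key both hides and recovers the data, can be illustrated without DES itself. Python's standard library does not include DES, so the sketch below uses a toy XOR cipher purely to show the encrypt/decrypt round trip; it is NOT secure and must not be mistaken for real cryptography.

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'encryption': XOR each byte with a repeating key.
    Illustration only; not DES/3DES and not secure."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

secret = b"4111-1111-1111-1111"   # e.g. a credit card number
key = b"demo-key"                 # illustrative key

ciphertext = xor_cipher(secret, key)
assert ciphertext != secret                # the stored form is unreadable
plaintext = xor_cipher(ciphertext, key)    # applying the same key decrypts
print(plaintext.decode())                  # 4111-1111-1111-1111
```

For production use, a vetted implementation of a standard algorithm (such as AES today, or the DES/3DES routines the text mentions) would replace the toy function; the round-trip structure stays the same.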

19. Explain in detail the authentication of users to the database.
Ans: A basic security requirement is that you know your users: you must identify users before you can determine their privileges and access rights, and you must know who a user is so that you can audit his or her actions upon the data. Users can be authenticated in a number of different ways before they are allowed to create a database session. In database authentication, you define users such that the database performs both identification and authentication. In external authentication, you define users such that authentication is performed by the operating system or a network service. Alternatively, users can be authenticated by the Secure Sockets Layer (SSL). For enterprise users, an enterprise directory can be used to authorize their access to the database through enterprise roles.
Passwords are one of the basic forms of authentication. A user must provide the correct password when establishing a connection, to prevent unauthorized use of the database. In this way, users attempting to connect to a database can be authenticated using information stored in that database. Passwords are assigned when users are created, and a database can store a user's password in the data dictionary in an encrypted format. Users can change their passwords at any time. Database security systems that depend on passwords require that passwords be kept secret at all times; but passwords are vulnerable to theft, forgery and misuse. The DBA or security officers can give users training and guidelines on how to create and manage passwords. It is often a lack of knowledge of how to handle passwords that results in password-related database security breaches.

20. What are the types of integrity constraints? Explain.
Ans: Data integrity means correct and complete data in the database.


A relational database is a collection of related tables containing related information: tables connected by foreign key relationships and tables that are part of the same physical entity. The contents of the database can be modified using statements such as INSERT, DELETE or UPDATE, and through such modifications the integrity of the data can be lost. Integrity may be lost in many ways:
1. Invalid data may be added to the database.
2. Changes to the database may be lost due to a power failure or a system crash.
3. Existing data may be modified to an incorrect value.
4. Programs that update data could abort half way, leaving the database in a partly modified state.
5. Parent rows could get deleted, leaving the child rows intact.
The most important function of the RDBMS is to preserve the integrity of the data contained in it. To do so, the RDBMS imposes what are known as data integrity constraints. There are different types of constraints, like entity integrity and referential integrity. These constraints restrict the data values that can be inserted into the database, the values that can be deleted, and the values that can be modified. The integrity constraints have mechanisms which ensure that columns will contain valid values when an INSERT or UPDATE operation is performed against the table. All constraints have a name; if the user does not specify one, the system implicitly provides a name for the constraint. It is good practice to explicitly specify the name of the constraint, as it helps in understanding the error message that the system provides when a constraint is violated.
Types of Integrity Constraints : Integrity constraints can be classified as general constraints, domain constraints, and base table constraints.

GENERAL constraints apply to combinations of columns in combinations of base tables. DOMAIN constraints are associated with a specific domain and apply to every column that is defined on that domain. BASE TABLE constraints are associated with a specific table, and include the column constraints; COLUMN constraints are specific to a single column in a base table.

21. What are the restrictions on integrity constraints? Explain.
Ans: Certain restrictions should be kept in mind when creating integrity constraints:
1. If the constraint includes an aggregate function reference, that reference must be contained within a subquery.
2. The constraint cannot include any references to parameters or host variables, because constraints are independent of specific applications.
3. The constraint cannot reference any of the functions USER, CURRENT_USER, SESSION_USER, SYSTEM_USER, CURRENT_DATE, CURRENT_TIME or CURRENT_TIMESTAMP. The reason for this restriction is that such references return different values for different users.
General Constraints :- These apply to combinations of columns in combinations of base tables, and are created using the CREATE ASSERTION statement. Syntax of the statement:
CREATE ASSERTION name CHECK (conditional-expression);
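Few products actually implement CREATE ASSERTION, but named CHECK constraints are widely supported and show the same idea of the system rejecting invalid values. A minimal sketch using Python's sqlite3 module (the table, column and constraint names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A named base-table CHECK constraint. CREATE ASSERTION itself is not
# supported by SQLite (or most products), so a table-level constraint
# is used to illustrate the idea.
conn.execute("""
    CREATE TABLE emp (
        empno  INTEGER PRIMARY KEY,
        salary INTEGER,
        CONSTRAINT salary_positive CHECK (salary > 0)
    )
""")

conn.execute("INSERT INTO emp VALUES (100, 5000)")     # satisfies the constraint

try:
    conn.execute("INSERT INTO emp VALUES (101, -10)")  # violates salary_positive
    violated = False
except sqlite3.IntegrityError as exc:
    violated = True
    print(exc)  # the explicit constraint name makes this message readable

print(violated)  # True: the invalid row was rejected
```

Because the constraint was named explicitly, the error message identifies salary_positive, which is exactly why the text recommends naming constraints rather than relying on system-generated names.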


DOMAIN Constraints :- These can be specified by means of the CREATE DOMAIN statement and can be added to or dropped from an existing domain by means of the ALTER DOMAIN statement. Syntax of the statement:
CREATE DOMAIN domain-name [AS] data-type
[default-definition]
[domain-constraint-definition-list]
The default definition has the following syntax:
DEFAULT {literal | niladic-function | NULL}
The niladic function can be any of the following: USER, CURRENT_USER, SESSION_USER, SYSTEM_USER, CURRENT_DATE, CURRENT_TIME and CURRENT_TIMESTAMP.
The domain-constraint-definition list has the following syntax:
[CONSTRAINT constraint-name] CHECK (conditional-expression)
The ALTER DOMAIN statement allows you to add or delete a constraint. The following is the syntax:
ALTER DOMAIN domain-name
ADD domain-constraint-definition
| DROP CONSTRAINT constraint-name
Base Table Constraints :- These are associated with a specific base table, though they can also refer to other base tables; foreign key constraints, for example, refer to other base tables. CREATE TABLE syntax:
CREATE TABLE table-name (base-table-element-definition)
A base table constraint definition can be any one of the following:
Candidate key definition
Foreign key definition
Check constraint definition
Column constraint
Column Constraints :- The base table constraints apply to the entire table, while the column constraints apply to a single column within a single base table. The column

constraint can be part of the column definition. The following is the syntax:
column-name {data-type | domain}
[DEFAULT {literal | niladic-function | NULL}]
[column-constraint-definition-list]
A column constraint definition can be any of the following:
NOT NULL
PRIMARY KEY or UNIQUE
References definition
CHECK constraint definition
There are two additional features of the Structured Query Language (SQL) standard that fall under the category of integrity constraints: data type checking and the check option. In the case of data type checking, SQL will reject any attempt to violate the data type specification on INSERT or UPDATE.

22. Explain the causes of database failures.
Ans: Some failures might cause the database to go down; others might be trivial. Similarly, on the recovery side, some recovery procedures require DBA intervention, whereas some of the internal recovery mechanisms are transparent to the DBA. Some common causes of failure include:
System crashes
User error
Carelessness
Sabotage
Statement failure
Application software errors
Network failure
Media failure
Natural and physical disasters
System crashes can be due to hardware or software errors resulting in loss of main memory. An example of a user error is a user inadvertently deleting a row or dropping a


table. Carelessness is the destruction of data or facilities by operators or users because they were not concentrating on the task at hand. Sabotage is the intentional corruption or destruction of data, hardware or software facilities. A statement failure can be defined as the inability of the database to execute an SQL statement. While running a user program, a transaction might have multiple statements, and one of the statements might fail for various reasons; typical examples are selecting from a table that does not exist, or having an insert fail due to lack of space. Such statement failures normally generate error codes and messages from the application software or the operating system. Application software errors include logical errors in the program that is accessing the database, which cause one or more transactions to fail. Network failures can occur while using a client-server configuration or a distributed database system where multiple database servers are connected by communication networks; network failures such as communication software failures or aborted asynchronous connections interrupt the normal operation of the database system. Media failures are the most dangerous failures: not only is there a potential to lose data if proper backup procedures are not followed, but it usually takes more time to recover than with other kinds of failures. A typical example of a media failure is a disk controller failure or a disk head crash, which causes all databases residing on that disk or disks to be lost. Natural and physical disasters are the damage caused to data, hardware and software by natural disasters like fires, floods, earthquakes and power failures.
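Because media failures and disasters can destroy the files a database lives on, routine backups are the main defence. A minimal sketch of an online backup using Python's sqlite3 module (the table and data are invented for illustration; production DBMSs provide their own backup utilities):

```python
import sqlite3

# A "live" database with some data in it.
live = sqlite3.connect(":memory:")
live.execute("CREATE TABLE emp (empno INTEGER PRIMARY KEY, name TEXT)")
live.execute("INSERT INTO emp VALUES (100, 'Ann'), (110, 'Bob')")
live.commit()

# Take an online backup: sqlite3.Connection.backup copies the whole
# database without stopping the source.
backup = sqlite3.connect(":memory:")  # in practice a file kept on offline storage
live.backup(backup)

# Simulate a media failure on the live database.
live.close()

# Recovery: the backup copy still holds the data.
rows = backup.execute("SELECT empno, name FROM emp ORDER BY empno").fetchall()
print(rows)  # [(100, 'Ann'), (110, 'Bob')]
```

Combined with log files, such a backup lets the DBA restore the last copy and reapply committed updates, as described in the recovery techniques above.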

SHORT ANSWER QUESTIONS

UNIT I
1. Data : Data is a collection of facts, figures and statistics related to an object. Data can be processed to create useful information, and is a valuable asset for an organization. Data can be used by managers to perform effective and successful management operations; it provides a view of past activities related to the rise and fall of an organization, and it enables the user to make better decisions for the future. Data is very useful for generating reports, graphs and statistics.
2. Information : The manipulated and processed form of data is called information. It is more meaningful than data and is used for making decisions. Data is used as input for processing, and information is the output of this processing.
3. Information Processing : Information processing consists of locating and capturing information, using software to manipulate it into a desired form, and outputting the data. An Internet search engine is an example of an information-processing tool, as is any sophisticated information-retrieval system.
4. File : A group of records stored together for some common purpose. Large files are usually stored on computers. A file may consist of current customers, subscribers or donors, or previous customers, subscribers or donors. Each individual name on a file is contained in a unique record with information pertaining to that person.

5. What are the different types of file organization?
Ans: There are three kinds of file organization: (1) sequential file organization, (2) random file organization and (3) indexed sequential file organization.
6. Demerits of a file processing system?
Ans: Disadvantages of file processing systems include:
1. Program-data dependence
2. Duplication of data
3. Limited data sharing
4. Lengthy development times
5. Excessive program maintenance
6. Integrity problems
7. Inconsistent data
8. Security problems
7. Database : An organized collection of logically related data, usually designed to meet the information needs of multiple users in an organization.
8. Range of Database Applications : The range of database applications can be divided into five categories: personal databases, workgroup databases, department databases, enterprise databases, and Internet, intranet and extranet databases.
9. What are the components of the database system environment?
Ans: The major components of a typical database environment are (1) computer-aided software engineering tools, (2) repository, (3) database management system, (4) database, (5) application programs, (6) user interface, (7) data administrators, (8) system developers and (9) end users.
10. SDLC : The Software Development Life Cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system

development project, from an initial feasibility study through maintenance of the completed application.
11. DDLC : The Database Development Life Cycle (DDLC) describes the complete life cycle of a database development task, and it requires a well-defined methodology for successful completion. The DDLC should encompass and go along with the Software Development Life Cycle (SDLC), because the two are parallel as well as intertwined.

UNIT II
12. What is a Business Rule : A statement that defines or constrains some aspect of the business. It is intended to assert business structure or to control or influence the behaviour of the business.
13. E-R Model : An E-R model is a detailed, logical representation of the data for an organization or business area. The E-R model is expressed in terms of entities in the business environment, the relationships among those entities, and the attributes of both the entities and their relationships. An E-R model is normally expressed as an entity-relationship diagram, which is a graphical representation of the E-R model.
14. Entity : A person, place, object, event or concept in the user environment about which the organization wishes to maintain data.
15. Components of the E-R Model : The E-R model consists of the following major components: (1) entities, (2) attributes, (3) relationships and (4) key attributes.


16. E-R Model Symbols : The E-R model uses the following symbols:
Rectangle : Represents entity sets
Oval : Represents attributes
Diamond : Represents relationships among entity sets

20. Types of Relationships : There are three types of relationships among entities: (a) one-to-one, (b) one-to-many and (c) many-to-many.
21. Degree of a Relationship : The degree of a relationship is the number of entity types that participate in that relationship. Thus the relationship IS-MARRIED-TO is of degree 2, since two entity types participate: MARRIED MAN and MARRIED WOMAN. The three most common relationship degrees in E-R models are unary (degree 1), binary (degree 2) and ternary (degree 3). Higher-degree relationships are possible, but they are rarely encountered in practice.
22. Cardinality Constraints : A cardinality constraint specifies the number of instances of one entity that can be associated with each instance of another entity.
23. Generalization and Specialization : Generalization is the process of defining a more general entity type from a set of more specialized entity types; it is a bottom-up process. Specialization is a top-down process: it is the process of defining one or more subtypes of a supertype and forming supertype/subtype relationships. Each subtype is formed based on some distinguishing characteristic, such as attributes or relationships specific to the subtype.
24. Normalization : Normalization is the process of decomposing relations with anomalies to produce smaller, well-structured relations.
25. First Normal Form (1NF) : A relation is in first normal form if it contains no multivalued attributes.

Line

Links attributes to entity sets and entity sets to relationships.

17. Attributes : Each entity can have a number of characteristics. The characteristics of an entity are called Attributes. For example, an entity, say Student, can have characteristics like Register Number, Name, Address, Date of birth, etc. 18. Entity Sets : An entity set is the collection of entities of the same type i.e., the entities which share common properties or attributes. For example, the set of all employees of an organization can be called as the entity set Employee. Similarly, the set of all persons who are customers at a given bank, can be called as the entity set Customer. 19. Relationship : A relationship links two entity sets. Consider the entity sets MARRIED MAN and MARRIED WOMAN. We can define the IS-MARRIED-TO relationship between these two sets by associating each married man with his wife. The IS-MARRIED-TO relationship consists of a set of married couples, the husband coming from the MARRIED MAN Entity set and the wife coming form the MARRIED WOMAN entity set.
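A one-to-many relationship of the kind described above is typically implemented with a foreign key. The following is a minimal sketch (the Customer and Orders tables and their columns are illustrative assumptions, not taken from the text):

```sql
-- One-to-many: one Customer row can be linked to many Orders rows.
-- All names here are assumed for illustration.
CREATE TABLE Customer (
  Cust_No   NUMBER(5)    PRIMARY KEY,
  Cust_Name VARCHAR2(30)
);

CREATE TABLE Orders (
  Order_No  NUMBER(5)    PRIMARY KEY,
  Cust_No   NUMBER(5)    REFERENCES Customer(Cust_No),  -- the "many" side holds the foreign key
  Order_Dt  DATE
);
```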


26. Second Normal Form (2NF) : A relation in first normal form in which every nonkey attribute is fully functionally dependent on the primary key.
27. Functional Dependency : A functional dependency is a constraint between two attributes or two sets of attributes. For any relation R, attribute B is functionally dependent on attribute A if, for every valid instance of A, that value of A uniquely determines the value of B. The functional dependency of B on A is represented by an arrow, as follows: A → B. An attribute may be functionally dependent on two or more attributes rather than on a single attribute.
28. Candidate Key : A candidate key is an attribute, or combination of attributes, that uniquely identifies a row in a relation.
29. Primary Key : A primary key is an attribute, or combination of attributes, that uniquely identifies each row in a relation.
30. Difference between Primary Key and Candidate Key : A candidate key is a unique key, and it can be used to find any particular row (tuple) in a table. A primary key is also a candidate key, but there are differences: (1) a table can have only one primary key but any number of unique keys; (2) a unique key can be null, but a primary key cannot.
31. Super Key : A superkey is a combination of attributes that can be used to uniquely identify a database record. A table might have many superkeys. Candidate keys are a special subset of superkeys that do not have any extraneous attributes in them.
32. Foreign Key : A foreign key is a field (or fields) that points to the primary key of another table. The purpose of the foreign key is to ensure referential integrity of the data: in other words, only values that are supposed to appear in the database are permitted.
33. Third Normal Form (3NF) : A relation that is in second normal form and has no transitive dependencies present.
34. Boyce-Codd Normal Form (BCNF) : A relation is in Boyce-Codd normal form if and only if every determinant in the relation is a candidate key.
35. Fourth Normal Form (4NF) : A relation is in fourth normal form if it is in BCNF and contains no multivalued dependencies.
Denormalization : The process of transforming normalized relations into unnormalized physical record specifications.
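The key concepts above can be sketched in one table definition (all names are assumed for illustration; the referenced Dept table is presumed to exist):

```sql
-- Primary key, an additional candidate key (UNIQUE), and a foreign key.
CREATE TABLE Student (
  Reg_No  NUMBER(6)    PRIMARY KEY,               -- chosen primary key
  Email   VARCHAR2(40) UNIQUE,                    -- another candidate key
  Dept_No NUMBER(3)    REFERENCES Dept(Dept_No)   -- foreign key for referential integrity
);
```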

UNIT III
36. SQL : The Structured Query Language is an industry-standard language used for the manipulation of data in a relational database. The major SQL commands of interest to database users are SELECT, INSERT, UPDATE and DELETE.
37. Characteristics of SQL : SQL usage by its very nature is extremely flexible. It uses a free-form syntax that gives the user the ability to structure SQL statements in the way best suited to him. Each SQL request is parsed by the RDBMS before execution, to check for proper syntax and to optimize the request. Unlike certain programming languages, there is no need to start SQL statements in a particular column or to finish them on a single line. The same SQL request can be written in a variety of ways.
38. SQL Data Types : SQL supports several data types, listed below:


String : Char(n), Varchar2(n), Long — used for representing non-numerical values.
Numeric : Number(p, q), Integer(p), Float(p) — used for representing numerical values.
Date/Time : Date — used for representing date values.

39. Literals : Literals are also called constants. A constant is one whose value does not change in the course of the execution of the program.
40. Types of SQL Commands : SQL commands are broadly classified into three types: (a) Data Definition Language (DDL) commands (b) Data Manipulation Language (DML) commands (c) Data Control Language (DCL) commands.
41. DDL : These commands are used to define a database, including creating, altering, and dropping tables and establishing constraints.
42. DML : These commands are used to maintain and query a database, including updating, inserting, modifying, and querying data.
43. DCL : These commands are used to control a database, including administering privileges and the committing of data.
44. Operator : An operator is a symbol which represents a particular action. Operators operate on operands; an operand may be either a constant or a variable.
45. Comparison Operators : Comparison operators are used to compare column data with specific values in a condition.

Comparison operators are also used along with the SELECT statement to filter data based on specific conditions. The list below describes each comparison operator.
=  equal to
<>, !=  is not equal to
<  less than
>  greater than
>=  greater than or equal to
<=  less than or equal to
46. Set Operators : Set operators combine the results of two queries into a single result. Queries containing set operators are called compound queries. The SQL set operators are:
UNION — all rows selected by either query.
UNION ALL — all rows selected by either query, including all duplicates.
INTERSECT — all distinct rows selected by both queries.
MINUS — all distinct rows selected by the first query but not the second.
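As a hedged sketch of the set operators (the Customer and Supplier tables are assumed, not from the text):

```sql
-- Cities appearing in either table, duplicates removed.
SELECT City FROM Customer
UNION
SELECT City FROM Supplier;

-- Cities that have customers but no suppliers.
SELECT City FROM Customer
MINUS
SELECT City FROM Supplier;
```

Both are compound queries; the column lists of the two SELECTs must match in number and type.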

47. View : A VIEW is a virtual table through which a selective portion of the data from one or more tables can be seen. Views do not contain data of their own. They are used to restrict access to the database or to hide data complexity. A view is stored as a SELECT statement in the database. DML operations on a view (INSERT, UPDATE, DELETE) affect the data in the original table upon which the view is based.
48. Index : An index in SQL is created on existing tables to retrieve rows quickly.
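A minimal sketch of both objects (the Emp table and column names are assumptions):

```sql
-- A view exposing only non-sensitive columns of Emp.
CREATE VIEW Emp_Public AS
  SELECT Emp_No, Emp_Name FROM Emp;

-- An index on a frequently searched column.
CREATE INDEX Emp_Name_Idx ON Emp (Emp_Name);
```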


When there are thousands of records in a table, retrieving information takes a long time. Therefore indexes are created on columns which are accessed frequently, so that the information can be retrieved quickly. Indexes can be created on a single column or a group of columns. When an index is created, it first sorts the data and then assigns a ROWID to each row.
49. Nulls : If a column in a table is optional, we can insert a new record or update an existing record without adding a value to this column. This means that the field will be saved with a NULL value. NULL values are treated differently from other values: NULL is used as a placeholder for unknown or inapplicable values. Note: it is not possible to compare NULL and 0; they are not equivalent.
50. BETWEEN Clause : The BETWEEN keyword allows you to define a predicate in the form of a range. If a column value for a row falls within this range, the predicate is true and the row will be added to the result table. The BETWEEN range test consists of two keywords, BETWEEN and AND, and must be supplied with the upper and lower range values: the first value must be the lower bound and the second value the upper bound.
51. ORDER BY Clause : By default, the rows in the result table are not ordered in any way; SQL simply retrieves the rows in the order in which it finds them in the table. Often, however, we need to list the output in a particular order: ascending or descending, based on either a numerical value or a text value. In such cases, we can use the ORDER BY clause to impose an order on the query results. The ORDER BY keyword can only be used in SELECT statements.
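The BETWEEN and ORDER BY clauses can be combined in one query, as in this sketch (the Emp table is assumed):

```sql
-- Salaries in the inclusive range 4000..6000, highest first.
SELECT Emp_Name, Sal
FROM   Emp
WHERE  Sal BETWEEN 4000 AND 6000
ORDER  BY Sal DESC;
```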

52. Define Query : Queries are the primary mechanism for retrieving information from a database and consist of questions presented to the database in a predefined format. Many database management systems use the Structured Query Language (SQL) standard query format.
Sub Query : A subquery is a query that is nested inside a SELECT, INSERT, UPDATE, or DELETE statement, or inside another subquery. A subquery can return a set of rows or just one row to its parent query. A scalar subquery is a query that returns exactly one value: a single row with a single column. Scalar subqueries can be used in most places in a SQL statement where you could use an expression or a literal value.
53. Correlated Subqueries : A correlated subquery is very similar to a subquery, where the parent query is executed based on the values returned by the subquery; but with a correlated subquery, for every instance of the parent query the subquery is executed, and based on the result of the subquery the parent query displays the record.
54. Aggregate Functions : Aggregate functions perform a calculation on a set of values and return a single value. Except for COUNT, aggregate functions ignore null values. Aggregate functions are frequently used with the GROUP BY clause of the SELECT statement.
55. Insert Statement : Data is added to tables by using the Insert statement. The Insert Into statement can be used to append a record to a table, or to append multiple records from one table to another. The syntax for the Insert statement is:
INSERT INTO table_name (column1, column2, ...) VALUES (value1, value2, ...)
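A classic correlated-subquery sketch (the Emp table with Deptno and Sal columns is an assumption): the inner query is re-evaluated for each row of the outer query because it references E.Deptno.

```sql
-- Employees earning more than the average salary of their own department.
SELECT E.Ename, E.Sal
FROM   Emp E
WHERE  E.Sal > (SELECT AVG(Sal)
                FROM   Emp
                WHERE  Deptno = E.Deptno);  -- correlated on the outer row
```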

56. Update Statement : In daily use, a database is a constantly changing store of data. The SQL commands which


are used to modify data that is already in the database are the Update and the Delete commands. The Update statement allows you to update a single record or multiple records in a table. The syntax for the Update statement is as follows:
UPDATE table_name SET column_name = expression WHERE conditions
57. Delete Statement : The Delete statement allows you to delete a single record or multiple records from a table. After you remove records using a Delete statement, you cannot undo the operation. To check which records will be deleted, examine the results of a Select query that uses the same criteria. It is also important to understand that a Delete statement deletes entire records, not just data in specified fields. If you just want to delete certain fields, use an Update query that changes the values to Null. The syntax for the Delete statement is as follows:
DELETE FROM table_name WHERE conditions

UNIT IV
58. PL/SQL : PL/SQL is the procedural language extension to the Structured Query Language (SQL). It combines a database language with a procedural programming language, and is built on a basic unit called a block. By compiling and storing executable blocks, Oracle can process PL/SQL quickly and easily.
59. PL/SQL Block Structure : PL/SQL is a block-structured language. That is, the basic units that make up a PL/SQL program are logical blocks, which can contain any number of nested sub-blocks. Typically, each logical block corresponds to a problem or subproblem to be solved; thus, PL/SQL supports the divide-and-conquer approach to problem solving called stepwise refinement. A PL/SQL block has three parts: a declarative part, an executable part, and an exception-handling part. The PL/SQL block structure is as follows:
[DECLARE
   -- declarations]
BEGIN
   -- statements
[EXCEPTION
   -- handlers]
END;
60. PL/SQL Control Statements : Control structures are the most important PL/SQL extension to SQL. Not only does PL/SQL let you manipulate Oracle data, it lets you process the data using conditional, iterative, and sequential flow-of-control statements such as IF-THEN-ELSE, CASE, FOR-LOOP, WHILE-LOOP, EXIT-WHEN, and GOTO. Collectively, these statements can handle any situation.
61. IF Statement : It is often necessary to take alternative actions depending on circumstances. The IF statement lets you execute a sequence of statements conditionally; that is, whether the sequence is executed or not depends on the value of a condition. There are three forms of IF statements: IF-THEN, IF-THEN-ELSE, and IF-THEN-ELSIF. The CASE statement is a compact way to evaluate a single condition and choose between many alternative actions.
62. For Loop : FOR loops iterate over a specified range of integers. The range is part of an iteration scheme, which is enclosed by the keywords FOR and LOOP. A double dot (..) serves as the range operator. The syntax follows:
FOR counter IN [REVERSE] lower_bound..higher_bound LOOP
   sequence_of_statements
END LOOP;
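The block structure and control statements described above can be combined in one small anonymous block, as in this sketch (run with SET SERVEROUTPUT ON in SQL*Plus):

```sql
DECLARE
   total NUMBER := 0;                    -- declarative part
BEGIN
   FOR i IN 1..10 LOOP                   -- iteration scheme 1..10
      total := total + i;
   END LOOP;
   IF total > 50 THEN                    -- conditional control
      DBMS_OUTPUT.PUT_LINE('Total = ' || total);
   END IF;
END;
/
```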


The range is evaluated when the FOR loop is first entered and is never re-evaluated.
63. Cursor : A cursor is a temporary work area created in the system memory when a SQL statement is executed. A cursor contains information on a SELECT statement and the rows of data accessed by it. This temporary work area is used to store the data retrieved from the database and to manipulate this data. A cursor can hold more than one row, but can process only one row at a time. The set of rows the cursor holds is called the active set.
64. Stored Procedures : A stored procedure (or simply a proc) is a named PL/SQL block which performs one or more specific tasks, similar to a procedure in other programming languages. A procedure has a header and a body. The header consists of the name of the procedure and the parameters or variables passed to the procedure. The body consists of a declaration section, an execution section and an exception section, similar to a general PL/SQL block. A procedure is similar to an anonymous PL/SQL block, but it is named for repeated use.
65. Functions : A function is a named PL/SQL block which is similar to a procedure. The major difference between a procedure and a function is that a function must always return a value, whereas a procedure may or may not return a value.
66. Packages : PL/SQL lets you bundle logically related types, variables, cursors, and subprograms into a package. Each package is easy to understand, and the interfaces between packages are simple, clear, and well defined. Packages usually have two parts: a specification and a body. The specification is the interface to your applications; it declares the types, constants, variables, exceptions, cursors, and subprograms available for use. The body defines cursors and subprograms and so implements the specification.
67. Trigger : A trigger is a PL/SQL block structure which is fired when a DML statement (INSERT, DELETE or UPDATE) is executed on a database table. A trigger fires automatically when its associated DML statement is executed.
68. Types of Triggers : There are two types of triggers, based on the level at which they fire: (1) Row-level trigger — fired for each row updated, inserted or deleted. (2) Statement-level trigger — fired once for each SQL statement executed.
69. Advantages of Triggers : A trigger is a special kind of procedure. The main advantage of a trigger is that it is automatic: whenever the table is affected by an INSERT, UPDATE or DELETE query, the trigger is invoked implicitly.
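A row-level trigger can be sketched as follows (the Emp and Emp_Audit tables are assumptions for illustration):

```sql
-- Fires once per updated row, recording the old and new salary.
CREATE OR REPLACE TRIGGER Emp_Sal_Audit
AFTER UPDATE OF Sal ON Emp
FOR EACH ROW
BEGIN
   INSERT INTO Emp_Audit (Emp_No, Old_Sal, New_Sal, Changed_On)
   VALUES (:OLD.Emp_No, :OLD.Sal, :NEW.Sal, SYSDATE);
END;
/
```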

UNIT V
70. Client/Server Architecture : In a client/server system, the server is a relatively large computer in a central location that manages a resource used by many people. When individuals need to use the resource, they connect over the network from their computers, or clients, to the server. Examples of servers are: print servers, file servers, e-mail servers.
71. File Server Architecture : A file server is a computer responsible for the central storage and management of data files, so that other hosts on the same network can access the files. A file server permits users to share data over a


network without having to physically transfer files by floppy diskette or some other external storage device. Any machine can be configured to be a host and act as a file server; in its simplest form, a file server may be an ordinary PC that handles requests for files and transfers them over the network.
72. Data Security Risks : A database faces the following threats to its security: (a) An unauthorized user can get access to the database and damage or alter its files. (b) A database user can intentionally bypass the security mechanisms and make unauthorized copies of secret data for malicious purposes. (c) Improper handling of concurrent transaction processing can cause variations in the data values read and written by two users at the same time.
73. Types of Database Users : A database system primarily involves two types of users, namely the database administrator (DBA) and database operators. The DBA has full access to the database, and the database operators have access for data entry. The DBA is usually a single person, but operators may be many.
74. ACID Properties : A transaction is expected to exhibit certain properties known as the ACID properties. These are: (a) Atomicity (b) Consistency (c) Isolation (d) Durability.
75. Types of Failures : Transaction failures are of two types: (a) Logical error (b) System error. (a) Logical error: the transaction can no longer continue its normal execution because of some internal condition such as bad input, data not found, or data overflow; it gives incorrect results. (b) System error: the system enters some undesirable state, and the transaction cannot continue its normal execution.

76. Recovery Facilities : The following are some DBMS recovery facilities: (a) Checkpoint facility, which enables updates to the database to be made permanent. (b) Recovery Manager module of the DBMS: it is the role of the Recovery Manager module, available in the DBMS package, to guarantee durability and atomicity in case of unpredictable failures.
77. Commit Statement : Use the COMMIT statement to end your current transaction and make permanent all changes performed in the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. This statement also erases all savepoints in the transaction and releases transaction locks. Oracle Database issues an implicit COMMIT before and after any data definition language (DDL) statement.
78. Rollback Command : The ROLLBACK command undoes the changes made in the current transaction. In a recovery procedure, rollback means updating a prior valid copy of the database with the necessary changes to produce a current version.
79. Savepoint Command : A SAVEPOINT is a marker within a transaction that allows for a partial rollback. As changes are made in a transaction, we can create SAVEPOINTs to mark different points within the transaction. If we encounter an error, we can roll back to a SAVEPOINT or all the way back to the beginning of the transaction.
80. Data Encryption : Confidential data may be stored in an encrypted form in a database. This encrypted data cannot be read by anyone unless he knows how to decrypt it.
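COMMIT, SAVEPOINT and ROLLBACK work together as in this sketch (the Accounts table is an assumption):

```sql
UPDATE Accounts SET Balance = Balance - 500 WHERE Acno = 1001;
SAVEPOINT debit_done;                 -- marker inside the open transaction
UPDATE Accounts SET Balance = Balance + 500 WHERE Acno = 1002;
ROLLBACK TO debit_done;               -- undoes only the second UPDATE
COMMIT;                               -- makes the first UPDATE permanent
```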

Lab Exercises


DBMS PRACTICAL
1. Write an SQL command to create a table client-master with the following constraints:
i) A CHECK constraint on the client no., so that the client no. value must start with 'C'.
ii) A CHECK constraint on name, so that the name is entered in uppercase.
iii) A CHECK constraint on city, so that only the cities HYDERABAD, BOMBAY, NEW DELHI, MADRAS and CALCUTTA are accepted.
iv) Include the following columns with appropriate data types: Address, State, Balance.
Table structure assumption:
Field Name    Type      Width
Client_No     Varchar2  10
Client_Name   Varchar2  30
City          Varchar2  10
Address       Varchar2  50
State         Varchar2  25
Balance       Number    15,2
Solution :
CREATE TABLE Client
(Client_No Varchar2(10) CONSTRAINT Client_no_cons CHECK (Client_No LIKE 'C%'),
Client_Name Varchar2(30) CONSTRAINT Client_Name_cons CHECK (Client_Name = UPPER(Client_Name)),
City Varchar2(10) CONSTRAINT City_cons CHECK (City IN ('HYDERABAD', 'BOMBAY', 'MADRAS', 'NEW DELHI', 'CALCUTTA')),
Address Varchar2(50),
State Varchar2(25),
Balance Number(15,2));
For input :
INSERT INTO Client VALUES ('&Client_No', '&Client_Name', '&City', '&Address', '&State', &Balance);
2. Create a view to get the employee name and his manager name from the following table EMP:
Emp-no Char(6)
Emp-name Varchar2(15)
Hiredate Date
Mgr Char(6) foreign key emp(emp-no)
Sal Number(6,2)
Solution :

CREATE VIEW Emp_Mgr AS
SELECT E.ENAME EMP_NAME, M.ENAME MGR_NAME
FROM EMP E, EMP M
WHERE E.MGR = M.EMPNO;
3. From the EMP database, generate a department-wise control-break report with subtotals and a grand total using SQL*Plus. The sample layout shows a report titled "S.V.University Teaching Employees" with columns Dept No., Emp No., Emp Name and Annual Salary, a Dept. Total after each department, and a Grand Total at the end.
Solution :
SQL> TTITLE 'S.V.University Teaching Employees'
SQL> COLUMN DEPTNO FORMAT 099 HEADING 'Dept|No.'
SQL> COLUMN EMPNO FORMAT 9999 HEADING 'Emp|No.'
SQL> COLUMN ENAME FORMAT A15 HEADING 'Emp|Name'
SQL> COLUMN ann_sal FORMAT 99,99,999 HEADING 'Annual|Salary'
SQL> BREAK ON DEPTNO SKIP 1 ON REPORT
SQL> COMPUTE SUM LABEL 'Dept. Total' OF ann_sal ON DEPTNO
SQL> COMPUTE SUM LABEL 'Grand Total' OF ann_sal ON REPORT
SQL> SELECT DEPTNO, EMPNO, ENAME, SAL*12 ann_sal FROM EMP ORDER BY DEPTNO;
(The COMPUTE commands produce the department subtotals and the grand total required by the layout.)
4. Write a PL/SQL code to reverse a number.
Solution :
SQL> SET SERVEROUTPUT ON
SQL> DECLARE
   N NUMBER;
   RNUM NUMBER(15);

   REM NUMBER;
BEGIN
   N := &N;
   RNUM := 0;
   DBMS_OUTPUT.PUT_LINE('GIVEN NUMBER = ' || N);
   WHILE N > 0 LOOP
      REM := MOD(N, 10);
      RNUM := RNUM * 10 + REM;
      N := FLOOR(N / 10);
   END LOOP;
   DBMS_OUTPUT.PUT_LINE('REVERSE NUMBER = ' || RNUM);
END;
/
5. Write a PL/SQL program to calculate the commission for a given basic, as follows: if basic >= 6000 then commission = 20% of basic; if basic >= 4000 and basic < 6000 then commission = 10% of basic; otherwise commission = 5% of basic. The base table is Com, with columns Acno, Basic, Comm.
Solution :
1. Create the table Com with the following command:
SQL> CREATE TABLE Com (Acno NUMBER(5), Basic NUMBER(10,2), Comm NUMBER(10,2));
2. Insert the following data into the Com table:
Acno  Basic
1001  6500
1002  5500
1003  3500
Hint: use the following Insert command:
SQL> INSERT INTO Com (Acno, Basic) VALUES (&Acno, &Basic);
3. Write the following PL/SQL code to update the Comm field:
DECLARE
   CURSOR C1 IS SELECT * FROM COM;
   REC COM%ROWTYPE;
   C NUMBER(10,2);
BEGIN
   OPEN C1;
   LOOP
      FETCH C1 INTO REC;
      EXIT WHEN C1%NOTFOUND;
      IF REC.BASIC >= 6000 THEN
         C := 20 * REC.BASIC / 100;
      ELSIF REC.BASIC >= 4000 THEN
         C := 10 * REC.BASIC / 100;
      ELSE
         C := 5 * REC.BASIC / 100;
      END IF;
      UPDATE COM SET COMM = C WHERE ACNO = REC.ACNO;
   END LOOP;
   CLOSE C1;
END;
/
Table contents before execution:
ACNO  BASIC  COMM
1001  6500
1002  5500
1003  3500
Table contents after execution:
SQL> SELECT * FROM Com;
ACNO  BASIC  COMM
1001  6500   1300
1002  5500   550
1003  3500   175

6. The base table has the columns Htno, Marks, Rank. Enter Htno and Marks into the base table, up to 20 records. Write a PL/SQL program to update the base table, allocating ranks.
Solution :
1. Create the table with the name Results:
SQL> CREATE TABLE Results (Htno NUMBER(5), Marks NUMBER(3), Rank NUMBER(3));
2. Insert up to 20 rows of data using the following INSERT command:
SQL> INSERT INTO Results (HTNO, MARKS) VALUES (&Htno, &Marks);

Sample data:
HTNO  MARKS    HTNO  MARKS
1001  500      1006  450
1002  500      1007  350
1003  500      1008  250
1004  550      1009  650
1005  550      1010  545


3. Type and execute the following PL/SQL code to update the ranks in the Results table:
DECLARE
   CURSOR C1 IS SELECT * FROM RESULTS ORDER BY MARKS DESC;
   R NUMBER(3);
   REC RESULTS%ROWTYPE;
   PREV_MARKS NUMBER(3);
BEGIN
   OPEN C1;
   R := 0;
   PREV_MARKS := 0;
   LOOP
      FETCH C1 INTO REC;
      EXIT WHEN C1%NOTFOUND;
      IF PREV_MARKS != REC.MARKS THEN
         R := R + 1;
      END IF;
      UPDATE RESULTS SET RANK = R WHERE HTNO = REC.HTNO;
      PREV_MARKS := REC.MARKS;
   END LOOP;
   CLOSE C1;
EXCEPTION
   WHEN NO_DATA_FOUND THEN
      DBMS_OUTPUT.PUT_LINE('NO DATA FOUND IN RESULT TABLE');
END;
/
4. Type the following query to list the Results table contents:
SQL> SELECT * FROM Results ORDER BY Rank;
Output:
HTNO  MARKS  RANK
1009  650    1
1004  550    2
1005  550    2
1010  545    3
1001  500    4
1002  500    4
1003  500    4
1006  450    5
1007  350    6
1008  250    7
7. Write a PL/SQL program to evaluate the HRA, Income Tax, Gross Salary and Net Salary. The base table is PAY, with columns Enumber, Ename, Gcode, Basic, Da, Hra, Itax, Gsal, Nsal. Input from the base table: Enumber, Ename, Gcode, Basic, Da.
Procedure: if Gcode = 1 then Hra = 20% of Basic and Itax = 3% of Basic; if Gcode = 2 then Hra = 10% of Basic and Itax = 2% of Basic; otherwise Hra = 5% of Basic and Itax = 1% of Basic.
Gross Salary (Gsal) = Basic + Da + Hra
Net Salary (Nsal) = Gsal - Itax
Solution :
1. Create the table PAY with the following structure:
CREATE TABLE PAY (Enumber NUMBER(5), Ename VARCHAR2(25), Gcode NUMBER(1), Basic NUMBER(9,2), Da NUMBER(7,2), Hra NUMBER(7,2), Itax NUMBER(7,2), Gsal NUMBER(10,2), Nsal NUMBER(10,2));
Sample data:
Enumber  Ename         Gcode  Basic  Da
1001     Sita Devi G   1      5000   250
1002     Sri Laxmi .D  2      4500   200
1003     Madhavi .M    3      3500   150
1004     Rohini .K     1      5500   300
1005     Madhuri M     4      2500   150
2. Insert the above data into the PAY table using the following Insert command:
SQL> INSERT INTO PAY (ENUMBER, ENAME, GCODE, BASIC, DA) VALUES (&ENUMBER, '&ENAME', &GCODE, &BASIC, &DA);
3. Type and execute the following PL/SQL code to update the fields Hra, Itax, Gsal, Nsal:
DECLARE

   CURSOR C1 IS SELECT * FROM PAY;
   MHRA NUMBER(7,2);
   MITAX NUMBER(7,2);
   MGSAL NUMBER(10,2);
   MNSAL NUMBER(10,2);
   PAY_REC PAY%ROWTYPE;
BEGIN
   OPEN C1;
   LOOP
      FETCH C1 INTO PAY_REC;
      EXIT WHEN C1%NOTFOUND;
      IF PAY_REC.GCODE = 1 THEN
         MHRA := 20 * PAY_REC.BASIC / 100;
         MITAX := 3 * PAY_REC.BASIC / 100;
      ELSIF PAY_REC.GCODE = 2 THEN
         MHRA := 10 * PAY_REC.BASIC / 100;
         MITAX := 2 * PAY_REC.BASIC / 100;
      ELSE
         MHRA := 5 * PAY_REC.BASIC / 100;
         MITAX := 1 * PAY_REC.BASIC / 100;
      END IF;
      MGSAL := PAY_REC.BASIC + PAY_REC.DA + MHRA;
      MNSAL := MGSAL - MITAX;
      UPDATE PAY SET HRA = MHRA, ITAX = MITAX, GSAL = MGSAL, NSAL = MNSAL
      WHERE ENUMBER = PAY_REC.ENUMBER;
   END LOOP;
   CLOSE C1;
END;
/
4. Type the following query to view the table contents:
SELECT * FROM PAY;
OUTPUT :


Enumber  Ename         Gcode  Basic  Da   Hra   Itax  Gsal  Nsal
1001     Sita Devi G   1      5000   250  1000  150   6250  6100
1002     Sri Laxmi .D  2      4500   200  450   90    5150  5060
1003     Madhavi .M    3      3500   150  175   35    3825  3790
1004     Rohini .K     1      5500   300  1100  165   6900  6735
1005     Madhuri M     4      2500   150  125   25    2775  2750

8. Write a PL/SQL program to process the Xth class results under the following conditions: Eng, Tel, Hin, Mat, Sci, Soc > 34 is a pass; Total >= 360 is I Class; Total >= 300 is II Class; otherwise III Class. The base table is Tenth, with fields Htno, Eng, Tel, Hin, Mat, Sci, Soc, Total, Result. Input: Htno, Eng, Tel, Hin, Mat, Sci, Soc.
Solution :
1. Create the table TENTH with the following structure:
CREATE TABLE TENTH (Htno NUMBER(5), Eng NUMBER(3), Tel NUMBER(3), Hin NUMBER(3), Mat NUMBER(3), Sci NUMBER(3), Soc NUMBER(3), Total NUMBER(3), Result VARCHAR2(10));
Sample data:
HTNO  ENG  TEL  HIN  MAT  SCI  SOC
1001  75   80   85   95   87   89
1002  85   90   30   90   50   65
1003  55   60   45   62   68   57
1004  40   42   35   42   45   48
2. Insert the above data into the TENTH table using the following Insert command:
SQL> INSERT INTO TENTH (HTNO, ENG, TEL, HIN, MAT, SCI, SOC) VALUES (&HTNO, &ENG, &TEL, &HIN, &MAT, &SCI, &SOC);
3. Type and execute the following PL/SQL code to update the fields Total and Result:
DECLARE
   CURSOR C1 IS SELECT * FROM TENTH;
   MTOTAL NUMBER(3);
   MRESULT VARCHAR2(10);
   XTH_REC TENTH%ROWTYPE;
   MENG NUMBER(3);
   MTEL NUMBER(3);
   MHIN NUMBER(3);
   MMAT NUMBER(3);
   MSCI NUMBER(3);
   MSOC NUMBER(3);


BEGIN
   OPEN C1;
   LOOP
      FETCH C1 INTO XTH_REC;
      EXIT WHEN C1%NOTFOUND;
      MENG := XTH_REC.ENG;
      MTEL := XTH_REC.TEL;
      MHIN := XTH_REC.HIN;
      MMAT := XTH_REC.MAT;
      MSCI := XTH_REC.SCI;
      MSOC := XTH_REC.SOC;
      MTOTAL := MENG + MTEL + MHIN + MMAT + MSCI + MSOC;
      IF MENG < 35 OR MTEL < 35 OR MHIN < 35 OR MMAT < 35 OR MSCI < 35 OR MSOC < 35 THEN
         MRESULT := '';
      ELSIF MTOTAL >= 360 THEN
         MRESULT := 'I CLASS';
      ELSIF MTOTAL >= 300 THEN
         MRESULT := 'II CLASS';
      ELSE
         MRESULT := 'III CLASS';
      END IF;
      UPDATE TENTH SET TOTAL = MTOTAL, RESULT = MRESULT WHERE HTNO = XTH_REC.HTNO;
   END LOOP;
   CLOSE C1;
END;
/
4. Type the following query to view the table contents:
SELECT * FROM TENTH;
OUTPUT :
HTNO  ENG  TEL  HIN  MAT  SCI  SOC  Total  Result
1001  75   80   85   95   87   89   511    I CLASS
1002  85   90   30   90   50   65   410
1003  55   60   45   62   68   57   347    II CLASS
1004  40   42   35   42   45   48   252    III CLASS
9. Write a PL/SQL program to calculate electrical charges at the rates given below: Industry Rs. 2.00, Agricultural Rs. 1.00, Domestic Rs. 1.50 per unit. The fields of the base table are Cno (Customer Number), Cat (Category), PMR (Previous Month Reading), CMR (Current Month Reading), RATE (Rate Per Unit), CU (Consumed Units), TCHARGE (Total Charge). The

program output should be Rate, CU, Tcharge. Enter the input Cno, Cat, Pmr, Cmr in the base table ELEC.
Solution :
1. Create the table ELEC with the following structure:
CREATE TABLE ELEC (CNO NUMBER(5), CAT VARCHAR2(15), PMR NUMBER(8), CMR NUMBER(8), CU NUMBER(8), RATE NUMBER(6,2), TCHARGE NUMBER(15,2));
Sample data:
CNO   CAT           PMR   CMR
1001  INDUSTRY      4000  6500
1002  AGRICULTURAL  2500  3500
1003  DOMESTIC      1500  2200
2. Insert the above data into the ELEC table using the following Insert command:
SQL> INSERT INTO ELEC (CNO, CAT, PMR, CMR) VALUES (&CNO, UPPER('&CAT'), &PMR, &CMR);
3. Type and execute the following PL/SQL code to update the fields Cu, Rate and Tcharge:
DECLARE
   CURSOR C1 IS SELECT * FROM ELEC;
   MCU NUMBER;
   MRATE NUMBER;
   MTC NUMBER;
   E_REC ELEC%ROWTYPE;
BEGIN
   OPEN C1;
   LOOP
      FETCH C1 INTO E_REC;
      EXIT WHEN C1%NOTFOUND;
      MCU := E_REC.CMR - E_REC.PMR;
      IF E_REC.CAT = 'INDUSTRY' THEN
         MRATE := 2.00;
      ELSIF E_REC.CAT = 'AGRICULTURAL' THEN
         MRATE := 1.00;
      ELSIF E_REC.CAT = 'DOMESTIC' THEN


         MRATE := 1.50;
      END IF;
      MTC := MCU * MRATE;
      UPDATE ELEC SET CU = MCU, RATE = MRATE, TCHARGE = MTC WHERE CNO = E_REC.CNO;
   END LOOP;
   CLOSE C1;
END;
/
4. Type the following query to view the table contents:
SELECT * FROM ELEC;
OUTPUT :
CNO   CAT           PMR   CMR   CU    RATE  TCHARGE
1001  INDUSTRY      4000  6500  2500  2     5000
1002  AGRICULTURAL  2500  3500  1000  1     1000
1003  DOMESTIC      1500  2200  700   1.5   1050

10. Create a master-detail form called dept-emp which should allow the basic data-manipulation operations (insert, update, delete and query) on the DEPT (master) and EMP (detail) tables.
EMP table structure:
Emp-no Char(6) Primary Key
Emp-name Varchar2(15)
Dept-no Char(6) foreign key dept(dept-no)
Hiredate Date
Mgr Char(6) foreign key emp(emp-no)
Sal Number(6,2)
DEPT table structure:
Dept-no Char(6) Primary Key
DeptName Varchar2(15)
Location Varchar2(15)
The validations that must be performed are:
a. While querying a department, all the employees working in that department are automatically queried.
b. A department cannot be deleted from the DEPT table if there are corresponding employees in the EMP table.
c. Add the LOV called dept list to the text item called deptno in the DEPT table; when the form opens, a list of department numbers should be displayed.
Procedures :
1. Start Oracle Database.
Hint: Start → Programs → Personal Oracle 7 for Windows 95 → Start Database.
2. Start Developer/2000 and click Cancel on the Welcome to the Form Builder dialog.
Hint: Start → Programs → Developer 2000 R2.0 → Form Builder.
3. Connect the database to Form Builder by pressing CTRL+J; the Connect dialog will appear.

In this dialog, type the user name and password and click on Connect, or press the Enter key.
4. Creating a canvas for the window.
Hint: a) Click on the Canvas node and choose Create from the Navigator menu. "New Canvas" will appear under the Canvas node; click on it, change the canvas name to DEPT_EMP_CAN and press the Enter key.
5. Creating the master block:
i) Click on the Data Blocks node and select Create from the Navigator menu; the New Data Block dialog will appear.
ii) Click on Use the Data Block Wizard and click the OK button; the Data Block Wizard dialog will appear.
iii) Click the Next button twice, type DEPT in the table-or-view field, click Refresh, click the >> button, and click the Next button.
iv) Click on Finish; the Layout Wizard will appear.
v) Click the Next button twice, click the >> button, click the Next button three times, type Department Details in the Frame Title field and click the Finish button.
6. Creating the detail block:
i) Choose Object Navigator from the Tools menu, click on the Data Blocks node, and select Create from the Navigator menu; the New Data Block dialog will appear.
ii) Click on Use the Data Block Wizard and click the OK button; the Data Block Wizard dialog will appear.
iii) Click the Next button twice, type EMP in the table-or-view field, click Refresh, click the >> button, and click the Next button.
iv) Click on the Create Relationship button; the Data Blocks dialog will appear. Click DEPT, click the OK button and click the Next button.

Lab Exercises


v) Click Finish; the Layout Wizard will appear.
vi) Click Next twice, click the >> button, click Next twice, click the Tabular radio button, click Next, type Employee Details in the Frame Title field, type 10 in the Records Displayed field, and click Finish.
7. Press Ctrl+R to run the form module. Click the Deptno field in the Department Details block, choose Execute from the Query menu, and use the Up and Down arrow keys to view each department's employees.
8. Select Exit from the Action menu to close the running form.
9. Save the form with the name Emp_dept by pressing Ctrl+S.
10. Select Exit from the File menu to close Forms Builder.

11. Create a table called CANDIDATE with the following attributes:
Candidate-no    Char(6)  Primary Key
Candidate-name  Varchar2(15)
Course-name     Varchar2(15)
Sex             Char(1)
Marital-status  Char(1)
Create a form called CANDIDATE which should allow insert, update, delete and query.

Validations:
a) The candidate-no should be of 6 characters.
b) The course-name should be displayed as a list item (pop list). The default should be COMPUTER.
c) The sex should be displayed as a check box. The default should be checked.
d) The marital status should be displayed as a radio group with the following radio buttons.
Radio Button  Value to be stored
Married       M
Unmarried     U
Widow         W
The default should be Unmarried.
e) Attach an LOV to candidate-no, which should open immediately when you start the form.
Solution:
1. Start Oracle Database.
Hint: Start → Programs → Personal Oracle 7 for Windows 95 → Start Database.
2. Start SQL*Plus.
Hint: Start → Programs → Oracle for Windows 95 → SQL Plus 3.3. The Connect dialog will appear; type SCOTT in the User Name field, TIGER in the Password field, and click Connect.
3. Create the CANDIDATE table using the following SQL command.
CREATE TABLE candidate (
    Candidate_No    CHAR(6) PRIMARY KEY,
    Candidate_Name  VARCHAR2(15),
    Course_Name     VARCHAR2(15),
    Sex             CHAR(1),
    Marital_Status  CHAR(1)
);
4. Type EXIT at the SQL prompt to exit from SQL*Plus.
5. Start Developer/2000 and click Cancel on the Welcome to the Form Builder dialog.
Hint: Start → Programs → Developer 2000 R2.0 → Form Builder
6. Connect the database to Form Builder by pressing Ctrl+J; the Connect dialog will appear.

In this dialog, type the user name and password and click Connect or press the Enter key.
7. Creating a Canvas for the Window.
Hint: Click on the Canvases node and choose Create from the Navigator menu. A new canvas will appear under the Canvases node; click on it, change the canvas name to CANDIDATE_CAN, and press the Enter key.
8. Creating the CANDIDATE Block.
i) Click on the Data Blocks node and select Create from the Navigator menu; the New Data Block dialog will appear.
ii) Click on Use the Data Block Wizard and click OK; the Data Block Wizard dialog will appear.
iii) Click Next twice, type CANDIDATE in the Table or View field, click Refresh, click the >> button, and click Next.
iv) Click Finish; the Layout Wizard will appear.
v) Click Next twice, click the >> button, click Next three times, type Candidate Details in the Frame Title field, and click Finish.
9. Choose Layout Editor from the Tools menu.
10. Place 4 buttons on the canvas and set the following properties.
Object   Property  Setting
Button1  Name      Insert
         Label     Insert
Button2  Name      Update
         Label     Update
Button3  Name      Delete
         Label     Delete
Button4  Name      Query
         Label     Query
11. Attach the following code to the corresponding buttons.
Object  Trigger              Code
Insert  When-Button-Pressed  Create_Record;
Update  When-Button-Pressed  Commit_Form;
Delete  When-Button-Pressed  Delete_Record;
Query   When-Button-Pressed  Execute_Query;
12. Choose Object Navigator from the Tools menu, click on the LOVs node, and choose Create from the Navigator menu; the New LOV dialog will appear.
13. Type the following query in the Query Text field and click OK.
SELECT Candidate_No FROM Candidate;
14. Choose Layout Editor from the Tools menu, open the Property Palette by right-clicking on the Candidate_No item, and set the following property.
Property        Setting
List of Values  choose the LOV name from the list

15. Open the PL/SQL Editor by right-clicking on Candidate_No and type the following code.
Object: Candidate_No    Trigger: Post-Text-Item
BEGIN
  IF LENGTH(:Candidate_No) < 6 THEN
    MESSAGE('Candidate Number Should Not Be Less Than 6 Characters');
    MESSAGE('Candidate Number Should Not Be Less Than 6 Characters');
    RAISE Form_Trigger_Failure;
  END IF;
END;
(MESSAGE is called twice so that Forms shows the text in a modal alert rather than only on the status line.)

16. Open the Property Palette by right-clicking on the Course_Name item and set the following properties.
Object: Course_Name
Property          Setting
Item Type         List Item
Elements in List  Computers, Economics, Mathematics, English
Initial Value     Computers
17. Open the Property Palette by right-clicking on the Sex field and set the following properties.

Object: Sex
Property              Setting
Item Type             Check Box
Value When Checked    M
Value When Unchecked  F

20. Press Ctrl+R to run the form.
21. Choose Exit from the Action menu to close the form.
22. Save the form with the name Candidate by selecting File → Save.
23. Quit Form Builder by pressing Alt+F4.

18. Open the Property Palette by right-clicking on the Marital_Status field and set the following property.
Object: Marital_Status
Property   Setting
Item Type  Radio Group
19. Place one text control (note: not a text item) named Marital Status and three radio buttons belonging to the Marital_Status item on the canvas, and set the following properties.
Object         Property            Setting
Radio Button1  Name                MARRIED
               Label               Married
               Radio Button Value  M
Radio Button2  Name                UNMARRIED
               Label               Unmarried
               Radio Button Value  U
Radio Button3  Name                WIDOW
               Label               Widow
               Radio Button Value  W

12. Let us consider the following tables.

ORDERS table structure:
Order-no      Number(6)     Primary Key
Cust-no       Number(6)
Order-date    Date
Ship-date     Date
Order-filled  Char(1)
Payment-mode  Varchar2(15)

CUSTOMER table structure:
Cust-no         Number(6)     Primary Key
Cust-name       Varchar2(15)
City            Varchar2(15)
State           Varchar2(15)
Pin             Number(6)
Credit-History  Varchar2(15)

Create a form called order-form which will allow the operator to make data entries into the ORDERS table as per the following.
a. Create a block called ord-blk based on the ORDERS table.
b. Create a text item called cust-name in ord-blk which should have the same size and type as cust-name of the CUSTOMER table.
c. Write a trigger which will display the customer name in ord-blk whenever the operator enters a customer number into the cust-no text item of ord-blk.
d. Create an LOV called cust-list and add it to cust-no of ord-blk. Create an LOV button which will display the cust-list LOV when the operator clicks it. The button should be iconic, so the customer number can be either selected from the cust-list LOV or entered through the standard input device.
e. Ship-date should be 15 days after the order-date. Write a trigger which will display the ship-date as soon as the operator enters the order-date; however, the operator can modify the ship-date.
f. The order-filled column indicates whether the order has been executed or not. It should be displayed as a check box, checked by default. Write a trigger which will make ship-date non-updatable when order-filled is checked.
g. The payment-mode column should be displayed as a radio group with the following radio buttons.
Radio Button  Value
Check         Check
Cash          Cash
Credit Card   Credit Card
Write a trigger which will verify the credit history of the customer when the operator selects a radio button from the payment-mode radio group. If the credit history is POOR, the payment mode must be CASH. (Credit history can be Excellent, Good, or Poor.)
Solution:

1. Start Oracle Database.
Hint: Start → Programs → Personal Oracle 7 for Windows 95 → Start Database.
2. Start SQL*Plus.
Hint: Start → Programs → Oracle for Windows 95 → SQL Plus 3.3. The Connect dialog will appear; type SCOTT in the User Name field, TIGER in the Password field, and click Connect.
3. Create the CUSTOMER table using the following SQL command.
CREATE TABLE customer (
    Cust_No         NUMBER(6) PRIMARY KEY,
    Cust_Name       VARCHAR2(15),
    City            VARCHAR2(15),
    State           VARCHAR2(15),
    Pin             NUMBER(6),
    Credit_History  VARCHAR2(15)
);
4. Insert the following data into the CUSTOMER table using the INSERT command below (note the quotes around the substitution variables for character columns).
Cust_No  Cust_Name  City      State  Pin     Credit_History
1001     MADHAVI    TENALI    A.P.   522201  GOOD
1002     ROHINI     CHITTOOR  A.P.   522326  POOR
1003     SUDHA      TIRUPATI  A.P.   517501  EXCELLENT
1004     MADHU      TIRUPATI  A.P.   517501  GOOD
1005     SRIDEVI    NELLORE   A.P.   517501  POOR
INSERT INTO customer VALUES (&Cust_No, '&Cust_Name', '&City', '&State', &Pin, '&Credit_History');
5. Create the ORDERS table using the following SQL command.
CREATE TABLE orders (
    Order_No      NUMBER(6) PRIMARY KEY,
    Cust_No       NUMBER(6),
    Order_Date    DATE,
    Order_Filled  CHAR(1),
    Ship_Date     DATE,
    Payment_Mode  VARCHAR2(10)
);
6. Type EXIT at the SQL prompt to exit from SQL*Plus.
7. Start Developer/2000 and click Cancel on the Welcome to the Form Builder dialog.
Hint: Start → Programs → Developer 2000 R2.0 → Form Builder
8. Connect the database to Form Builder by pressing Ctrl+J; the Connect dialog will appear.
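For reference, substituting the first sample row into the INSERT above produces a complete statement like the following (a sketch; in SQL*Plus the & variables would actually be prompted for interactively):

```sql
-- First sample row from the table above, written out with literal values.
INSERT INTO customer VALUES (1001, 'MADHAVI', 'TENALI', 'A.P.', 522201, 'GOOD');
```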

In this dialog, type the user name and password and click Connect or press the Enter key.
9. Creating a Canvas for the Window.
Hint: Click on the Canvases node and choose Create from the Navigator menu. A new canvas will appear under the Canvases node; click on it, change the canvas name to ORDERS_CAN, and press the Enter key.
10. Creating the ORDERS Block.
i) Click on the Data Blocks node and select Create from the Navigator menu; the New Data Block dialog will appear.
ii) Click on Use the Data Block Wizard and click OK; the Data Block Wizard dialog will appear.
iii) Click Next twice, type ORDERS in the Table or View field, click Refresh, click the >> button, and click Next.
iv) Click Finish; the Layout Wizard will appear.
v) Click Next twice, click the >> button, click Next three times, type Orders Details in the Frame Title field, and click Finish.
11. Choose Layout Editor from the Tools menu.
12. Place 4 buttons on the canvas and set the following properties.
Object   Property  Setting
Button1  Name      Insert
         Label     Insert
Button2  Name      Update
         Label     Update
Button3  Name      Delete
         Label     Delete
Button4  Name      Query
         Label     Query
13. Attach the following code to the corresponding buttons.
Object  Trigger              Code
Insert  When-Button-Pressed  Create_Record;
Update  When-Button-Pressed  Commit_Form;
Delete  When-Button-Pressed  Delete_Record;
Query   When-Button-Pressed  Execute_Query;


14. Choose Object Navigator from the Tools menu, click on the LOVs node, and choose Create from the Navigator menu; the New LOV dialog will appear.

15. Type the following query in the Query Text field and click OK.
SELECT Cust_No FROM Customer;
16. Choose Layout Editor from the Tools menu, open the Property Palette by right-clicking on the Cust_No item, and set the following property.
Property        Setting
List of Values  choose the LOV name from the list
17. Add a display item to the right of the Cust_No field and set the following properties.
Object: Display_Item
Property       Setting
Name           Cust_Name
Database Item  No
18. Open the PL/SQL Editor by right-clicking on Cust_No and type the following code.
Object: Cust_No    Trigger: Post-Text-Item
BEGIN
  SELECT Cust_Name, Credit_History
    INTO :ORDERS.CUST_NAME, :GLOBAL.C_History
    FROM customer
   WHERE Cust_No = :ORDERS.CUST_NO;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    MESSAGE('INVALID CUSTOMER');
    MESSAGE('INVALID CUSTOMER');
    RAISE FORM_TRIGGER_FAILURE;
END;
19. Place a command button between CUST_NO and CUST_NAME and set the following properties.
Object: Button
Property  Setting
Name      BTN_CUST_NO
Label     Cust No.
Iconic    Yes
Width     25
Height    25
20. Attach the following code to the newly added button.
Object: Button    Trigger: When-Button-Pressed
DECLARE
  a_value_chosen BOOLEAN;
BEGIN
  a_value_chosen := Show_Lov('LOV13');
  IF NOT a_value_chosen THEN
    Message('You have not selected a value.');
    Message('You have not selected a value.');
    Bell;
    RAISE Form_Trigger_Failure;
  END IF;
END;

21. Attach the following code to Order_Date.
Object: Order_Date    Trigger: Post-Text-Item
:ORDERS.SHIP_DATE := :ORDERS.ORDER_DATE + 15;
22. Open the Property Palette by right-clicking on the ORDER_FILLED field and set the following properties.
Object: ORDER_FILLED
Property                           Setting
Item Type                          Check Box
Value When Checked                 Y
Value When Unchecked               N
Check Box Mapping of Other Values  Unchecked
23. Attach the following code to ORDER_FILLED.
Object: ORDER_FILLED    Trigger: When-Validate-Item
BEGIN
  IF Checkbox_Checked('orders.order_filled') THEN
    Set_Item_Property('orders.ship_date', ENABLED, PROPERTY_OFF);
  ELSE
    Set_Item_Property('orders.ship_date', ENABLED, PROPERTY_ON);
  END IF;
END;
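Requirement f asks only that ship-date be non-updatable, whereas disabling the whole item also blocks navigation into it. An alternative sketch toggles the UPDATE_ALLOWED item property instead; this is offered as a variant, not as the lab's prescribed solution.

```sql
-- Hypothetical variant of the When-Validate-Item trigger on ORDER_FILLED:
-- toggles only updatability, leaving the ship-date item navigable.
BEGIN
  IF Checkbox_Checked('orders.order_filled') THEN
    Set_Item_Property('orders.ship_date', UPDATE_ALLOWED, PROPERTY_FALSE);
  ELSE
    Set_Item_Property('orders.ship_date', UPDATE_ALLOWED, PROPERTY_TRUE);
  END IF;
END;
```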


24. Open the Property Palette by right-clicking on the Payment_Mode field and set the following properties.
Object: Payment_Mode
Property       Setting
Item Type      Radio Group
Initial Value  Cash
25. Place one text control named Payment Mode and three radio buttons belonging to the Payment_Mode item on the canvas, and set the following properties.
Object         Property            Setting
Radio Button1  Name                Check
               Label               Check
               Radio Button Value  Check
Radio Button2  Name                Cash
               Label               Cash
               Radio Button Value  Cash
Radio Button3  Name                Credit_Card
               Label               Credit Card
               Radio Button Value  Credit Card
26. Attach the following code to the Payment_Mode item.
Object: Payment_Mode    Trigger: When-Radio-Changed
DECLARE
  P_MODE VARCHAR2(15);
BEGIN
  P_MODE := :ORDERS.PAYMENT_MODE;
  IF :GLOBAL.C_HISTORY = 'POOR' AND P_MODE IN ('Check', 'Credit Card') THEN
    MESSAGE('PAYMENT SHOULD BE MADE IN TERMS OF CASH');
    MESSAGE('PAYMENT SHOULD BE MADE IN TERMS OF CASH');
    RAISE FORM_TRIGGER_FAILURE;
  END IF;
EXCEPTION
  WHEN OTHERS THEN NULL;
END;

27. Press Ctrl+R to run the form.
28. Choose Exit from the Action menu to close the form.
29. Save the form with the name Orders by selecting File → Save.
30. Quit Form Builder by pressing Alt+F4.

13. Create a library which will include the following mathematical functions:
a) ADD(x,y)  b) DIFF(x,y)  c) PROD(x,y)  d) DIV(x,y)
Create four separate form modules, i.e. Add-form, Diff-form, Prod-form, Div-form. Create a menu module to integrate all these form modules. The menu module will have the following items:
FORMS      EDIT   ACTION
Add-form   Cut    Clear
Diff-form  Copy   Exit
Prod-form  Paste
Div-form
Solution:
1. Start Oracle Database.
Hint: Start → Programs → Personal Oracle 7 for Windows 95 → Start Database.



2. Start Developer/2000 and click Cancel on the Welcome to the Form Builder dialog.
Hint: Start → Programs → Developer 2000 R2.0 → Form Builder
3. Connect the database to Form Builder by pressing Ctrl+J; the Connect dialog will appear. In this dialog, type the user name and password and click Connect or press the Enter key.
Creating the PL/SQL Library:
4. Create the PL/SQL library functions ADD, DIFF, PROD, and DIV.
a) Click on the PL/SQL Libraries node and choose Create from the Navigator menu.
b) Click on the Program Units node and choose Create from the Navigator menu; the New Program Unit dialog will appear.
c) Click on the Function radio button, type ADD(X IN NUMBER, Y IN NUMBER) in the Name field, and click OK.
d) Remove the soft hyphen from the function's first line and type NUMBER.
e) Type the following code between BEGIN and END:
RETURN X + Y;
f) Press the Compile button and click Close.
g) In the same way create the remaining functions as given below.
Object  Code
ADD     FUNCTION ADD(X IN NUMBER, Y IN NUMBER) RETURN NUMBER IS
        BEGIN
          RETURN X + Y;
        END;
DIFF    FUNCTION DIFF(X IN NUMBER, Y IN NUMBER) RETURN NUMBER IS
        BEGIN
          RETURN X - Y;
        END;
PROD    FUNCTION PROD(X IN NUMBER, Y IN NUMBER) RETURN NUMBER IS
        BEGIN
          RETURN X * Y;
        END;
DIV     FUNCTION DIV(X IN NUMBER, Y IN NUMBER) RETURN NUMBER IS
        BEGIN
          RETURN X / Y;
        END;
5. Save the PL/SQL library with the name MYLIB by pressing Ctrl+S, and close the PL/SQL library by pressing Ctrl+W.
Creating the Form Modules:
6. Create the following four form modules and save each separately with the names ADD, DIFF, PROD, DIV.
a) Click on the Forms node and choose Create from the Navigator menu.
b) Click on the Attached Libraries node and choose Create from the Navigator menu; the Attach Library dialog will appear. Click Browse, select MYLIB.PLL, and click Attach. An alert will appear; click the No button.
c) Click on the Canvases node and choose Create from the Navigator menu.


d) Click on the Data Blocks node and choose Create from the Navigator menu; the New Data Block dialog will appear.
e) Choose Build a new data block manually and click OK. Change the Name property of the newly created block to ADD_BLK.
f) Design the form with two text items, N1 and N2, an item named RESULT, and a command button.
g) Attach the following code to the command button.
Object  Trigger              Code
ADD     When-Button-Pressed  :RESULT := :ADD_BLK.N1 + :ADD_BLK.N2;
DIFF    When-Button-Pressed  :RESULT := :DIFF_BLK.N1 - :DIFF_BLK.N2;
PROD    When-Button-Pressed  :RESULT := :PROD_BLK.N1 * :PROD_BLK.N2;
DIV     When-Button-Pressed  :RESULT := :DIV_BLK.N1 / :DIV_BLK.N2;
h) Save the form module with the name ADD.
i) Press Ctrl+R to run the form module.
j) Close the form module.
7. In the same way create the remaining 3 forms (DIFF, PROD, and DIV, with blocks DIFF_BLK, PROD_BLK, and DIV_BLK), using the corresponding trigger code from the table above.
Creating the Menu Module:
8. Click on the Menus node and choose Create from the Navigator menu.
9. Select Menu Editor from the Tools menu and design the user-defined menu with the FORMS, EDIT, and ACTION menus listed in the exercise.
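Note that the button triggers above repeat the arithmetic inline, so the attached MYLIB library is never actually exercised. Since each form attaches MYLIB.PLL, a trigger could equally delegate to the library function — a sketch, assuming the block and item names used above:

```sql
-- Hypothetical When-Button-Pressed trigger for the ADD form,
-- calling the MYLIB library function instead of computing inline.
:RESULT := ADD(:ADD_BLK.N1, :ADD_BLK.N2);
```

The same substitution applies to the DIFF, PROD, and DIV buttons with their respective functions.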

Attach the following code to the corresponding menu items.
Object     Code
Add-form   OPEN_FORM('C:\MYDOCU~1\ADD');
Diff-form  OPEN_FORM('C:\MYDOCU~1\DIFF');
Prod-form  OPEN_FORM('C:\MYDOCU~1\Prod');
Div-form   OPEN_FORM('C:\MYDOCU~1\Div');
Cut        CUT_REGION;
Copy       COPY_REGION;
Paste      PASTE_REGION;
Clear      CLEAR_BLOCK;
Exit       EXIT_FORM;
10. Save the menu module with the name MENU by pressing Ctrl+S.
11. Generate the menu module by pressing Ctrl+T and close the menu module by pressing Ctrl+W.
12. Link the menu module to the existing forms: open the form modules designed above and set the following property on each.
Object  Property     Setting
Add     Menu Module  C:\mydocu~1\menu.mmx
Diff    Menu Module  C:\mydocu~1\menu.mmx
Prod    Menu Module  C:\mydocu~1\menu.mmx
Div     Menu Module  C:\mydocu~1\menu.mmx
13. Press Ctrl+R to run the form.

14. Create a report on the ORDERS table, customer-wise.
1. Start Report Builder; the Welcome to Report Builder dialog will appear.
Hint: Start → Programs → Developer 2000 R2.0 → Report Builder
2. Click on Use the Report Wizard and click OK; the Report Wizard will appear.
3. Click Next, type Customer wise Orders Report in the Title field, click Group Left, and click Next.
4. Click Connect, type SCOTT/TIGER in the User Name field, and choose Connect in the Connect dialog.
5. Type the following query in the SQL Query Statement field and click Next.
SELECT Cust_No, Order_No, Order_Date, Ship_Date, Payment_Mode
FROM orders
ORDER BY Cust_No;
6. Click on the Cust_No field, click the > button, and click Next.
7. Click the >> button, click Next three times, and finally click Finish.
8. The report will appear.
9. Save the report with the name CUSTREP by pressing Ctrl+S.

15. Create a report on the ORDERS table, order-wise.
1. Start Report Builder; the Welcome to Report Builder dialog will appear.
Hint: Start → Programs → Developer 2000 R2.0 → Report Builder
2. Click on Use the Report Wizard and click OK; the Report Wizard will appear.
3. Click Next, type Order wise Orders Report in the Title field, click Group Left, and click Next.
4. Click Connect, type SCOTT/TIGER in the User Name field, and choose Connect in the Connect dialog.
5. Type the following query in the SQL Query Statement field and click Next.
SELECT Order_No, Cust_No, Order_Date, Ship_Date, Payment_Mode
FROM orders
ORDER BY Order_No;
6. Click on the Order_No field, click the > button, and click Next.
7. Click the >> button, click Next three times, and finally click Finish.
8. The report will appear.
9. Save the report with the name ORDERSREP by pressing Ctrl+S.

16. Create a report on the ORDERS table based on payment-mode.
1. Start Report Builder; the Welcome to Report Builder dialog will appear.
Hint: Start → Programs → Developer 2000 R2.0 → Report Builder
2. Click on Use the Report Wizard and click OK; the Report Wizard will appear.
3. Click Next, type Payment wise Orders Report in the Title field, click Group Left, and click Next.
4. Click Connect, type SCOTT/TIGER in the User Name field, and choose Connect in the Connect dialog.
5. Type the following query in the SQL Query Statement field and click Next.
SELECT Payment_Mode, Order_No, Cust_No, Order_Date, Ship_Date
FROM orders
ORDER BY Payment_Mode;
6. Click on the Payment_Mode field, click the > button, and click Next.
7. Click the >> button, click Next three times, and finally click Finish.
8. The report will appear.
9. Save the report with the name ORDERSREP by pressing Ctrl+S.
