
Lab Assignment File

J2EE-Enterprise Java

Name of Student
(CDAC PRN)

Diploma in IT Architecture
Term-II
Year: 2017-18

CDAC & C.V. Raman College of Engineering, Bhubaneswar

Date of Submission:
Table of Index
S.No. List of Practical Date Signature
01
02
03
04
05
06
07
08
09
10

1. Create a directory on HDFS (Hadoop Distributed File System) in your home directory.
Ans:- hadoop fs -mkdir /dir1
hadoop fs -ls

2. Create two more directories in a single command in your home directory.
Ans:- hadoop fs -mkdir /dir3 /dir4
hadoop fs -ls /
3. List the directories created in HDFS and check in what sort order the contents are listed by default.
Ans:- hadoop fs -ls /
By default, ls sorts the listed entries alphabetically by file name.

4. Create a sample file (eg: sample.txt) in any of the directories created above.
Ans:- hadoop fs -touchz /dir3/sample.txt
To check files & directories under a directory:
hadoop fs -ls -R /

5. Copy a file from the local file system to one of the directories created on HDFS. (This process of copying a file from the local file system to HDFS is called uploading files to HDFS.)
Ans:- touch sample2.txt
hadoop fs -put /home/batch_2/sample2.txt /dir3
hadoop fs -ls /dir3
6. View the uploaded file.
Ans:-hadoop fs -ls -R /

7. Copy one more file from the local file system to another directory created on HDFS.
Ans:- touch sample3.txt
hadoop fs -copyFromLocal /home/batch_2/sample3.txt /dir4

8. Copy a file from HDFS to the local file system. (This is called downloading a file from HDFS to the local file system.)
Ans:- hadoop fs -copyToLocal /dir3/sample.txt /home/batch_2/Desktop

9. Look at the contents of the file that is uploaded on HDFS.
Ans:- hadoop fs -cat /dir3/sample2.txt

10. Copy the file from one directory to another directory in HDFS.
Ans:-hadoop fs -cp /dir4/abc.txt /dir3
11. Move the file from one directory to another directory in HDFS.
Ans:- hadoop fs -mv /dir4/sample3.txt /dir3

12. Copy a file from/to the local file system to HDFS, using the copyFromLocal and copyToLocal commands.
Ans:- hadoop fs -copyFromLocal /home/batch_2/abc.txt /dir1
hadoop fs -copyToLocal /dir3/sample.txt /home/batch_2/Desktop

13. Display the last few lines of a file in HDFS.
Ans:- gedit text1.txt
hadoop fs -copyFromLocal /home/batch_2/text1.txt /dir3
hadoop fs -ls -R /
hadoop fs -cat /dir3/text1.txt | tail -n 2
14. Display the size of the file in KB and MB in HDFS.
Ans:-hadoop fs -du -h /dir3/text1.txt
15. Append a file from the local file system to a file on HDFS.
Ans:- gedit text1.txt
hadoop fs -appendToFile /home/batch_2/text1.txt /dir3/text1.txt

hadoop fs -cat /dir3/text1.txt

16. Merge two file contents (files present on HDFS) into one file (this file should be present on the local file system).
Ans:- gedit text2.txt
hadoop fs -copyFromLocal /home/batch_2/text2.txt /dir3
hadoop fs -getmerge /dir3/text1.txt /dir3/text2.txt /home/batch_2/merged.txt
17. Get Access Control Lists (ACLs) of the files and directories created.
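Ans:- (a possible answer; getfacl reads the ACLs, and the paths assume the files created above)
hadoop fs -getfacl /dir3/text1.txt
hadoop fs -getfacl /dir3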
18. Copy one directory structure to another.
Ans:-
hadoop fs -cp -p /dir3 /dir1
19. Set the replication of the file created to 2.
Ans:- hadoop fs -setrep 2 /dir3/text1.txt

20. Remove a file from the directory in HDFS.
Ans:- hadoop fs -rm /dir4/abc.txt
21. Remove a directory in HDFS.
Ans:- hadoop fs -rm -r /dir4
(the older -rmr form is deprecated)

22. Check whether a file or directory exists in HDFS or not.
Ans:- hadoop fs -test -e /dir3/text1.txt
echo $?
(The -test command reports through the shell exit status: 0 means the path exists.)

23. Count the number of files in a directory.
Ans:- hadoop fs -count /dir1


Assignment_HBase
2. A company ABC started with five employees: Mr. A, Mr. B, Miss C, Miss D and Mr. E. They were designated as Principal Technical Officer, Senior Technical Officer, Technical Officer and Project Engineer. Their initial salaries were 10000, 8000, 6000, 4000 and 2000 respectively for the month of January. They were given 2 Restricted Holidays, 8 Paid Leaves and 2 Casual Leaves for the first six months. Create the table for the same (a possible creation sketch follows) and perform the following:
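(a possible initial table, assuming row keys 1-5 and the column families Employee, salary and Leave that the answers below use)
create 'company','Employee','salary','Leave'
put 'company','1','Employee:Name','Mr. A'
put 'company','1','Employee:designation','Principal Technical Officer'
put 'company','1','salary:amount','10000'
put 'company','1','Leave:restricted','2'
put 'company','1','Leave:paid','8'
put 'company','1','Leave:casual','2'
(similar put commands follow for row keys 2 to 5)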

a. Mr. E got a promotion and was now Technical Officer.


Ans:-
put 'company','5','Employee:designation','Technical officer'

b. Mr. F has replaced Mr. E as Project Engineer.

Ans:- put 'company','5','Employee:Name','Mr. F'
c. Mr. A and Mr. B got an increment in their salaries from March onwards.
Ans:- put 'company','1','salary:amount','15000'
put 'company','2','salary:amount','15000'
d. Miss C and Miss D got an amount added to their salaries for the month of April only.
Ans:- put 'company','3','salary:amount','10000'
put 'company','4','salary:amount','9000'

e. Mr. E got an increment to his salary, but for May only.
Ans:-put 'company','5','salary:amount','7000'
f. Mr. B has been promoted and now he's Principal Technical Officer from June
onwards.
Ans:- put 'company','2','Employee:designation','Principal Technical Officer'

g. Mr. A has left the job.


Ans:- deleteall 'company','1'

h. Miss C is now our new Senior Technical Officer.


Ans:- put 'company','3','Employee:designation','Senior Technical Officer'
i. As per circular, their paid leave has been reduced to 6 in February.
Ans:- put 'company','1','Leave:paid','6'
(similar put commands for the other row keys)
j. Miss D has utilized all her casual leave.
Ans:- put 'company','4','Leave:casual','0'
k. Mr. M has joined the company as Joint Director, and Mr. B will work under him.
Ans:-
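(a possible answer, assuming Mr. M gets the new row key '6'; the reports_to column is illustrative, since HBase has no built-in way to model the reporting relationship)
put 'company','6','Employee:Name','Mr. M'
put 'company','6','Employee:designation','Joint Director'
put 'company','2','Employee:reports_to','Mr. M'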
Ques1 Create a table customer_data.
Ans:- create 'customer_data','customer','sales'
(HBase requires at least one column family; 'customer' and 'sales' are the families used in the answers below.)

Ques2 Insert the data in customer_data using appropriate command.
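Ans:- (a possible answer; the source does not show the data set, so the row keys and city values here are illustrative, matching the columns used below)
put 'customer_data','103','customer:city','Delhi'
put 'customer_data','103','sales:amount','$500.00'
put 'customer_data','104','customer:city','Mumbai'
put 'customer_data','104','sales:amount','$800.00'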

Ques3 Write command to check whether a table exists or not.


Ans:- exists 'customer_data'

Ques4 Change the value of $500.00 to $1500.00 in amount column.

Ans:- put 'customer_data','103','sales:amount','$1500.00'

Ques5 List the values present in row 103 only.


Ans:- get 'customer_data','103'

Ques6 Delete the city value of row id 104.


Ans:- delete 'customer_data','104','customer:city'

Ques7 Drop table customer_data.


Ans:- disable 'customer_data'
drop 'customer_data'

Ques1. Insert the above data in hbase table using commands.

Ques2. Write the command to check the no. of rows in a table.


Ans:- count 'customer_data'

Ques3. Write command to check all the tables in hbase.

Ans:-list

Ques4 Change the value of amount in customer_id 104 to $1800.00.


Ans:-put 'customer_data','104','sales:amount','$1800.00'

Ques5. Drop the table from the hbase.


Ans:- disable 'customer_data'
drop 'customer_data'

ASSIGNMENT_MapReduce

1. Odd/even program

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class Odd_Even {

    public static void main(String[] args) throws Exception {
        Configuration c = new Configuration();
        String[] files = new GenericOptionsParser(c, args).getRemainingArgs();
        Path input = new Path(files[0]);
        Path output = new Path(files[1]);
        Job j = new Job(c, "ODDEVEN");
        j.setJarByClass(Odd_Even.class);
        j.setMapperClass(MapForOdd_Even.class);
        j.setReducerClass(ReduceForOdd_Even.class);
        // The map output types (Text, IntWritable) differ from the reduce
        // output types (IntWritable, Text), so both pairs must be declared.
        j.setMapOutputKeyClass(Text.class);
        j.setMapOutputValueClass(IntWritable.class);
        j.setOutputKeyClass(IntWritable.class);
        j.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(j, input);
        FileOutputFormat.setOutputPath(j, output);
        System.exit(j.waitForCompletion(true) ? 0 : 1);
    }

    public static class MapForOdd_Even extends Mapper<LongWritable, Text, Text, IntWritable> {
        public void map(LongWritable key, Text value, Context con)
                throws IOException, InterruptedException {
            // Each input line holds comma-separated integers.
            String line = value.toString();
            String[] words = line.split(",");
            for (String word : words) {
                int num = Integer.parseInt(word.trim());
                con.write(new Text(word.trim()), new IntWritable(num));
            }
        }
    }

    public static class ReduceForOdd_Even extends Reducer<Text, IntWritable, IntWritable, Text> {
        public void reduce(Text word, Iterable<IntWritable> values, Context con)
                throws IOException, InterruptedException {
            for (IntWritable val : values) {
                int v = val.get();
                // Emit each number with its parity label.
                con.write(new IntWritable(v), new Text(v % 2 == 0 ? "even" : "odd"));
            }
        }
    }
}
hadoop jar /home/batch_2/Downloads/odd_even.jar /dir2/test.txt /op123
hadoop fs -cat /op123/part-r-00000
2. Word count program

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static void main(String[] args) throws Exception {
        Configuration c = new Configuration();
        String[] files = new GenericOptionsParser(c, args).getRemainingArgs();
        Path input = new Path(files[0]);
        Path output = new Path(files[1]);
        Job j = new Job(c, "wordcount");
        j.setJarByClass(WordCount.class);
        j.setMapperClass(MapForWordCount.class);
        j.setReducerClass(ReduceForWordCount.class);
        j.setOutputKeyClass(Text.class);
        j.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(j, input);
        FileOutputFormat.setOutputPath(j, output);
        System.exit(j.waitForCompletion(true) ? 0 : 1);
    }

    public static class MapForWordCount extends Mapper<LongWritable, Text, Text, IntWritable> {
        public void map(LongWritable key, Text value, Context con)
                throws IOException, InterruptedException {
            String line = value.toString();
            String[] words = line.split(" ");
            for (String word : words) {
                // Emit each word (normalised to upper case) with a count of 1.
                con.write(new Text(word.toUpperCase().trim()), new IntWritable(1));
            }
        }
    }

    public static class ReduceForWordCount extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text word, Iterable<IntWritable> values, Context con)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            // Write the total once per word, after the loop.
            con.write(word, new IntWritable(sum));
        }
    }
}
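(a possible way to run it, assuming the class is packaged as wordcount.jar; the input and output paths below are illustrative, mirroring the odd/even example)
hadoop jar /home/batch_2/Downloads/wordcount.jar /dir2/test.txt /op124
hadoop fs -cat /op124/part-r-00000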
3. Word length program

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordLength {

    public static void main(String[] args) throws Exception {
        Configuration c = new Configuration();
        String[] files = new GenericOptionsParser(c, args).getRemainingArgs();
        Path input = new Path(files[0]);
        Path output = new Path(files[1]);
        Job j = new Job(c, "wordlength");
        j.setJarByClass(WordLength.class);
        j.setMapperClass(MapForWordCount.class);
        // No reducer is set, so the identity reducer writes the map output as-is.
        j.setOutputKeyClass(Text.class);
        j.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(j, input);
        FileOutputFormat.setOutputPath(j, output);
        System.exit(j.waitForCompletion(true) ? 0 : 1);
    }

    public static class MapForWordCount extends Mapper<LongWritable, Text, Text, IntWritable> {
        public void map(LongWritable key, Text value, Context con)
                throws IOException, InterruptedException {
            String line = value.toString();
            String[] words = line.split(" ");
            for (String word : words) {
                if (word.length() > 0) {
                    // Emit each word with its character length.
                    con.write(new Text(word), new IntWritable(word.length()));
                }
            }
        }
    }
}
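(a possible way to run it, assuming the class is packaged as wordlength.jar; paths are illustrative)
hadoop jar /home/batch_2/Downloads/wordlength.jar /dir2/test.txt /op125
hadoop fs -cat /op125/part-r-00000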
PIG

uidai = load '/home/batch2/Downloads/UIDAI-ENR-GEOGRAPHY-20180101.csv' USING PigStorage(',') as (State:chararray, District:chararray, AadhaarGenerated:int, EnrolmentRejected:int);
filter_data = filter uidai by State == 'West Bengal';
filter_data1 = filter filter_data by AadhaarGenerated > 0;
district = foreach filter_data1 generate TOKENIZE(District);
dump district;
2. A show-cause notice is to be generated for the districts with rejections greater than 10.
Ans:-
uidai = load '/home/batch2/Downloads/UIDAI-ENR-GEOGRAPHY-20180101.csv' USING PigStorage(',') as (State:chararray, District:chararray, AadhaarGenerated:int, EnrolmentRejected:int);
filter_data1 = filter uidai by EnrolmentRejected > 10;
district = foreach filter_data1 generate TOKENIZE(District);
dump district;

3. There are two departments X and Y. X handles enrolment and Y handles rejected data. Provide them their data sets of interest.
Ans:-
uidai = load '/home/batch2/Downloads/UIDAI-ENR-GEOGRAPHY-20180101.csv' USING PigStorage(',') as (State:chararray, District:chararray, AadhaarGenerated:int, EnrolmentRejected:int);
X = foreach uidai generate State, District, AadhaarGenerated;
dump X;
Y = foreach uidai generate State, District, EnrolmentRejected;
dump Y;

4. Which state bags the first prize for Aadhaar enrolment?
Ans:-
uidai = load '/home/batch2/Downloads/UIDAI-ENR-GEOGRAPHY-20180101.csv' USING PigStorage(',') as (State:chararray, District:chararray, AadhaarGenerated:int, EnrolmentRejected:int);
ordered = order uidai by AadhaarGenerated DESC;
limit_data = limit ordered 1;
token = foreach limit_data generate State, District, AadhaarGenerated;
dump token;

The script above only finds the single district record with the most enrolments; to rank whole states, group by state and sum the counts first:
uidai = load '/home/batch2/Downloads/UIDAI-ENR-GEOGRAPHY-20180101.csv' USING PigStorage(',') as (state:chararray, district:chararray, adgn:int, enrj:int);
e = group uidai by state;
f = foreach e generate group as state, SUM(uidai.adgn) as adgn1;
c = order f by adgn1 desc;
limit_data = limit c 1;
dump limit_data;
HIVE_ASSIGNMENT
Topic: Apache Hive - DDL
1. Install Hive on the system using the distribution provided.
2. Create two databases with the names 'retail_db' and 'bank_db', having comments and located at /user/hive/mywarehouse/.
Ans:-
create database retail_db comment 'Retail' location '/user/hive/mywarehouse/retail_db';
create database bank_db comment 'bank' location '/user/hive/mywarehouse/bank_db';

3. Create a Hive managed table 'employee' in the 'bank_db' database with the following structure:

Field Name    Data Type
EID           INT
Name          STRING
Salary        FLOAT
Designation   STRING
Ans:-use bank_db;

create table employee(


> Eid int,
> Name string,
> salary float,
> Designation string)
> row format delimited fields terminated by ',';
4. Create external table 'new_employee' with the same structure at HDFS location /public/bank_db.
Ans:-> create external table new_employee(
> Eid int,
> Name string,
> salary float,
> Designation string)
> row format delimited fields terminated by ',' location '/public/bank_db'
>;
5. Rename table 'new_employee' to 'all_employee'.
Ans:-alter table new_employee rename to all_employee;
6. Apply the following changes to the 'employee' and 'new_employee' tables:

Field Name    Old Data Type    New Field Name    New Data Type
EID           INT              Emp_ID            INT
Name          STRING           Emp_Name          STRING
Salary        FLOAT            Salary            DOUBLE
Designation   STRING           Designation       VARCHAR(30)
Ans:- alter table all_employee change Eid Emp_id int;
alter table all_employee change Name Emp_Name string;
alter table all_employee change salary salary double;
alter table all_employee change Designation Designation varchar(30);
(similar alter commands apply to the 'employee' table)
7. Add two columns to the above table:
Column Name: Dt_of_Joining, Data Type: DATE
Column Name: Phone, Data Type: INT
Ans:- alter table all_employee add columns (
> dt_of_joining date,
> phone int)
>;
8. Drop column 'Salary' from the above table.
Ans:-alter table all_employee replace columns(emp_id int,emp_name string,designation varchar(30));

9. Create a table in Hive that can hold the following records. Choose appropriate field names and types.
Ajay, Lumia 1020, Nokia, 10000
Shiva, iphone6, Apple, 34000
Srejeeth, Galaxy 4, Samsung, 20000
Ans:- create table telecom(
> name string, model string, company string, cost float);

insert into telecom values("Ajay","Lumia 1020","Nokia",10000);
insert into telecom values("Shiva","iphone6","Apple",34000);
insert into telecom values("Srejeeth","Galaxy 4","Samsung",20000);
10. Apply the 'show' DDL command for the databases and tables created.
Ans:-show databases;
show tables;
11. Apply the 'describe' DDL command for the databases and tables created.
Ans:- describe database retail_db;
describe telecom;

12. Create all 6 tables listed in the figure in Hive as managed tables, with delimiter "|", in database 'retail_db'.
13. Drop tables 'order_items' and 'orders'.
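Ans:- (assuming the tables were created in retail_db)
use retail_db;
drop table order_items;
drop table orders;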
14. Create the above two tables again as external tables at a location different from that of the rest of the tables.
15. Drop database 'bank_db' along with all the tables in it in one command.
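Ans:- drop database bank_db cascade;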
Topic: Apache Hive - DML
1. Create database 'company' located at /user/hive/mywarehouse/.
Ans:-create database company location '/user/hive/mywarehouse'
>;
2. Create a Hive managed, partitioned table 'employee' in the 'company' database which can store the following data:
1,Anne,Admin,50000,A
2,Gokul,Admin,50000,B
3,Janet,Sales,60000,A
4,Hari,Admin,50000,C
5,Sanker,Admin,50000,C
6,Margaret,Tech,12000,A
7,Nirmal,Tech,12000,B
8,jinju,Engineer,45000,B
9,Nancy,Admin,50000,A
10,Andrew,Manager,40000,A
11,Arun,Manager,40000,B
12,Harish,Sales,60000,B
13,Robert,Manager,40000,A
14,Laura,Engineer,45000,A
15,Anju,Ceo,100000,B
Partition the table using the fourth column.
Ans:- create table employee(
> id int, name string, designation string, salary double, grade string)
> row format delimited fields terminated by ',';
To load data:
load data local inpath '/home/batch_2/Desktop/abc.txt' into table employee;
create table emp_part(
> id int, name string, designation string, grade string)
> partitioned by (salary double)
> row format delimited fields terminated by ',' lines terminated by '\n'
> stored as textfile;
To enable dynamic partitioning:
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
To load the partitioned table:
insert overwrite table emp_part partition(salary)
> select id, name, designation, grade, salary from employee;
To see a particular salary partition:
hadoop fs -cat /user/hive/mywarehouse/emp_part/salary=50000.0/000000_0
3. Save the above data in a text file on the local file system and on HDFS. Load data into the 'employee' table from both the LOCAL & HDFS filesystems.
Ans:- load data local inpath '/home/batch_2/Desktop/abc.txt' into table employee;
hadoop fs -put /home/batch_2/Desktop/abc.txt /dir
load data inpath '/dir/abc.txt' into table employee;

4. Add the following records to the text file and load the modified data into the 'employee' table using OVERWRITE.
16,Aarathi,Manager,40000,B
17,Parvathy,Engineer,45000,B
18,Gopika,Admin,50000,B
19,Steven,Engineer,45000,A
20,Michael,Ceo,100000,A
Ans:-load data local inpath '/home/batch_2/Desktop/abc.txt' overwrite into table employee;
5. Create another table 'new_employees' with the following records:
12,Priyanka,Admin,40000,C
22,Paras,Engineer,45000,B
23,Gopal,Sales,50000,C
24,Sukant,Engineer,45000,A
25,Murugan,CFO,100000,A
Append these records into the 'employee' table using INSERT.
Ans:-> create table new_employee(
> id int,name string,designation string,salary double,grade string)
> row format delimited fields terminated by ',';
load data local inpath '/home/batch_2/Desktop/bcd.txt' into table new_employee;
insert into employee
> select * from new_employee;

6. List the employees having salary>50000.


Ans:- select * from employee where salary > 50000;
7. Select the list of employees whose names start with 'A'.
Ans:- select name from employee where name like 'A%';
8. List the number of employees for each Designation.
Ans:- select designation, count(*) from employee group by designation;
9. Order the list of employees according to their names.
Ans:- select * from employee order by name;

10. Partition the employees table based upon the salary.


Ans:-create table emp_part(
> id int,name string,designation string,grade string)
> partitioned by (salary double)
> row format delimited fields terminated by ',' lines terminated by '\n'
> stored as textfile;
11. List the top 5 highly paid employees.
Ans:-
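(a possible answer)
select * from employee order by salary desc limit 5;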
12. List all admins whose salary is < 45000.
Ans:- select * from employee where designation = 'Admin' and salary < 45000;

13. Compute the average salary paid by the company to its employees.
Ans:-select avg(salary) from employee;

14. Compute the total salary paid by the company per month.
Ans:-select sum(salary) from employee;

15. List the employee with the highest salary. List the employee with the minimum salary.
Ans:- select * from employee order by salary desc limit 1;
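For the minimum salary:
select * from employee order by salary asc limit 1;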
16. Display DISTINCT salaries paid by the company.
Ans:- select distinct(salary) from employee;
17. List the employees in increasing order of salary paid.
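Ans:- select * from employee order by salary asc;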
18. Make two partitions of the table with the following criteria:
a. Partition 1 consisting of employees having salary <=50000
b. Partition 2 consisting of employees having salary > 50000
19. List the employees having salary between 50000 and 60000 in partition 2.
20. Create partition 3 for all Engineers having salary >= 45000.
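Ans:- (a possible sketch for 18-20, assuming a new table emp_band with a derived partition column salary_band; the band names p1/p2/p3 are illustrative)
create table emp_band(id int, name string, designation string, salary double, grade string)
partitioned by (salary_band string);
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
-- 18. Two partitions by salary:
insert overwrite table emp_band partition(salary_band)
select id, name, designation, salary, grade,
case when salary <= 50000 then 'p1' else 'p2' end
from employee;
-- 19. Employees in partition 2 having salary between 50000 and 60000:
select * from emp_band where salary_band = 'p2' and salary between 50000 and 60000;
-- 20. Partition 3 for all Engineers having salary >= 45000:
insert into table emp_band partition(salary_band = 'p3')
select id, name, designation, salary, grade from employee
where designation = 'Engineer' and salary >= 45000;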
