
RAC ORACLE INSTALLATION

Automatic Storage Management: ways of storing database files:
1. Regular files on a file system
2. OMF files (Oracle Managed Files)
3. Files on raw devices
4. ASM files

1. Regular file system: the file system arranges the files for effective utilization of the storage. The access path when reading the database is: Application -> Database -> File System -> OS -> Hardware.

When the database is accessed through a file system and the OS buffer cache is small, performance is poor.

OMF (Oracle Managed Files) - parameters:
1. db_create_file_dest - storage location for the database files.
2. db_create_online_log_dest_1 - location for the redo log files.

Raw devices: I/O skips the OS buffer cache.
1. No file system check is done.
2. Only one file can be stored on a raw device.
3. The size of a raw device cannot be increased.
4. Only 16 raw devices can be created.
5. Normal Linux file commands do not work on raw devices, but dd does.

Note: to back up files kept on raw devices we cannot use normal Unix commands such as cp or tar; we use dd, which copies the entire raw device irrespective of any file system.
6. I/O statistics cannot be calculated on raw devices.
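For example, a whole raw device can be backed up and restored with dd (a sketch; the device and backup file paths are illustrative):
#dd if=/dev/raw/raw7 of=/backup/raw7.dd bs=1M
#dd if=/backup/raw7.dd of=/dev/raw/raw7 bs=1M
The first command copies the entire device to a backup file; the second writes it back.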


LVM: to use raw devices conveniently we can configure a logical volume manager, for example:
1. Veritas Volume Manager (third-party tool)
2. Solaris Volume Manager (from the O/S)

Implementing RAID (Redundant Array of Independent Disks):
RAID 0 - striping: gives performance but no redundancy.
RAID 1 - mirroring: availability.
RAID 5 - striping with parity: performance plus availability.

Oracle 10g introduced ASM, which is similar to an LVM. ASM combines the convenience of OMF with the performance of raw devices: it provides performance through striping and availability through mirroring.

The ASM instance is small (about 60 to 120 MB of memory) and contains only a few background processes.

Redundancy levels:
1. External redundancy
2. Normal redundancy
3. High redundancy

Disk group: a combination of raw devices (ASM disks). The number of failure groups (failgroups) determines the redundancy:
External redundancy - one failure group (mirroring is left to the external storage).
Normal redundancy - two failure groups.
High redundancy - three failure groups.

Two types of stripe units:
1. Fine (128 KB) - control files and redo log files.
2. Coarse (one allocation unit, 1 MB by default) - database files.
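Once disk groups exist, these settings can be checked from the ASM instance (a sketch; connect as sysasm, output depends on your disk groups):
sql>select name, type, total_mb, free_mb from v$asm_diskgroup;
(the TYPE column shows EXTERN / NORMAL / HIGH redundancy)
sql>select name, stripe, redundancy from v$asm_template where group_number=1;
(the STRIPE column shows FINE or COARSE for each file type; group_number 1 is just the first disk group)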

When one more raw device is added to an existing disk group, Oracle automatically rebalances the extents across all disks. We can have up to 63 disk groups and one million ASM files. Raw devices can be added to and dropped from a disk group.

ASM instance parameters:
1. instance_name=+ASM
2. instance_type=asm
3. asm_diskgroups
4. asm_diskstring
5. large_pool_size
6. shared_pool_size
7. asm_power_limit (default 1)
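For example, disks can be added to or dropped from an existing disk group online (a sketch; the disk group name extdg and the device path are illustrative, matching the disk groups created later in these notes):
sql>alter diskgroup extdg add disk '/dev/sdb14';
sql>alter diskgroup extdg drop disk EXTDG_0001;
(disks are dropped by their ASM disk name as shown in v$asm_disk, not by path)
Both operations trigger an automatic rebalance of the extents.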

The asm_power_limit parameter matters when raw devices are being added: it controls how fast the resulting rebalance runs. In 11g, SYSASM is the privilege used to connect to the ASM instance.

Mandatory background processes: ASMB, RBAL, ARBn, GMON; in 11g also KATE, PSP0 and PZ9x. Mirroring is done at the extent level, not at the disk level.

Note: the ASM instance is a gateway between the ASM disk groups and the ASM client databases.
ASMB: runs in the RDBMS (client database) instances and connects to a foreground process of the ASM instance; it is the channel through which a database talks to ASM.
RBAL: the ASM rebalance master process; it coordinates the rebalance operation and works out the plan for good load balancing when a new disk is added to an existing disk group. In a database instance, RBAL performs the global opens of the disks in the disk groups.
ARBn: the ASM slave rebalance processes; they move the extents from one disk to another within a disk group.
GMON: the group monitor; responsible for certain disk group monitoring operations that maintain the ASM metadata inside the disk groups.
KATE: responsible for bringing offlined disks and disk groups back online.
PSP0: the process spawner; responsible for creating the other background processes.
PZ9x: slave processes responsible for fetching the data when global (GV$) views are queried.
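The effect of the power setting can be seen by running a rebalance manually and watching it (a sketch; the disk group name extdg matches the one created later in these notes):
sql>alter diskgroup extdg rebalance power 5;
(power 5 overrides asm_power_limit for this operation; power 0 pauses a rebalance)
sql>select group_number, operation, state, power, est_minutes from v$asm_operation;
A higher power runs more ARBn slaves, so the extent movement finishes faster at the cost of extra I/O.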

11g enhancements in disk groups: an ASM file can be up to 140 PB (petabytes) in an external redundancy disk group, 42 PB with normal redundancy and 15 PB with high redundancy. We can have ten thousand ASM disks across the disk groups and 63 disk groups per ASM instance. ASMB and ARBn are the processes through which the databases communicate and do rebalance work with the ASM instances.

Header status - shows whether a raw device is already used in a disk group or not:
Candidate - ready to be used.
Member - already part of an ASM disk group.
Former - previously part of an ASM disk group, currently not.
Foreign - contains non-ASM data (for example a file system).
Former and Foreign devices must be made Candidate before they can be used for disk groups.
Making a raw device a candidate again (the method used in these notes):
#mke2fs -j /dev/sdb10

Startup procedure:
1. Start the ASM instance.
2. Mount the disk groups.
3. Start up the database.
4. Shutdown is done in the reverse order.
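Once the ASM and database instances are registered with CRS (done later in these notes), the same order can be driven with srvctl - a rough sketch in the pre-11.2 syntax, using the node and database names that appear later (rac1, DBRAC):
$srvctl start asm -n rac1
$srvctl start database -d DBRAC
$srvctl stop database -d DBRAC
$srvctl stop asm -n rac1
The commands are listed in the same order as the manual procedure above.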

Oracle Software Installation:
#crsctl check crs
#crs_stat -t
Repeat these checks on node2.
$cd database
$./runInstaller
Click Next through the screens, click "Select All" to pick both nodes, choose "Install software only", then Install. On node2:
$cd $ORACLE_HOME
$watch ls      (to watch the software being copied over to the second node)

Pfile management: a PFILE is static; the SPFILE (available from 9i onwards) is dynamic. When setting up RAC we work with four files: a global pfile, a global spfile, and a local pfile on each node (local pfile1, local pfile2).

Global pfile  - $ORACLE_HOME/dbs
Global spfile - /dev/raw/raw7 (shared storage)
Node1 local pfile1 - $O_H/dbs
Node2 local pfile2 - $O_H/dbs

SPFILE: /home/oracle/ASM/spfile+ASM.ora -> spfile (on the raw device)
Note:
1. Since all the instances on all the nodes must read the same parameter file, we need an spfile that is shared.
2. To create the global spfile we first need a global pfile; this global pfile can be created in the dbs directory of any one node.
3. Since the global spfile must be shared, it has to be kept on a raw device of the shared storage. A file cannot be created directly on a raw device, so we use a symbolic link: if a link is created from a source file to a raw device, the link is created but no existing data is copied; whatever is written to the source file after the link is created ends up on the raw device. So a link for the (not yet existing) global spfile is created to the raw device in a convenient directory.
Ex:
$cd /home/oracle/ASM
$ln -s /dev/raw/raw7 spfile+ASM.ora
Now the physical global spfile is created through the link, from the global pfile:
Ex: sql>create spfile='/home/oracle/ASM/spfile+ASM.ora' from pfile;
To start the instances on the nodes we still need local pfiles; each local pfile contains only two parameters: the instance number and the spfile location.
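A quick sanity check after starting an instance this way (a small sketch):
sql>show parameter spfile
The value should point to /home/oracle/ASM/spfile+ASM.ora (the link to the raw device), confirming the instance is really using the shared spfile rather than a purely local file.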

RDBMS Instance:

Global pfile  - $O_H/dbs
Global spfile - on the normal-redundancy disk group (nordg)
Node1 local pfile1 - instance_number=1
Node2 local pfile2 - instance_number=2

Whenever a parameter value in the spfile is modified, it becomes effective for all the instances accessing that file; to make a value effective for one specific instance only, the SID option is used along with SCOPE (this is the spfile equivalent of putting the setting in that instance's local pfile).

RAC specific parameters:
cluster_database=true or false
cluster_database_instances=? (value=20; note: 10g=100, 11g=100+)

Redo log file management: the latest transactional information is written to the redo log files.
Note: if there were only one redo log thread for multiple instances, all instances would write their transactions to the same redo logs. If one instance crashed, the others would keep using the redo logs, doing log switches and generating archive logs; when the crashed instance started up again it would need instance (crash) recovery, which would be impossible because its redo had already been overwritten by the other instances. Hence each instance gets its own redo thread. GRD recovery is done by LMON.
Note: whenever an instance is added to the database, a redo log thread and an undo tablespace must be created for that instance.

Creating the ASM instance and disk groups:
#/crs/oracle/bin/crsctl check crs
#/home/oracle/bin/crs_stat -t
$mkdir ASM
$cd $ORACLE_HOME/dbs
$vi init+ASM.ora
cluster_database=true

instance_type=asm
asm_diskstring='/dev/sdb*'
diagnostic_dest=/home/oracle/ASM
large_pool_size=12m
remote_login_passwordfile=shared
+ASM1.instance_number=1
+ASM2.instance_number=2
:wq
$cd /home/oracle/ASM
$ln -s /dev/raw/raw7 spfile+ASM.ora
$sqlplus / as sysasm      (sysasm is the new ASM administration privilege in 11g)
>create spfile='/home/oracle/ASM/spfile+ASM.ora' from pfile;

Creating the local ASM instances:
$export ORACLE_SID=+ASM1
$vi init+ASM1.ora
instance_number=1
spfile=/home/oracle/ASM/spfile+ASM.ora
:wq
$sqlplus / as sysasm
>startup nomount;
>select name,path,header_status from v$asm_disk;
#mke2fs -j /dev/sdbX    # for converting member/foreign devices to candidate; X=8,9,10,11,12,13
>create diskgroup extdg external redundancy disk '/dev/sdb8','/dev/sdb9';
>create diskgroup nordg normal redundancy failgroup f1 disk '/dev/sdb10','/dev/sdb11' failgroup f2 disk '/dev/sdb12','/dev/sdb13';
>select name,path,header_status from v$asm_disk;
>alter diskgroup extdg mount;
(This returns an error such as "diskgroup already mounted", because CREATE DISKGROUP mounts the disk group automatically.)
v$instance shows the local instance, gv$instance shows all instances (g = global).

On node2 - adding the +ASM2 instance:
$mkdir ASM
$cd ASM
$ln -s /dev/raw/raw7 spfile+ASM.ora
$ls -l
$export ORACLE_SID=+ASM2
$cd $ORACLE_HOME/dbs
$vi init+ASM2.ora
instance_number=2
spfile=/home/oracle/ASM/spfile+ASM.ora

$sqlplus / as sysasm
>startup nomount;
(The disk groups only have to be mounted manually the first time; afterwards this is not needed.)
>alter diskgroup nordg mount;
>alter diskgroup extdg mount;
>select instance_number,instance_name,status from v$instance;
>select instance_number,instance_name,status from gv$instance;
>exit
$asmcmd
>ls
>exit

Creating RDBMS instances & database:
On node1:
$export ORACLE_SID=DBRAC
$cd $ORACLE_HOME/dbs
$vi initDBRAC.ora
cluster_database=false
cluster_database_instances=5
compatible=11.1.0
control_files='+nordg/dbrac/control.ctl'
db_name=DBRAC
db_domain=rp.com
db_files=50
global_names=true
job_queue_processes=3
log_checkpoint_interval=10000
undo_management=auto
shared_pool_size=120m
open_cursors=50
processes=50
dbrac1.instance_number=1
dbrac2.instance_number=2
dbrac1.thread=1
dbrac2.thread=2
dbrac1.instance_name=dbrac1
dbrac2.instance_name=dbrac2
dbrac1.undo_tablespace=undo1
dbrac2.undo_tablespace=undo2
diagnostic_dest=/home/oracle/DBRAC
#log_archive_dest='+disk1/dbrac/ARCH'
remote_login_passwordfile=exclusive

:wq
$cd
$mkdir DBRAC
$sqlplus / as sysdba
>create spfile='+nordg/spfileDBRAC.ora' from pfile;
>exit
$export ORACLE_SID=dbrac1
$cd $ORACLE_HOME/dbs
$vi initdbrac1.ora
instance_number=1
spfile='+nordg/spfileDBRAC.ora'
:wq
$sqlplus / as sysdba
>startup nomount;
>exit
$vi cr8db.sql
create database DBRAC
maxinstances 5
datafile '+nordg/dbrac/system01.dbf' size 200m autoextend on
sysaux datafile '+nordg/dbrac/sysaux.dbf' size 150m autoextend on
undo tablespace undo1 datafile '+nordg/dbrac/undo01.dbf' size 50m
default temporary tablespace temp tempfile '+nordg/dbrac/temp01.dbf' size 50m
default tablespace userdata datafile '+nordg/dbrac/userdata.dbf' size 100m
logfile group 1 '+extdg/dbrac/thread1_redo1a.log' size 4m,
        group 2 '+extdg/dbrac/thread1_redo2a.log' size 4m
controlfile reuse;
:wq

$vi run.sql
@$ORACLE_HOME/rdbms/admin/catalog.sql
@$ORACLE_HOME/rdbms/admin/catproc.sql
@$ORACLE_HOME/rdbms/admin/catclustdb.sql
conn system/manager
@?/sqlplus/admin/pupbld.sql
:wq
$!sql
>@cr8db.sql
>@run.sql
>select count(*) from tab;
$export ORACLE_SID=+ASM1
$asmcmd

>cd extdg
>mkdir archive
>exit
$export ORACLE_SID=dbrac1
$sqlplus / as sysdba
>archive log list
>alter system set log_archive_dest_1='LOCATION=+extdg/archive';
>alter system set db_recovery_file_dest_size=2G;
>alter system set db_recovery_file_dest='+extdg';
>shut immediate;
>startup mount;
>alter database archivelog;
>alter database flashback on;
>alter database open;
>alter database add logfile thread 2
   group 3 '+extdg/redo3.log' size 4m,
   group 4 '+extdg/redo4.log' size 4m;
>alter database enable public thread 2;
(The new thread is the combination of the two groups, 3 and 4.)
>create undo tablespace undo2 datafile '+nordg/dbrac/undo2.dbf' size 50m;
>show parameter cluster
>alter system set cluster_database=true scope=spfile;
>shut immediate;
>startup
>select instance_name,instance_number,status from gv$instance;

On node2 - adding the RDBMS instance:
$export ORACLE_SID=dbrac2
$vi $ORACLE_HOME/dbs/initdbrac2.ora
instance_number=2
spfile='+nordg/spfileDBRAC.ora'
:wq
$sqlplus / as sysdba
>exit
$mkdir DBRAC
$!sql
>startup
>select instance_name, instance_number, status from gv$instance;

Shutdown process:
1. Shut down both database instances (node1 and node2).
2. Shut down both +ASM instances.

Adding a node (for load balancing):
As the root user:
1. Configure networking.
2. Update /etc/hosts.
3. Configure shared storage.

4. Map the raw devices.
5. Create the user, groups and permissions.
6. Install the required O/S RPMs.
7. Install the cluster RPM.
8. Update the kernel parameters.

As the oracle user:
1. Update the bash profile.
2. Configure ssh.
3. Run the cluvfy checks.
4. Add the cluster software from an existing node.
5. Add the oracle software from an existing node.
6. Add the services (listener, ASM instance, RDBMS instance).
7. Register the services in the OCR.

Cluvfy check:
$cd /crs/home/
$./runcluvfy.sh comp nodecon -n rac1,rac2,rac9
$./runcluvfy.sh comp ssa -n rac1,rac2,rac9
$./runcluvfy.sh stage -pre crsinst -n rac1,rac2,rac9

Adding the cluster software:
#xhost +
$cd /crs/oracle/oui/bin
$./addnode.sh
Click Next, give the public hostname of the new node (rac9), then stop the VIP on the new node:
#ifconfig eth1:0 down
Click Next, then Install, run the root scripts in the indicated sequence, OK, Finish.
#/crs/oracle/bin/crsctl check crs
#/crs/oracle/bin/crs_stat -t

Adding the oracle software from any of the existing nodes:
$cd $ORACLE_HOME/oui/bin
$./addnode.sh
Next, Next, Install.

Deleting a node:
1. Stop the services (see the sketch below).
2. Remove the services from CRS.
3. Remove the node apps from CRS.
4. Update the $O_H and $C_H inventories.
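A rough sketch of steps 1-3 with srvctl, assuming the node being removed is rac9 and that it runs a hypothetical instance dbrac3 (the notes above only create instances 1 and 2, so the name is illustrative):
$srvctl stop instance -d DBRAC -i dbrac3
$srvctl remove instance -d DBRAC -i dbrac3
$srvctl stop asm -n rac9
$srvctl remove asm -n rac9
$srvctl stop nodeapps -n rac9
$srvctl remove nodeapps -n rac9
Step 4 then updates the inventories of $O_H and $C_H on the remaining nodes.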
