Archives For linux

head over to the blog of dbi services to read the full article:

Linux quick tip – What is the local time in Kolkata?

head over to the blog of dbi services to read the full article:

Upgrading the Grid Infrastructure from 12.1.0.1 to 12.1.0.2 on the command line

head over to the blog of dbi services to read the full article:

Linux Magic System Request Key Hacks

head over to the blog of dbi services to read the full article:

tmux – an alternative to screen

Let's say you have a diskgroup called “dummy”:

SYS@+ASM> select GROUP_NUMBER,NAME,STATE from v$asm_diskgroup where name = 'DUMMY';

GROUP_NUMBER NAME                           STATE
------------ ------------------------------ -----------
           3 DUMMY                          MOUNTED
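
(For illustration, such a diskgroup could have been created with a statement like the following; external redundancy is an assumption:)

SYS@+ASM> create diskgroup dummy external redundancy disk '/dev/sdg1';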

Currently the diskgroup contains one device:

SYS@+ASM> select name,path,header_status from v$asm_disk where group_number = 3;

NAME       PATH                 HEADER_STATU
---------- -------------------- ------------
DUMMY_0000 /dev/sdg1            MEMBER

Some time in the future the diskgroup runs out of space and you request another device from the storage or OS team. Once the device is ready you check if you can see it in ASM:

SYS@+ASM> select name,path from v$asm_disk where header_status = 'CANDIDATE';

NAME       PATH
---------- --------------------
           /dev/sdh1
           /dev/sdh

Perfect, a new device is available to extend the diskgroup:

SYS@+ASM> alter diskgroup dummy add disk '/dev/sdh1';

Diskgroup altered.
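
Adding a disk triggers a rebalance of the diskgroup. Its progress can be monitored in v$asm_operation; once the query returns no rows the rebalance has finished:

SYS@+ASM> select operation,state,est_minutes from v$asm_operation;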

Time goes on and the diskgroup runs out of space again. Another DBA checks if there are devices available to add:

SYS@+ASM> select name,path from v$asm_disk where header_status = 'CANDIDATE';

NAME       PATH
---------- --------------------
           /dev/sdh

Cool, no need to request another device:

SYS@+ASM> alter diskgroup dummy add disk '/dev/sdh';

Diskgroup altered.

And boom:

Errors in file .../admin/diag/asm/+asm/+ASM/trace/+ASM_arb0_2432.trc:
ORA-15130: diskgroup "" is being dismounted
ORA-15335: ASM metadata corruption detected in disk group 'DUMMY'
ORA-15130: diskgroup "DUMMY" is being dismounted
ORA-15066: offlining disk "DUMMY_0002" in group "DUMMY" may result in a data loss
ORA-15196: invalid ASM block header [kfc.c:29297] [endian_kfbh] [2147483650] [10] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:29297] [endian_kfbh] [2147483650] [10] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:29297] [endian_kfbh] [2147483650] [10] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:29297] [endian_kfbh] [2147483650] [10] [0 != 1]

What happened? Two different people did not look closely enough at what they were doing.

Mistake one: the system in question used udev rules for device persistence. When the OS people adjusted the udev rules for the new device they did not exclude the whole disk:

KERNEL=="sdh*",  OWNER="asmadmin", GROUP="asmdba", MODE="0660"

When this rule is applied, ASM will see all the partitions of the disk as well as the disk itself:

/dev/sdh1
/dev/sdh
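
A rule that matches only the partitions would have avoided this; a sketch (the glob [0-9]* matches sdh1, sdh2, … but not sdh itself):

KERNEL=="sdh[0-9]*", OWNER="asmadmin", GROUP="asmdba", MODE="0660"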

Mistake two: the DBA who added the last disk did not recognize that the candidate:

SYS@+ASM> select name,path from v$asm_disk where header_status = 'CANDIDATE';

NAME       PATH
---------- --------------------
           /dev/sdh

… actually is the whole disk. So the whole disk was added to ASM after the partition (which spanned the whole disk) had already been added.
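
Before adding a candidate it is worth checking what the device actually is, for example with lsblk on the OS level, or with kfed (which ships with the Grid Infrastructure) against the device header:

lsblk -o NAME,TYPE,SIZE /dev/sdh
kfed read /dev/sdh1 | grep kfbh.type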

So, better look twice …

This is the last, very short post of this series:

Create your own Oracle Staging Environment, Part 1: The Staging Server
Create your own Oracle Staging Environment, Part 2: Kickstart File(s) and Profiles
Create your own Oracle Staging Environment, Part 3: Setup the lab environment
Create your own Oracle Staging Environment, Part 4: Staging the GI

At the end of the last post the GI was up and running. From this point onwards it would be easy to use dbca or scripts to set up a RAC database. But we can even automate this. Here is the updated script:

finish_gi_setup_rac.sh

Save this under /var/www/html/finish_gi_setup.sh on the Cobbler VM and fire up the VMs. The result is:

crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS_DG.dg
               ONLINE  ONLINE       mylabp1vm1                                   
               ONLINE  ONLINE       mylabp1vm2                                   
ora.DATA.dg
               ONLINE  ONLINE       mylabp1vm1                                   
               ONLINE  ONLINE       mylabp1vm2                                   
ora.FRA.dg
               ONLINE  ONLINE       mylabp1vm1                                   
               ONLINE  ONLINE       mylabp1vm2                                   
ora.LISTENER.lsnr
               ONLINE  ONLINE       mylabp1vm1                                   
               ONLINE  ONLINE       mylabp1vm2                                   
ora.LISTENER_ASM.lsnr
               ONLINE  ONLINE       mylabp1vm1                                   
               ONLINE  ONLINE       mylabp1vm2                                   
ora.asm
               ONLINE  ONLINE       mylabp1vm1               Started             
               ONLINE  ONLINE       mylabp1vm2               Started             
ora.gsd
               OFFLINE OFFLINE      mylabp1vm1                                   
               OFFLINE OFFLINE      mylabp1vm2                                   
ora.net1.network
               ONLINE  ONLINE       mylabp1vm1                                   
               ONLINE  ONLINE       mylabp1vm2                                   
ora.ons
               ONLINE  ONLINE       mylabp1vm1                                   
               ONLINE  ONLINE       mylabp1vm2                                   
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       mylabp1vm2                                   
ora.cvu
      1        ONLINE  ONLINE       mylabp1vm2                                   
ora.mylabp1vm1.vip
      1        ONLINE  ONLINE       mylabp1vm1                                   
ora.mylabp1vm2.vip
      1        ONLINE  ONLINE       mylabp1vm2                                   
ora.oc4j
      1        ONLINE  ONLINE       mylabp1vm2                                   
ora.racdb.db
      1        ONLINE  ONLINE       mylabp1vm2               Open                
      2        ONLINE  ONLINE       mylabp1vm1               Open                
ora.scan1.vip
      1        ONLINE  ONLINE       mylabp1vm2                         
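
The ora.racdb.db resource above is the database the script creates. At its core this presumably boils down to a silent dbca call; a minimal sketch under that assumption (template choice, passwords and memory settings are placeholders):

dbca -silent -createDatabase \
     -templateName General_Purpose.dbc \
     -gdbname racdb -sid racdb \
     -sysPassword manager -systemPassword manager \
     -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
     -nodelist mylabp1vm1,mylabp1vm2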

Let's continue this series of posts:

Create your own Oracle Staging Environment, Part 1: The Staging Server
Create your own Oracle Staging Environment, Part 2: Kickstart File(s) and Profiles
Create your own Oracle Staging Environment, Part 3: Setup the lab environment

In Part 3 we had two (or more) VMs up and running with the following specs:

  • eth0 configured as the public network
  • eth1 configured as the private interconnect
  • three shared storage devices for the ASM disks
  • oracle users and groups
  • a sample source file deployed on all VMs

This might be sufficient for starting a lab if you want to demonstrate the installation of a Grid Infrastructure. But we can go even further and automate the setup of the GI itself. What else would be required?

  • the source file for the grid infrastructure (stored in /var/www/html/sources), in this case the 11.2.0.4 files downloaded from MOS
  • kernel parameters (sketched below)
  • partitions on the disks
  • limits for the oracle users
  • setting the IO scheduler (sketched below)
  • the cluster configuration file
  • the GI software installation
  • password-less ssh connectivity for the grid user
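
For reference, the kernel parameter and IO scheduler pieces typically look like this for 11.2 (the sysctl values are the minimums from the installation guide; the device names sdb, sdc and sdd are assumptions):

# append to /etc/sysctl.conf, then load with sysctl -p
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

# set the deadline scheduler for the ASM devices
for dev in sdb sdc sdd; do
  echo deadline > /sys/block/${dev}/queue/scheduler
done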

Here are the updated scripts (as PDFs, to reduce the length of this post):

Place these on the Cobbler server in the same locations as in the previous posts, then create the lab by passing the lab name and the number of VMs:

./create_gi_rac_lab.sh mylab 2

Check that the VM setup scripts have been generated:

ls -la /var/www/html/mylab/setup*.sh
-rw-r--r--. 1 root root 3821 Jan 11 09:32 /var/www/html/mylab/setup_p_1.sh
-rw-r--r--. 1 root root 3821 Jan 11 09:32 /var/www/html/mylab/setup_p_2.sh

… and fire up the VMs on your workstation.
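
If the VMs are managed with libvirt, starting them could look like this (virsh and the staggered delay are assumptions, the post does not mandate a hypervisor):

for vm in mylabp1vm1 mylabp1vm2; do
  virsh start ${vm}
  sleep 180
done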

Give each VM a few minutes before starting the next one: the last VM does the configuration of the GI, so the other VMs should be up and running when the configuration starts. Once the last one has completed you have the GI up and running on all the nodes:

[root@mylabp1vm1 ~]# /opt/oracle/product/crs/11.2.0.4/bin/crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS_DG.dg
               ONLINE  ONLINE       mylabp1vm1                                   
               ONLINE  ONLINE       mylabp1vm2                                   
ora.asm
               ONLINE  ONLINE       mylabp1vm1               Started             
               ONLINE  ONLINE       mylabp1vm2               Started             
ora.gsd
               OFFLINE OFFLINE      mylabp1vm1                                   
               OFFLINE OFFLINE      mylabp1vm2                                   
ora.net1.network
               ONLINE  ONLINE       mylabp1vm1                                   
               ONLINE  ONLINE       mylabp1vm2                                   
ora.ons
               ONLINE  ONLINE       mylabp1vm1                                   
               ONLINE  ONLINE       mylabp1vm2                                   
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       mylabp1vm2                                   
ora.cvu
      1        ONLINE  ONLINE       mylabp1vm1                                   
ora.mylabp1vm1.vip
      1        ONLINE  ONLINE       mylabp1vm1                                   
ora.mylabp1vm2.vip
      1        ONLINE  ONLINE       mylabp1vm2                                   
ora.oc4j
      1        ONLINE  ONLINE       mylabp1vm1                                   
ora.scan1.vip
      1        ONLINE  ONLINE       mylabp1vm2                                   

There is still much room for improvement, but for now it at least works … it takes around 20 minutes on my workstation until both nodes are up and configured.