head over to the blog of dbi services to read the full article:
No journal messages available before the last reboot of your CentOS/RHEL system?

head over to the blog of dbi services to read the full article:
Upgrading the Grid Infrastructure from 12.1.0.1 to 12.1.0.2 on the command line
Let's say you have a diskgroup called “dummy”:
SYS@+ASM> select GROUP_NUMBER,NAME,STATE from v$asm_diskgroup where name = 'DUMMY';

GROUP_NUMBER NAME                           STATE
------------ ------------------------------ -----------
           3 DUMMY                          MOUNTED
Currently the diskgroup contains one device:
SYS@+ASM> select name,path,header_status from v$asm_disk where group_number = 3;

NAME       PATH                 HEADER_STATU
---------- -------------------- ------------
DUMMY_0000 /dev/sdg1            MEMBER
Some time in the future the diskgroup runs out of space and you request another device from the storage or OS team. Once the device is ready you check if you can see it in ASM:
SYS@+ASM> select name,path from v$asm_disk where header_status = 'CANDIDATE';

NAME       PATH
---------- --------------------
           /dev/sdh1
           /dev/sdh
Perfect, a new device is available to extend the diskgroup:
SYS@+ASM> alter diskgroup dummy add disk '/dev/sdh1';

Diskgroup altered.
Time goes on and the diskgroup runs out of space again. Another DBA checks if there are devices available to add:
SYS@+ASM> select name,path from v$asm_disk where header_status = 'CANDIDATE';

NAME       PATH
---------- --------------------
           /dev/sdh
Cool, no need to request another device:
SYS@+ASM> alter diskgroup dummy add disk '/dev/sdh';

Diskgroup altered.
And boom:
Errors in file .../admin/diag/asm/+asm/+ASM/trace/+ASM_arb0_2432.trc:
ORA-15130: diskgroup "" is being dismounted
ORA-15335: ASM metadata corruption detected in disk group 'DUMMY'
ORA-15130: diskgroup "DUMMY" is being dismounted
ORA-15066: offlining disk "DUMMY_0002" in group "DUMMY" may result in a data loss
ORA-15196: invalid ASM block header [kfc.c:29297] [endian_kfbh] [2147483650] [10] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:29297] [endian_kfbh] [2147483650] [10] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:29297] [endian_kfbh] [2147483650] [10] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:29297] [endian_kfbh] [2147483650] [10] [0 != 1]
What happened? Two different people did not look closely at what they were doing.
Mistake one: the system in question used udev rules for device persistence. When the OS people adjusted the udev rules for the new device they did not exclude the whole disk:
KERNEL=="sdh*", OWNER="asmadmin", GROUP="asmdba", MODE="0660"
When this rule is applied, ASM will see all the partitions of the disk as well as the disk itself:

/dev/sdh1
/dev/sdh
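A rule that matches only the partitions would have avoided this. A minimal sketch (the [0-9]* pattern requires at least one digit after "sdh", so the whole disk /dev/sdh no longer matches):

# match the partitions sdh1, sdh2, ... but not the whole disk sdh
KERNEL=="sdh[0-9]*", OWNER="asmadmin", GROUP="asmdba", MODE="0660"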
Mistake two: the DBA who added the last disk did not recognize that the candidate:
SYS@+ASM> select name,path from v$asm_disk where header_status = 'CANDIDATE';

NAME       PATH
---------- --------------------
           /dev/sdh
… actually is the whole disk. So the whole disk was added to ASM after the partition (which spanned the whole disk) had already been added.
So, better look twice …
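One way of looking twice, for example: compare the size the operating system reports for each candidate. A whole disk and a partition spanning it report almost the same number of megabytes, which is a strong hint that both point to the same storage:

SYS@+ASM> select path,os_mb,header_status from v$asm_disk order by path;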
This is the last (and very short) post of this series:
Create your own Oracle Staging Environment, Part 1: The Staging Server
Create your own Oracle Staging Environment, Part 2: Kickstart File(s) and Profiles
Create your own Oracle Staging Environment, Part 3: Setup the lab environment
Create your own Oracle Staging Environment, Part 4: Staging the GI
At the end of the last post the GI was up and running. From this point onwards it would be easy to use dbca or scripts to set up a RAC database, but we can automate even this with an updated script.
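The core of the automation is a silent dbca call once the GI is up. A minimal sketch of that step, assuming an already staged database home (the home path and passwords are illustrative; only the database name racdb and the DATA/FRA diskgroups appear in the resource listing below, and the exact dbca parameters vary by version):

# assumed database home; adjust to your staging layout
export ORACLE_HOME=/opt/oracle/product/db/11.2.0.4
# create an admin-managed RAC database on both nodes in silent mode
${ORACLE_HOME}/bin/dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName racdb -adminManaged \
  -nodelist mylabp1vm1,mylabp1vm2 \
  -sysPassword manager -systemPassword manager -asmsnmpPassword manager \
  -storageType ASM -diskGroupName DATA -recoveryGroupName FRA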
Save the updated script under /var/www/html/finish_gi_setup.sh on the cobbler vm and fire up the VMs. The result is:
crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS_DG.dg
ONLINE ONLINE mylabp1vm1
ONLINE ONLINE mylabp1vm2
ora.DATA.dg
ONLINE ONLINE mylabp1vm1
ONLINE ONLINE mylabp1vm2
ora.FRA.dg
ONLINE ONLINE mylabp1vm1
ONLINE ONLINE mylabp1vm2
ora.LISTENER.lsnr
ONLINE ONLINE mylabp1vm1
ONLINE ONLINE mylabp1vm2
ora.LISTENER_ASM.lsnr
ONLINE ONLINE mylabp1vm1
ONLINE ONLINE mylabp1vm2
ora.asm
ONLINE ONLINE mylabp1vm1 Started
ONLINE ONLINE mylabp1vm2 Started
ora.gsd
OFFLINE OFFLINE mylabp1vm1
OFFLINE OFFLINE mylabp1vm2
ora.net1.network
ONLINE ONLINE mylabp1vm1
ONLINE ONLINE mylabp1vm2
ora.ons
ONLINE ONLINE mylabp1vm1
ONLINE ONLINE mylabp1vm2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE mylabp1vm2
ora.cvu
1 ONLINE ONLINE mylabp1vm2
ora.mylabp1vm1.vip
1 ONLINE ONLINE mylabp1vm1
ora.mylabp1vm2.vip
1 ONLINE ONLINE mylabp1vm2
ora.oc4j
1 ONLINE ONLINE mylabp1vm2
ora.racdb.db
1 ONLINE ONLINE mylabp1vm2 Open
2 ONLINE ONLINE mylabp1vm1 Open
ora.scan1.vip
1 ONLINE ONLINE mylabp1vm2
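Both instances of racdb are open. As a quick cross-check you can also ask srvctl, assuming ORACLE_HOME points to the database home:

$ORACLE_HOME/bin/srvctl status database -d racdb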
Let's continue this series of posts:
Create your own Oracle Staging Environment, Part 1: The Staging Server
Create your own Oracle Staging Environment, Part 2: Kickstart File(s) and Profiles
Create your own Oracle Staging Environment, Part 3: Setup the lab environment
In Part 3 we had two (or more) VMs up and running with the following specs:
This might be sufficient for starting a lab if you want to demonstrate the installation of a grid infrastructure, but we can go even further and automate the setup of the GI. What else would be required?
Here are the updated scripts (as PDFs, to reduce the length of this post):
Place these on the cobbler server in the same locations as in the previous posts, then:
./create_gi_rac_lab.sh mylab 2
Check the generated VM setup scripts:
ls -la /var/www/html/mylab/setup*.sh
-rw-r--r--. 1 root root 3821 Jan 11 09:32 /var/www/html/mylab/setup_p_1.sh
-rw-r--r--. 1 root root 3821 Jan 11 09:32 /var/www/html/mylab/setup_p_2.sh
… and fire up the VMs on your workstation.
Give each VM a few minutes before starting the next one. The reason is that the last VM will do the configuration of the GI, and therefore the other VMs should already be up and running when the configuration starts. Once the last one has completed you have the GI up and running on all the nodes:
[root@mylabp1vm1 ~]# /opt/oracle/product/crs/11.2.0.4/bin/crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS_DG.dg
ONLINE ONLINE mylabp1vm1
ONLINE ONLINE mylabp1vm2
ora.asm
ONLINE ONLINE mylabp1vm1 Started
ONLINE ONLINE mylabp1vm2 Started
ora.gsd
OFFLINE OFFLINE mylabp1vm1
OFFLINE OFFLINE mylabp1vm2
ora.net1.network
ONLINE ONLINE mylabp1vm1
ONLINE ONLINE mylabp1vm2
ora.ons
ONLINE ONLINE mylabp1vm1
ONLINE ONLINE mylabp1vm2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE mylabp1vm2
ora.cvu
1 ONLINE ONLINE mylabp1vm1
ora.mylabp1vm1.vip
1 ONLINE ONLINE mylabp1vm1
ora.mylabp1vm2.vip
1 ONLINE ONLINE mylabp1vm2
ora.oc4j
1 ONLINE ONLINE mylabp1vm1
ora.scan1.vip
1 ONLINE ONLINE mylabp1vm2
There is still much room for improvement, but for now it at least works … it takes around 20 minutes on my workstation until both nodes are up and configured.
Recently we faced the above issue in the ASM alert log on a Linux box. It turned out that this was always reported when a select against v$asm_disk or v$asm_diskgroup was executed. Although this seemed somewhat nasty, ASM was always up and running, no issues at all.
Stracing a sqlplus session which executed one of the selects gave the right hints. The issue was this:
SYS@+ASM> show parameter string

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /dev/*
This forces ASM to scan all devices in /dev to which it has access for possible ASM disks (including devices which are not disks at all). If one of these devices has a size which is not a multiple of the ASM sector size, this happens. So the fix is easy (/dev/sdc1 and /dev/sdd1 are the ASM devices in this case):
alter system set asm_diskstring='/dev/sd*' scope=both;
or
alter system set asm_diskstring='/dev/sd*1' scope=both;
or
alter system set asm_diskstring='/dev/sdc1','/dev/sdd1' scope=both;
Setting the asm_diskstring as close to your ASM devices as you can is a best practice anyway and can reduce discovery and mount times.
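After narrowing the diskstring it is worth verifying that all disks are still discovered, for example:

SYS@+ASM> select path,header_status from v$asm_disk;

Only the real ASM devices (/dev/sdc1 and /dev/sdd1 in this case) should show up, and the errors in the alert log should be gone.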