OS ready, Grid Infrastructure ready, patched to the latest release. Time to start with the “real” things…
Thanks to the virtualization technology (VirtualBox, in this case), it is easy to add storage to your machine. If it is running, shut down your Virtual Machine and add two disks to it.
From your workstation, execute:
vboxmanage storagectl oracleplayground --name "SCSI Controller" --add scsi --controller LSILogic
vboxmanage createhd --filename ~/VirtualBox\ VMs/oracleplayground/oracleplayground_asm1.vdi --size=10240 --variant Fixed
vboxmanage createhd --filename ~/VirtualBox\ VMs/oracleplayground/oracleplayground_asm2.vdi --size=10240 --variant Fixed
vboxmanage storageattach oracleplayground --storagectl "SCSI Controller" --port 1 --device 0 --type hdd --medium ~/VirtualBox\ VMs/oracleplayground/oracleplayground_asm1.vdi
vboxmanage storageattach oracleplayground --storagectl "SCSI Controller" --port 2 --device 0 --type hdd --medium ~/VirtualBox\ VMs/oracleplayground/oracleplayground_asm2.vdi
What happened: we created a new SCSI controller for our Virtual Machine, created two 10 GB hard disks and attached them to the controller.
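If you want to double check before booting the machine, you can ask VirtualBox for the VM configuration. The output format differs a bit between VirtualBox versions, so treat the grep pattern below as a rough filter:

vboxmanage showvminfo oracleplayground | grep -i -E "scsi|asm"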
Start up your Virtual Machine and check the devices:
su -
ls -la /dev/sd*
brw-r----- 1 root disk 8,  0 Mar 23 13:02 /dev/sda
brw-r----- 1 root disk 8, 16 Mar 23 13:02 /dev/sdb
These are our newly created disks. What we will need for ASM is one partition on each disk, so let's create the partitions:
su -
fdisk /dev/sda
fdisk /dev/sdb
The keystrokes for creating the partitions are (in this sequence): p (print the partition table), n (new partition), p (primary), 1 (partition number), Enter (accept the default first cylinder), Enter (accept the default last cylinder), w (write the table and quit).
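If you prefer not to type the keystrokes interactively, the same sequence can be fed to fdisk from a here-document. This is only a sketch; it assumes the two disks are still empty and that your fdisk version accepts the defaults on blank lines:

su -
# feed the keystrokes (n, p, 1, default, default, w) to fdisk for both disks
for DISK in /dev/sda /dev/sdb; do
  fdisk ${DISK} <<FDISK
n
p
1


w
FDISK
done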
If we check the devices again, we can now see the newly created partitions:
ls -la /dev/sd*
brw-r----- 1 root disk 8,  0 Mar 23 13:06 /dev/sda
brw-r----- 1 root disk 8,  1 Mar 23 13:06 /dev/sda1
brw-r----- 1 root disk 8, 16 Mar 23 13:09 /dev/sdb
brw-r----- 1 root disk 8, 17 Mar 23 13:09 /dev/sdb1
To be usable by ASM, the disks must belong to the grid user. We could change the ownership and permissions now with chown/chmod, but because we want this to be persistent across reboots, we create a udev rule for it:
su -
echo "KERNEL==\"sd[a-b][1-9]*\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" > /etc/udev/rules.d/61-asm.rules
reboot
Once the Virtual Machine has rebooted, check the devices again and you will see that the owner, group and permissions have changed:
su -
ls -la /dev/sd*
brw-r----- 1 root disk     8,  0 Mar 23 13:17 /dev/sda
brw-rw---- 1 grid asmadmin 8,  1 Mar 23 13:17 /dev/sda1
brw-r----- 1 root disk     8, 16 Mar 23 13:17 /dev/sdb
brw-rw---- 1 grid asmadmin 8, 17 Mar 23 13:17 /dev/sdb1
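As a side note: instead of rebooting, you can usually ask udev to re-apply the rules. Whether the commands below are available depends on your udev version (older releases ship start_udev instead), so the reboot above remains the safe option:

su -
# assumption: a udev release that ships udevadm
udevadm control --reload-rules
udevadm trigger --subsystem-match=block
ls -la /dev/sd*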
Now we are prepared to create our ASM instance. Because we are lazy, let's define two variables for the init.ora and the spfile:
su - grid
INITORA=$ORACLE_HOME/dbs/init+ASM.ora
SPFILE=$ORACLE_HOME/dbs/spfile+ASM.ora
Create a minimal set of parameters:
echo "instance_type='asm' memory_target=128m memory_max_target=128m " > ${INITORA}
… and start the cssd resource (the Cluster Synchronization Services daemon, which is required by ASM):
crsctl start resource ora.cssd
You should see the following output:
CRS-2672: Attempting to start 'ora.cssd' on 'oracleplayground'
CRS-2672: Attempting to start 'ora.diskmon' on 'oracleplayground'
CRS-2676: Start of 'ora.diskmon' on 'oracleplayground' succeeded
CRS-2676: Start of 'ora.cssd' on 'oracleplayground' succeeded
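If you want to double check that the resource really came up, you can query its state directly and look for TARGET=ONLINE and STATE=ONLINE in the output:

crsctl status resource ora.cssd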
Let's check whether we can start up and shut down our ASM instance:
sqlplus /nolog <<EOF
connect / as sysasm
startup force nomount pfile=${INITORA};
shutdown abort;
exit;
EOF
The output should look like:
SQL*Plus: Release 11.2.0.3.0 Production on Fri Mar 23 13:28:29 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

SQL> Connected to an idle instance.
SQL> ASM instance started

Total System Global Area  267227136 bytes
Fixed Size                  2227504 bytes
Variable Size             239833808 bytes
ASM Cache                  25165824 bytes
SQL> ASM instance shutdown
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option
OK, so we are able to start up our ASM instance. As we want the instance to be managed by the cluster stack, we need to create and register some resources.
Start by adding the listener for the ASM instance:
srvctl add listener -l listener_ASM -o $ORACLE_HOME -p 1526
… continue with adding ASM itself:
srvctl add asm -l listener_ASM -p ${INITORA}
… startup listener and ASM:
srvctl start listener -l listener_ASM
srvctl start asm
… and check the resources (we will use the crsstat alias again; you can also type "crsctl status resource -t"):
crsstat
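In case you did not follow the earlier posts where the alias was set up, crsstat is assumed to be nothing more than a shortcut in the grid user's profile, roughly:

echo "alias crsstat='crsctl status resource -t'" >> ~/.bash_profile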
If you followed the steps, this is what you will see:
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER_ASM.lsnr
               ONLINE  ONLINE       oracleplayground
ora.asm
               ONLINE  ONLINE       oracleplayground         Started
ora.ons
               OFFLINE OFFLINE      oracleplayground
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       oracleplayground
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        ONLINE  ONLINE       oracleplayground
Two new resources are showing up: the listener and the ASM instance.
Let's create an spfile for the ASM instance:
srvctl stop asm
sqlplus /nolog <<EOF
connect / as sysasm
create spfile='${SPFILE}' from pfile='${INITORA}';
startup force;
show parameter pfile
shutdown abort;
exit;
EOF
You can ignore the "ORA-15110: no diskgroups mounted" message in the output. We will create disk groups shortly.
As we registered ASM in the cluster stack with a pfile, we need to modify the registration to point to the spfile:
srvctl modify asm -p ${SPFILE}
srvctl start asm
srvctl config asm
crsctl status resource -t
rm -f ${INITORA}
Fine, ASM is now managed by the cluster stack. We no longer need to worry about shutting down or starting up the instance when the server reboots, and we do not even need to write startup or shutdown scripts for the ASM instance.
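If you want to convince yourself that the cluster stack will really bring ASM up again on its own, you can inspect the resource profile; AUTO_START is the attribute to look for (attribute names and defaults may vary slightly between releases):

crsctl status resource ora.asm -p | grep -i auto_start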
ASM without any disks and disk groups does not make much sense. Let's create some:
sqlplus /nolog <<EOF
connect / as sysasm
alter system set asm_diskstring='/dev/sd*' scope=both;
create diskgroup DB112_DATA_DG external redundancy
  disk '/dev/sda1' name DB112_DATA_DG_VOL01
  attribute 'au_size' = '4M'
          , 'compatible.asm' = '11.2'
          , 'compatible.rdbms' = '11.2';
create diskgroup DB112_ARCH_DG external redundancy
  disk '/dev/sdb1' name DB112_ARCH_DG_VOL01
  attribute 'au_size' = '4M'
          , 'compatible.asm' = '11.2'
          , 'compatible.rdbms' = '11.2';
exit;
EOF
Work done. You can check the cluster stack for the status of the resources:
crsstat
Notice the disk groups showing up:
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DB112_ARCH_DG.dg
               ONLINE  ONLINE       oracleplayground
ora.DB112_DATA_DG.dg
               ONLINE  ONLINE       oracleplayground
ora.LISTENER_ASM.lsnr
               ONLINE  ONLINE       oracleplayground
ora.asm
               ONLINE  ONLINE       oracleplayground         Started
ora.ons
               OFFLINE OFFLINE      oracleplayground
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       oracleplayground
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        ONLINE  ONLINE       oracleplayground
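You can also look at the disk groups from inside the ASM instance itself; a quick query against v$asm_diskgroup shows their state, redundancy type and free space:

sqlplus /nolog <<EOF
connect / as sysasm
set lines 120
column name format a20
select name, state, type, total_mb, free_mb from v\$asm_diskgroup;
exit;
EOF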
Although it is optional, let's create some directories for holding the audit logs and diagnostic files. I personally prefer to have them outside the ORACLE_HOME (note that sqlpa in the listing below is the "sqlplus / as sysasm" alias we will add to the profile right after this step):
su -
mkdir -p /oradata/+ASM
chown grid:oinstall /oradata/+ASM
su - grid
mkdir /oradata/+ASM/admin
mkdir /oradata/+ASM/admin/adump
sqlpa
alter system set audit_file_dest='/oradata/+ASM/admin/adump' scope=spfile;
alter system set diagnostic_dest='/oradata/+ASM/admin' scope=spfile;
exit;
srvctl stop asm -f
srvctl start asm
sqlpa
show parameter audit_file_dest
show parameter diagnostic_dest
exit
echo "ADR_BASE_LISTENER_ASM=/oradata/+ASM/admin" >> /opt/oracle/product/crs/11.2.0.3/network/admin/listener.ora
srvctl stop listener -l listener_asm
srvctl start listener -l listener_asm
What happened: we told the listener and the ASM instance to place their traces and logs at the location we want.
Finally, let’s create an alias for the sqlplus connections:
echo "alias sqlpa='sqlplus '\''/ as sysasm'\'''" >> ~/.bash_profile
If you log in again as the grid user, you can now connect the easy way:
sqlpa
As usual, if you like, create a snapshot (on your workstation) to save your work:
vboxmanage snapshot oracleplayground take "asm running"
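Should you ever need to go back to this state, the snapshot can be restored; the Virtual Machine has to be powered off first:

vboxmanage controlvm oracleplayground poweroff
vboxmanage snapshot oracleplayground restore "asm running"
vboxmanage startvm oracleplayground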
For your convenience, here’s the complete script:
#!/bin/bash
if [ $(id -un) != "grid" ]; then
   echo "you must run this as the grid user"
   exit 1
fi
INITORA=$ORACLE_HOME/dbs/init+ASM.ora
SPFILE=$ORACLE_HOME/dbs/spfile+ASM.ora
echo "instance_type='asm'
memory_target=128m
memory_max_target=128m
" > ${INITORA}
crsctl start resource ora.cssd
sqlplus /nolog <<EOF
connect / as sysasm
startup force nomount pfile=${INITORA};
shutdown abort;
exit;
EOF
srvctl add listener -l listener_ASM -o $ORACLE_HOME -p 1526
srvctl add asm -l listener_ASM -p ${INITORA}
srvctl start listener -l listener_ASM
srvctl start asm
srvctl stop asm -f
crsctl status resource -t
sqlplus /nolog <<EOF
connect / as sysasm
create spfile='${SPFILE}' from pfile='${INITORA}';
startup force;
show parameter pfile
shutdown abort;
exit;
EOF
srvctl modify asm -p ${SPFILE}
srvctl start asm
srvctl config asm
crsctl status resource -t
rm -f ${INITORA}
sqlplus /nolog <<EOF
connect / as sysasm
alter system set asm_diskstring='/dev/sd*' scope=both;
create diskgroup DB112_DATA_DG external redundancy
  disk '/dev/sda1' name DB112_DATA_DG_VOL01
  attribute 'au_size' = '4M'
          , 'compatible.asm' = '11.2'
          , 'compatible.rdbms' = '11.2';
create diskgroup DB112_ARCH_DG external redundancy
  disk '/dev/sdb1' name DB112_ARCH_DG_VOL01
  attribute 'au_size' = '4M'
          , 'compatible.asm' = '11.2'
          , 'compatible.rdbms' = '11.2';
exit;
EOF