
if you want to manage vips in the grid infrastructure which are not on the default network and you get this: “CRS-2534: Resource type ‘ora.cluster_vip_net2.type’ is not registered”, don’t panic, it is easy to fix. basically you need to create the “ora.cluster_vip_net2.type” resource type before adding the vip with appvipcfg:

./srvctl add network -k 2 -S x.x.x.0/255.255.255.0/igb0
./crsctl start resource ora.net2.network
./crsctl add type ora.cluster_vip_net2.type -basetype ora.cluster_vip.type
./appvipcfg create -network=2 -ip=x.x.x.x -vipname=myvip1 -user=root
./crsctl start resource myvip1 -n server1
./appvipcfg create -network=2 -ip=x.x.x.x -vipname=myvip2 -user=root
./crsctl start resource myvip2 -n server2
./crsctl stat res -t
./crsctl modify res 'myvip1' -attr "HOSTING_MEMBERS=server1 server2"
./crsctl modify res 'myvip2' -attr "HOSTING_MEMBERS=server1 server2"

not sure, but I think this is a bug as appvipcfg should manage this.
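
if you want to double-check that the custom type really got registered before adding the vips, crsctl can print it ( a quick sanity check, syntax as in 11.2 ):

./crsctl status type ora.cluster_vip_net2.type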

the above is valid for 11.2.0.3 on Solaris SPARC 64bit

back from holidays and the first ouch on the first day. we tried to set up a clustered grid infrastructure on a solaris 10 t4-2 machine. business as usual, one might think. but when running the rootcrs.pl setup script, asmca ( which is invoked from rootcrs.pl automatically ) reports:

...
[main] [ 2012-11-02 12:58:59.282 MET ] [OracleHome.getVersion:1023] Current version from sqlplus: 11.2.0.3.0
[main] [ 2012-11-02 12:58:59.282 MET ] [UsmcaLogger.logInfo:143] Role SYSASM
[main] [ 2012-11-02 12:58:59.282 MET ] [UsmcaLogger.logInfo:143] OS Auth true
[main] [ 2012-11-02 12:59:21.347 MET ] [SQLEngine.done:2189] Done called
[main] [ 2012-11-02 12:59:21.349 MET ] [USMInstance.configureLocalASM:3033] ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","SELECT DECODE(null,'','Total...","SQLA","tmp")
...

hm. there is a support note about this behaviour ( 1416083.1 ), but there is no way to pass asm parameters to rootcrs.pl or the crsconfig_params file. what we needed to do:
disable some CPUs ( so there are less than 64 available ), re-configure the cluster, adjust the asm memory parameters and then enable the CPUs again ( a sketch follows below ). this seems to be the only workaround at the moment ( 11.2.0.4 and 12.1 should contain a fix for this ).
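
for the record, taking cpus offline and online on solaris is done with psradm. a sketch only ( run as root; the processor ids depend on your machine, a t4-2 typically presents 128 virtual cpus numbered 0-127 ):

# list the virtual cpus and their state
psrinfo
# take every cpu with id >= 63 offline, so that less than 64 stay online
psrinfo | awk '$1 >= 63 { print $1 }' | xargs psradm -f
# ... re-run the grid configuration, adjust the asm memory parameters ...
# bring the offlined cpus back online
psrinfo | awk '$2 == "off-line" { print $1 }' | xargs psradm -n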

just noticed that there is a template script in the GI_HOME to enable RAC:

cat $GI_HOME/crs/config/relink_rac_on
#!/bin/sh
#
MAKE=/usr/bin/make
ECHO=/bin/echo
if [ $# -lt 1 ]; then
  $ECHO "Usage: relink_rac_on "
  exit 1
fi
ORACLE_HOME=$1
if [ $# -gt 1 ]; then
  MAKE=$2
else
  $ECHO "Make path has not been passed from command line."
fi
$ECHO "Checking for writable permission on $ORACLE_HOME/lib and $ORACLE_HOME/bin."
if [ -w $ORACLE_HOME/bin -a -w $ORACLE_HOME/lib ]; then
  $ECHO "running rac_on make target.."
  $MAKE -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk ORACLE_HOME=$ORACLE_HOME rac_on ioracle
  exit $?
else
  $ECHO "$ORACLE_HOME/lib and $ORACLE_HOME/bin directories are not writable, hence skipping relink operation."
fi
exit 0

this can be useful if you installed the ORACLE_HOME without enabling RAC and want to turn it on afterwards …
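
usage is straightforward; with the database home used in this series it would look something like this ( just an example path, and make sure nothing is running out of that home while relinking ):

sh $GI_HOME/crs/config/relink_rac_on /opt/oracle/product/base/11.2.0.3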

because various people come to this blog when they search for “acfs-9459: advm/acfs is not supported on this os version”, I want to give some information on how you may tweak this ( if you really want to test ACFS/ADVM on an OS that is _not_ supported; there really is no need for this, as oracle linux is free for testing ).

but before you really do this, keep in mind:

  • this is just for educational purposes
  • this is in no way supported by oracle
  • don’t do this on a production system
  • there is no guarantee that this works for your os

before executing the “roothas.pl” or “rootcrs.pl” scripts check the following file in your GI_HOME:

$GI_HOME/lib/osds_acfslib.pm

scroll down to where the various checks are performed. the lines should look similar to this:

# Testing for "oraclelinux_release", etc prevents us from dropping
# into the code below from other RH based distros like CentOS, for example.
  if ((defined($release)) &&                     # Redhat or OEL if defined
      (($release =~ /^redhat-release/) ||        # straight RH
       ($release =~ /^enterprise-release/) ||    # Oracle Enterprise Linux
       ($release =~ /^oraclelinux-release/)))    # Oracle Linux
  {
    my ($major, $minor, $patch, $vendor, $variation) = split /\./, $kver;
    my ($micro_number, $patch_level) = split /-/, $patch;    # e.g., 100 and 32
    if ($release =~ /AXS/)                 # Asianux Not supported
    {
      $vers = "ASIANUX";
      $supported = 0;
...

here you can tweak the code so that your os reports itself as supported, and you should be fine …
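
as an example ( untested sketch! ): to let a centos box pass the check, which is exactly what the comment in the source tries to prevent, you could extend the condition like this. keep in mind that the kernel version checks further down in the same file may need similar treatment:

  if ((defined($release)) &&                     # Redhat or OEL if defined
      (($release =~ /^redhat-release/) ||        # straight RH
       ($release =~ /^enterprise-release/) ||    # Oracle Enterprise Linux
       ($release =~ /^oraclelinux-release/) ||   # Oracle Linux
       ($release =~ /^centos-release/)))         # CentOS -- added, NOT supported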

when it comes to the grid infrastructure ( single node or cluster configuration ) oracle needs to store and manage some configuration data about the state of the node(s) and their resources. each node in the configuration has its own container to store and manage this data, which is called the “oracle local registry (olr)”.

by using the ocrcheck utility one can check the current status of the local registry:

ocrcheck -local
Status of Oracle Local Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2472
Available space (kbytes) : 259648
ID : 1322090758
Device/File Name : /opt/oracle/product/crs/11.2.0.3/cdata/localhost/oracleplayground.olr
Device/File integrity check succeeded
Local registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user

besides some other information, this tells us that the local registry is located here:

/opt/oracle/product/crs/11.2.0.3/cdata/localhost/oracleplayground.olr

if you wonder about the “Logical corruption check bypassed due to non-privileged user” message, this is because the ocrcheck command was executed as the grid infrastructure software owner. re-executing the same command as root user will show “Logical corruption check succeeded”. why root? because some parts of the grid infrastructure run with root privileges ( remember the roothas.pl or rootcrs.pl setup script ).

as the local registry is essential for the cluster stack to start up there must be a way to do proper backups and restores of it in case it gets lost or corrupted. oracle creates an initial backup of the olr after the configuration of the grid infrastructure. from that point onwards it is up to you to create and manage the backups. you will find the initial backup of your olr under ORACLE_HOME/cdata/[HOSTNAME]/, in my case:

ls -al /opt/oracle/product/crs/11.2.0.3/cdata/oracleplayground/
total 5644
drwxr-x--- 2 grid oinstall 4096 Mar 23 09:18 .
drwxrwx--- 4 grid oinstall 4096 Mar 23 09:17 ..
-rw------- 1 grid oinstall 5746688 Mar 23 09:18 backup_20120323_091815.olr

first of all one should decide where to store the manual backups of the olr. the default location is fine if you do regular backups of your $ORACLE_HOME ( and it is strongly recommended to do so ). to list the current backups use:

./ocrconfig -local -showbackup
oracleplayground 2012/03/23 09:18:15 /opt/oracle/product/crs/11.2.0.3/cdata/oracleplayground/backup_20120323_091815.olr

if you need to change the location for any reason, you can do so by using the ocrconfig command:

./ocrconfig -local -backuploc [BACKUP_DESTINATION]
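
for example, to keep the backups on a separate filesystem ( the path is made up; the directory must exist ):

./ocrconfig -local -backuploc /oradata/olr_backups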

doing backups of your olr is as easy as:

./ocrconfig -local -manualbackup
oracleplayground 2012/05/03 15:51:38 /opt/oracle/product/crs/11.2.0.3/cdata/oracleplayground/backup_20120503_155138.olr
oracleplayground 2012/03/23 09:18:15 /opt/oracle/product/crs/11.2.0.3/cdata/oracleplayground/backup_20120323_091815.olr

…which will also list the current backups of the olr. of course you need to ensure that these files are backed up to a safe location ( to tape, for example ).

another way for doing backups and restores is to use the “import” and “export” parameters of the ocrconfig command:

./ocrconfig -local -export /tmp/olr.dmp
./ocrconfig -local -import /tmp/olr.dmp

as oracle recommends not to use the “export” and “import” procedures for backups and restores, i wonder what these parameters are for.
perhaps you should do both kinds of backups, to be sure that at least one of them really works :)
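
if you want to automate this, a root crontab entry covering both variants could look like this ( schedule and export destination are just examples; note that % must be escaped in crontabs ):

# weekly manual olr backup plus an export, run from root's crontab
0 2 * * 0 /opt/oracle/product/crs/11.2.0.3/bin/ocrconfig -local -manualbackup
30 2 * * 0 /opt/oracle/product/crs/11.2.0.3/bin/ocrconfig -local -export /oradata/olr_exports/olr_$(date +\%Y\%m\%d).dmp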

in case you need to restore a backup of the local registry, make sure you stop the cluster stack first:

crsctl stop has
ocrconfig -local -restore [OLR_BACKUP_FILE]
ocrcheck -local
crsctl start has

as it should be clear now how to backup and restore the local registry, let's take a look at the contents. oracle provides another command for dumping the contents of the local ( or cluster ) registry:

./ocrdump -local -stdout

this will dump the contents of the local registry to the screen. take a look at the output and you will see why the registry is essential for the stack. if you registered a database and listener with the grid infrastructure you will see something like this:

[SYSTEM.OHASD.RESOURCES.ora!db112!db]
[SYSTEM.OHASD.RESOURCES.ora!db112!db.CONFIG]
ORATEXT : ACL=owner:grid:rwx,pgrp:asmdba:r-x,other::r--,group:oinstall:r-x,user:oracle:rwx~ACTION_FAILURE_TEMPLATE=~ACTION_SCRIPT=~ACTIVE_PLACEMENT=1~AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%~AUTO_START=restore~BASE_TYPE=ora.cluster_resource.type~CARDINALITY=1~CHECK_INTERVAL=1~CHECK_TIMEOUT=30~CLUSTER_DATABASE=false~DATABASE_TYPE=SINGLE~DB_UNIQUE_NAME=DB112~DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%) ELEMENT(DATABASE_TYPE= %DATABASE_TYPE%)~DEGREE=1~DESCRIPTION=Oracle Database resource~ENABLED=1~FAILOVER_DELAY=0~FAILURE_INTERVAL=60~FAILURE_THRESHOLD=1~GEN_AUDIT_FILE_DEST=/oradata/DB112/admin/adump~GEN_START_OPTIONS=open~GEN_USR_ORA_INST_NAME=DB112~HOSTING_MEMBERS=~INSTANCE_FAILOVER=1~LOAD=1~LOGGING_LEVEL=1~MANAGEMENT_POLICY=AUTOMATIC~NAME=ora.db112.db~NLS_LANG=~NOT_RESTARTING_TEMPLATE=~OFFLINE_CHECK_INTERVAL=0~ONLINE_RELOCATION_TIMEOUT=0~ORACLE_HOME=/opt/oracle/product/base/11.2.0.3~ORACLE_HOME_OLD=~PLACEMENT=balanced~PROFILE_CHANGE_TEMPLATE=~RESTART_ATTEMPTS=2~ROLE=PRIMARY~SCRIPT_TIMEOUT=60~SERVER_POOLS=~SPFILE=/opt/oracle/product/base/11.2.0.3/dbs/spfileDB112.ora~START_DEPENDENCIES=weak(type:ora.listener.type,uniform:ora.ons) hard(ora.DB112_DATA_DG.dg,ora.DB112_ARCH_DG.dg) pullup(ora.DB112_DATA_DG.dg,ora.DB112_ARCH_DG.dg)~START_TIMEOUT=600~STATE_CHANGE_TEMPLATE=~STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DB112_DATA_DG.dg,shutdown:ora.DB112_ARCH_DG.dg)~STOP_TIMEOUT=600~TYPE=ora.database.type~TYPE_ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--~TYPE_NAME=ora.database.type~TYPE_VERSION=3.2~UPTIME_THRESHOLD=1h~USR_ORA_DB_NAME=~USR_ORA_DOMAIN=~USR_ORA_ENV=~USR_ORA_FLAGS=~USR_ORA_INST_NAME=DB112~USR_ORA_OPEN_MODE=open~USR_ORA_OPI=false~USR_ORA_STOP_MODE=immediate~VERSION=11.2.0.3.0~

everything the cluster stack needs to know about the registered resource is recorded here, including all the dependencies of the resource as well as various parameters, e.g.:

SPFILE=/opt/oracle/product/base/11.2.0.3/dbs/spfileDB112.ora

… which tells where to find the spfile of the registered database, or:

hard(ora.DB112_DATA_DG.dg,ora.DB112_ARCH_DG.dg)

… which tells that the diskgroups DB112_DATA_DG and DB112_ARCH_DG must be available before the database can start up. this is a hard dependency, so the database will not start if the diskgroups are not available.
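
by the way, you do not need to dump the whole registry to look at a single resource; crsctl can print the profile directly:

./crsctl status resource ora.db112.db -p | egrep 'SPFILE|DEPENDENCIES'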

you can get a more readable format by dumping the contents to an xml file:

./ocrdump -local /tmp/olr.xml -xml

as the backups of the olr have the same format as the current olr, ocrdump works the same way if you want to dump a backup. this can help when searching for changes in the olr:

./ocrdump -local /tmp/olr_backup.xml -backupfile /opt/oracle/product/crs/11.2.0.3/cdata/oracleplayground/backup_20120503_155138.olr -xml

you may have observed that all the commands ( ocrconfig, ocrdump, ocrcheck ) are used to manage the local registry as well as the cluster registry, in case you have a clustered setup ( RAC or RAC one node, for example ). so, no matter if you are on a single node or cluster configuration: make sure your backup and restore strategy includes the oracle local and cluster registry.

Before I start with the database creation, there remain some steps to be performed. Because I will place the files inside ASM there is no need to create separate filesystems for datafiles, redo logs and control files. But I still want a custom location for the audit logs, pfile and diagnostic destination. Let’s prepare this:

su -
mkdir /oradata/DB112
chown oracle:dba /oradata/DB112
su - oracle
mkdir -p /oradata/DB112/admin/pfile
mkdir /oradata/DB112/admin/adump

The easiest, and probably the best, way for most environments is to go with the defaults. Let’s start by creating a minimal pfile…

su - oracle
echo "*.audit_file_dest='/oradata/DB112/admin/adump'
*.control_files='+DB112_DATA_DG','+DB112_ARCH_DG'
*.db_block_size=8192
*.db_name='DB112'
*.instance_name='DB112'
*.sga_max_size=256M
*.sga_target=256M
*.undo_tablespace='undotbs1'
*.diagnostic_dest=/oradata/DB112/admin" > /oradata/DB112/admin/pfile/initDB112.ora

… check that we can initially start the instance…

sqlplus / as sysdba
startup force nomount pfile=/oradata/DB112/admin/pfile/initDB112.ora

… create the server parameter file:

create spfile='/oradata/DB112/admin/pfile/spfileDB112.ora' from pfile='/oradata/DB112/admin/pfile/initDB112.ora';

… create a link to the server parameter file ( for oracle to find the file )

!ln -s /oradata/DB112/admin/pfile/spfileDB112.ora $ORACLE_HOME/dbs/spfileDB112.ora

… restart without specifying a pfile to see if the spfile gets used:

startup force nomount
show parameter pfile;

… and create the database

create database DB112
logfile
group 1 ('+DB112_DATA_DG','+DB112_ARCH_DG') size 16M reuse,
group 2 ('+DB112_DATA_DG','+DB112_ARCH_DG') size 16M reuse,
group 3 ('+DB112_DATA_DG','+DB112_ARCH_DG') size 16M reuse
character set utf8
national character set AL16UTF16
datafile '+DB112_DATA_DG' size 512M reuse autoextend on next 8M maxsize 1024M extent management local
default temporary tablespace TEMP tempfile '+DB112_DATA_DG' size 64M reuse autoextend on next 8M maxsize 1024M
sysaux datafile '+DB112_DATA_DG' size 512M reuse autoextend on next 8M maxsize 1024M
default tablespace USERS datafile '+DB112_DATA_DG' size 4m reuse autoextend on next 8m maxsize 16M
undo tablespace UNDOTBS1 datafile '+DB112_DATA_DG' size 64M reuse autoextend on next 8M maxsize 1024M;

… create the dictionary:

@?/rdbms/admin/catalog.sql

… and the plsql stuff:

@?/rdbms/admin/catproc.sql

… create the help tables for sqlplus and populate it:

@?/sqlplus/admin/help/hlpbld.sql helpus.sql
shutdown immediate;
exit;

Done. Database created. What else? Right, register the database with the Cluster Stack and create a listener:

srvctl add database -d DB112 -o $ORACLE_HOME -p $ORACLE_HOME/dbs/spfileDB112.ora
srvctl add listener -l listener_DB112 -p 1521 -o $ORACLE_HOME
touch /opt/oracle/product/base/11.2.0.3/network/admin/listener.ora
chmod 770 /opt/oracle/product/base/11.2.0.3/
chmod 770 /opt/oracle/product/base/11.2.0.3/dbs
chmod 770 /opt/oracle/product/base/11.2.0.3/network/
chmod 770 /opt/oracle/product/base/11.2.0.3/network/admin
chmod 770 /opt/oracle/product/base/11.2.0.3/network/admin/listener.ora
srvctl start listener -l listener_DB112
srvctl start database -d DB112

Please notice: The chmods are dirty tricks to avoid CRS-5010 issues. The reason for the CRS-5010 messages is that the cluster stack runs under a different user ( grid in this case ) and is not able to update the files. Maybe there is a fix in the future.
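
You can verify which users the agent processes actually run under by looking at the process list:

ps -ef | grep -i oraagent | grep -v grep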

Let’s check the resources and do a reboot to check if everything comes up again:

su - grid
crsctl status resource -t
su -
reboot
su - grid
crsctl status resource -t

You should see that the resources are up and running, something like:

+ASM@oracleplayground.fun grid:/home/grid $ crsctl status resource -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DB112_ARCH_DG.dg
ONLINE ONLINE oracleplayground
ora.DB112_DATA_DG.dg
ONLINE ONLINE oracleplayground
ora.LISTENER_ASM.lsnr
ONLINE ONLINE oracleplayground
ora.LISTENER_DB112.lsnr
ONLINE ONLINE oracleplayground
ora.asm
ONLINE ONLINE oracleplayground Started
ora.ons
OFFLINE OFFLINE oracleplayground
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
1 ONLINE ONLINE oracleplayground
ora.db112.db
1 ONLINE ONLINE oracleplayground Open
ora.diskmon
1 OFFLINE OFFLINE
ora.evmd
1 ONLINE ONLINE oracleplayground

For your convenience, here is the complete database creation script:

echo "*.audit_file_dest='/oradata/DB112/admin/adump'
*.control_files='+DB112_DATA_DG','+DB112_ARCH_DG'
*.db_block_size=8192
*.db_name='DB112'
*.instance_name='DB112'
*.sga_max_size=256M
*.sga_target=256M
*.undo_tablespace='undotbs1'
*.diagnostic_dest=/oradata/DB112/admin" > /oradata/DB112/admin/pfile/initDB112.ora
sqlplus / as sysdba
startup force nomount pfile=/oradata/DB112/admin/pfile/initDB112.ora
create spfile='/oradata/DB112/admin/pfile/spfileDB112.ora' from pfile='/oradata/DB112/admin/pfile/initDB112.ora';
!ln -s /oradata/DB112/admin/pfile/spfileDB112.ora $ORACLE_HOME/dbs/spfileDB112.ora
startup force nomount
show parameter pfile;
create database DB112
logfile
group 1 ('+DB112_DATA_DG','+DB112_ARCH_DG') size 16M reuse,
group 2 ('+DB112_DATA_DG','+DB112_ARCH_DG') size 16M reuse,
group 3 ('+DB112_DATA_DG','+DB112_ARCH_DG') size 16M reuse
character set utf8
national character set AL16UTF16
datafile '+DB112_DATA_DG' size 512M reuse autoextend on next 8M maxsize 1024M extent management local
default temporary tablespace TEMP tempfile '+DB112_DATA_DG' size 64M reuse autoextend on next 8M maxsize 1024M
sysaux datafile '+DB112_DATA_DG' size 512M reuse autoextend on next 8M maxsize 1024M
default tablespace USERS datafile '+DB112_DATA_DG' size 4m reuse autoextend on next 8m maxsize 16M
undo tablespace UNDOTBS1 datafile '+DB112_DATA_DG' size 64M reuse autoextend on next 8M maxsize 1024M;
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
@?/sqlplus/admin/help/hlpbld.sql helpus.sql
shutdown immediate;
exit;
srvctl add database -d DB112 -o $ORACLE_HOME -p $ORACLE_HOME/dbs/spfileDB112.ora
srvctl add listener -l listener_DB112 -p 1521 -o $ORACLE_HOME
touch /opt/oracle/product/base/11.2.0.3/network/admin/listener.ora
chmod 770 /opt/oracle/product/base/11.2.0.3/
chmod 770 /opt/oracle/product/base/11.2.0.3/dbs
chmod 770 /opt/oracle/product/base/11.2.0.3/network/
chmod 770 /opt/oracle/product/base/11.2.0.3/network/admin
chmod 770 /opt/oracle/product/base/11.2.0.3/network/admin/listener.ora
srvctl start listener -l listener_DB112
srvctl start database -d DB112

OS ready, Grid Infrastructure ready, patched to the latest release. Time to start with the “real” things…

Thanks to virtualization technology ( VirtualBox in this case ), it is easy to add storage to your machine. If it is running, shut down your Virtual Machine and add two disks to it.

From your workstation, execute:

vboxmanage storagectl oracleplayground --name "SCSI Controller" --add scsi --controller LSILogic
vboxmanage createhd --filename ~/VirtualBox\ VMs/oracleplayground/oracleplayground_asm1.vdi --size=10240 --variant Fixed
vboxmanage createhd --filename ~/VirtualBox\ VMs/oracleplayground/oracleplayground_asm2.vdi --size=10240 --variant Fixed
vboxmanage storageattach oracleplayground --storagectl "SCSI Controller" --port 1 --device 0 --type hdd --medium ~/VirtualBox\ VMs/oracleplayground/oracleplayground_asm1.vdi
vboxmanage storageattach oracleplayground --storagectl "SCSI Controller" --port 2 --device 0 --type hdd --medium ~/VirtualBox\ VMs/oracleplayground/oracleplayground_asm2.vdi

What happened: We created a new SCSI controller for our Virtual Machine, created two 10 GB hard disks and attached them to the controller.
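
If you want to verify the configuration before booting the machine, vboxmanage can list the controller and the attached disks:

vboxmanage showvminfo oracleplayground | grep -i scsi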

Start up your Virtual Machine and check the devices:

su -
ls -la /dev/sd*
brw-r----- 1 root disk 8, 0 Mar 23 13:02 /dev/sda
brw-r----- 1 root disk 8, 16 Mar 23 13:02 /dev/sdb

These are our newly created disks. What we need for ASM is one partition on each disk, so let’s create the partitions:

su -
fdisk /dev/sda
fdisk /dev/sdb

The fdisk commands for creating one primary partition per disk are ( in this sequence ): p ( print ), n ( new ), p ( primary ), 1 ( partition number ), Enter, Enter ( accept the defaults ), w ( write and quit ).
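
If you do not want to type the sequence interactively, you can also feed the keystrokes to fdisk from a pipe ( a quick hack, not exactly elegant; the empty lines accept the default first and last cylinders ):

printf "n\np\n1\n\n\nw\n" | fdisk /dev/sda
printf "n\np\n1\n\n\nw\n" | fdisk /dev/sdb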

If we check the devices again, we can now see the newly created partitions:

ls -la /dev/sd*
brw-r----- 1 root disk 8, 0 Mar 23 13:06 /dev/sda
brw-r----- 1 root disk 8, 1 Mar 23 13:06 /dev/sda1
brw-r----- 1 root disk 8, 16 Mar 23 13:09 /dev/sdb
brw-r----- 1 root disk 8, 17 Mar 23 13:09 /dev/sdb1

To be usable by ASM, the disks must belong to the grid user. We could change the ownership now with chown, but because we want this to be persistent across reboots we create a udev rule for it:

su -
echo "KERNEL==\"sd[a-b][1-9]*\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" > /etc/udev/rules.d/61-asm.rules
reboot
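
Depending on your udev version you may be able to skip the reboot and just reload and re-trigger the rules ( udevadm syntax as on EL6-style systems; older releases differ ):

udevadm control --reload-rules
udevadm trigger --subsystem-match=block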

Once the Virtual Machine has rebooted, check the devices again and you will see that the owner, group and permissions changed:

su -
ls -la /dev/sd*
brw-r----- 1 root disk 8, 0 Mar 23 13:17 /dev/sda
brw-rw---- 1 grid asmadmin 8, 1 Mar 23 13:17 /dev/sda1
brw-r----- 1 root disk 8, 16 Mar 23 13:17 /dev/sdb
brw-rw---- 1 grid asmadmin 8, 17 Mar 23 13:17 /dev/sdb1

Now we are prepared for creating our ASM instance. Because we are lazy, let’s define two variables, one for the init.ora and one for the spfile:

su - grid
INITORA=$ORACLE_HOME/dbs/init+ASM.ora
SPFILE=$ORACLE_HOME/dbs/spfile+ASM.ora

Create a minimal set of parameters:

echo "instance_type='asm'
memory_target=128m
memory_max_target=128m
" > ${INITORA}

… and start the cssd resource ( which is the cluster synchronization service, required by ASM ):

crsctl start resource ora.cssd

You should see the following output:

CRS-2672: Attempting to start 'ora.cssd' on 'oracleplayground'
CRS-2672: Attempting to start 'ora.diskmon' on 'oracleplayground'
CRS-2676: Start of 'ora.diskmon' on 'oracleplayground' succeeded
CRS-2676: Start of 'ora.cssd' on 'oracleplayground' succeeded

Let’s check whether we can start up and shut down our ASM instance:

sqlplus /nolog <<EOF
connect / as sysasm
startup force nomount pfile=${INITORA};
shutdown abort;
exit;
EOF

The output should look like:

SQL*Plus: Release 11.2.0.3.0 Production on Fri Mar 23 13:28:29 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
SQL> Connected to an idle instance.
SQL> ASM instance started
Total System Global Area 267227136 bytes
Fixed Size 2227504 bytes
Variable Size 239833808 bytes
ASM Cache 25165824 bytes
SQL> ASM instance shutdown
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option

Ok, basically we are able to start up our ASM instance. As we want our ASM instance managed by the cluster stack, we need to create and register some resources.

Start by adding the listener for the ASM instance:

srvctl add listener -l listener_ASM -o $ORACLE_HOME -p 1526

… continue with adding ASM itself:

srvctl add asm -l listener_ASM -p ${INITORA}

… startup listener and ASM:

srvctl start listener -l listener_ASM
srvctl start asm

… and check the resources ( we will use the alias again; you can also type “crsctl status resource -t” ):

crsstat

If you followed the steps, this is what you will see:

--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER_ASM.lsnr
ONLINE ONLINE oracleplayground
ora.asm
ONLINE ONLINE oracleplayground Started
ora.ons
OFFLINE OFFLINE oracleplayground
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
1 ONLINE ONLINE oracleplayground
ora.diskmon
1 OFFLINE OFFLINE
ora.evmd
1 ONLINE ONLINE oracleplayground

Two new resources are showing up, the listener and the ASM instance.

Let’s create an spfile for the ASM instance:

srvctl stop asm
sqlplus /nolog <<EOF
connect / as sysasm
create spfile='${SPFILE}' from pfile='${INITORA}';
startup force;
show parameter pfile
shutdown abort;
exit;
EOF

You can ignore the “ORA-15110: no diskgroups mounted” message in the output. We will create diskgroups shortly.

As we registered ASM with a pfile in the cluster stack, we need to modify this:

srvctl modify asm -p ${SPFILE}
srvctl start asm
srvctl config asm
crsctl status resource -t
rm -f ${INITORA}

Fine, ASM is now managed by the cluster stack. We no longer need to worry about shutting down or starting up the instance when the server reboots, and we do not even need to create startup or shutdown scripts for the ASM instance.

ASM without any disks and diskgroups does not make much sense. Let’s create some:

sqlplus /nolog <<EOF
connect / as sysasm
alter system set asm_diskstring='/dev/sd*' scope=both;
create diskgroup DB112_DATA_DG external redundancy
disk '/dev/sda1' name DB112_DATA_DG_VOL01
attribute
'au_size' = '4M'
, 'compatible.asm' = '11.2'
, 'compatible.rdbms' = '11.2'
;
create diskgroup DB112_ARCH_DG external redundancy
disk '/dev/sdb1' name DB112_ARCH_DG_VOL01
attribute
'au_size' = '4M'
, 'compatible.asm' = '11.2'
, 'compatible.rdbms' = '11.2'
;
exit;
EOF
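
If you want to cross-check from within the instance, v$asm_diskgroup should now show both diskgroups as mounted:

sqlplus / as sysasm
select name, state, total_mb, free_mb from v$asm_diskgroup;
exit;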

Work done. You can check the cluster stack for the status of the resources:

crsstat

Notice the diskgroups showing up:

--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DB112_ARCH_DG.dg
ONLINE ONLINE oracleplayground
ora.DB112_DATA_DG.dg
ONLINE ONLINE oracleplayground
ora.LISTENER_ASM.lsnr
ONLINE ONLINE oracleplayground
ora.asm
ONLINE ONLINE oracleplayground Started
ora.ons
OFFLINE OFFLINE oracleplayground
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
1 ONLINE ONLINE oracleplayground
ora.diskmon
1 OFFLINE OFFLINE
ora.evmd
1 ONLINE ONLINE oracleplayground

Although it is optional, let’s create some directories for holding the audit logs and diagnostic files. I personally prefer to have them outside the ORACLE_HOME:

su -
mkdir -p /oradata/+ASM
chown grid:oinstall /oradata/+ASM
su - grid
mkdir /oradata/+ASM/admin
mkdir /oradata/+ASM/admin/adump
sqlplus / as sysasm
alter system set audit_file_dest='/oradata/+ASM/admin/adump' scope=spfile;
alter system set diagnostic_dest='/oradata/+ASM/admin' scope=spfile;
exit;
srvctl stop asm -f
srvctl start asm
sqlplus / as sysasm
show parameter audit_file_dest
show parameter diagnostic_dest
exit
echo "ADR_BASE_LISTENER_ASM=/oradata/+ASM/admin" >> /opt/oracle/product/crs/11.2.0.3/network/admin/listener.ora
srvctl stop listener -l listener_asm
srvctl start listener -l listener_asm

What happened: We told the listener and the ASM instance to place their traces and logs at the locations we want.

Finally, let’s create an alias for the sqlplus connections:

echo "alias sqlpa='sqlplus '\''/ as sysasm'\'''" >> ~/.bash_profile

If you re-login as grid user, you can now connect the easy way:

sqlpa

As usual ( on your workstation ), if you like, create a snapshot to save your work:

vboxmanage snapshot oracleplayground take "asm running"

For your convenience, here’s the complete script:

#!/bin/bash
if [ $(id -un) != "grid" ]; then
  echo "you must run this as the grid user"
  exit 1
fi
INITORA=$ORACLE_HOME/dbs/init+ASM.ora
SPFILE=$ORACLE_HOME/dbs/spfile+ASM.ora
echo "instance_type='asm'
memory_target=128m
memory_max_target=128m
" > ${INITORA}
crsctl start resource ora.cssd
sqlplus /nolog <<EOF
connect / as sysasm
startup force nomount pfile=${INITORA};
shutdown abort;
exit;
EOF
srvctl add listener -l listener_ASM -o $ORACLE_HOME -p 1526
srvctl add asm -l listener_ASM -p ${INITORA}
srvctl start listener -l listener_ASM
srvctl start asm
srvctl stop asm -f
crsctl status resource -t
sqlplus /nolog <<EOF
connect / as sysasm
create spfile='${SPFILE}' from pfile='${INITORA}';
startup force;
show parameter pfile
shutdown abort;
exit;
EOF
srvctl modify asm -p ${SPFILE}
srvctl start asm
srvctl config asm
crsctl status resource -t
rm -f ${INITORA}
sqlplus /nolog <<EOF
connect / as sysasm
alter system set asm_diskstring='/dev/sd*' scope=both;
create diskgroup DB112_DATA_DG external redundancy
disk '/dev/sda1' name DB112_DATA_DG_VOL01
attribute
'au_size' = '4M'
, 'compatible.asm' = '11.2'
, 'compatible.rdbms' = '11.2'
;
create diskgroup DB112_ARCH_DG external redundancy
disk '/dev/sdb1' name DB112_ARCH_DG_VOL01
attribute
'au_size' = '4M'
, 'compatible.asm' = '11.2'
, 'compatible.rdbms' = '11.2'
;
exit;
EOF