Let's continue this series of posts:
Create your own Oracle Staging Environment, Part 1: The Staging Server
Create your own Oracle Staging Environment, Part 2: Kickstart File(s) and Profiles
Create your own Oracle Staging Environment, Part 3: Setup the lab environment
In Part 3 we had two (or more) VMs up and running with the following specs:
- eth0 configured as the public network
- eth1 configured as the private interconnect
- three shared storage devices for the ASM disks
- oracle users and groups
- a sample source file deployed on all VMs
This might be sufficient for starting a lab if you want to demonstrate the installation of a Grid Infrastructure, but we can go even further and automate the setup of the GI itself. What else would be required? (A sketch of some of the OS settings follows the list below.)
- the source files for the Grid Infrastructure (stored in /var/www/html/sources), in this case the 11.2.0.4 files downloaded from MOS
- kernel parameters
- partitions on the disks
- limits for the oracle users
- setting the io scheduler
- the cluster configuration file
- the GI software installation
- password-less ssh connectivity for the grid user
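To give an idea of what the kickstart %post section and the finish script have to take care of, here is a minimal sketch of the plain OS preparation (kernel parameters, limits, I/O scheduler). The values are the usual minimums from the 11.2 installation guide and the device names are just examples; the real settings are in the scripts linked below, so treat this as an illustration only:

# kernel parameters (typical 11.2 minimums; kernel.shmmax/kernel.shmall depend
# on the RAM of the VM and are therefore not hard-coded here)
cat >> /etc/sysctl.conf <<EOF
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
EOF
sysctl -p

# limits for the grid user (repeat for the oracle user)
cat >> /etc/security/limits.conf <<EOF
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc  2047
grid hard nproc  16384
grid soft stack  10240
EOF

# io scheduler for the shared ASM disks (sdb/sdc/sdd here; adjust to how the
# shared devices show up in your VMs)
for DISK in sdb sdc sdd; do
  echo deadline > /sys/block/${DISK}/queue/scheduler
done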
Here are the updated scripts (as PDFs, to reduce the length of this post):
- create_gi_rac_lab.sh
- finish_gi_setup.sh (this needs to be placed in the /var/www/html directory of the cobbler server; see the sketch after this list)
- oracledatabaseserver.ks
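finish_gi_setup.sh has to live in the web root so that the nodes can pull it over HTTP while they are being kickstarted. A sketch of what such a call could look like in the %post section; the server name, the path and the exact point of invocation are assumptions, the real call is in oracledatabaseserver.ks:

%post
# hypothetical: pull the finish script from the cobbler web root and execute it
wget -q -O /tmp/finish_gi_setup.sh http://cobbler-server/finish_gi_setup.sh
sh /tmp/finish_gi_setup.sh
%end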
Place these on the cobbler server in the same locations as in the previous posts, then:
./create_gi_rac_lab.sh mylab 2
Fetch the VM setup scripts:
ls -la /var/www/html/mylab/setup*.sh
-rw-r--r--. 1 root root 3821 Jan 11 09:32 /var/www/html/mylab/setup_p_1.sh
-rw-r--r--. 1 root root 3821 Jan 11 09:32 /var/www/html/mylab/setup_p_2.sh
… and fire up the VMs on your workstation.
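A minimal sketch of that step, assuming you pull the scripts over HTTP from the cobbler web root (the hostname cobbler-server is a placeholder, scp works just as well):

# on the workstation: fetch and start the first node
wget http://cobbler-server/mylab/setup_p_1.sh
sh setup_p_1.sh
# give it a few minutes (see below), then the next one
wget http://cobbler-server/mylab/setup_p_2.sh
sh setup_p_2.sh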
Give each VM a few minutes before starting the next one: the last VM does the configuration of the GI, so the other VMs must already be up and running when that configuration starts. Once the last one has completed, you have the GI up and running on all the nodes:
[root@mylabp1vm1 ~]# /opt/oracle/product/crs/11.2.0.4/bin/crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS_DG.dg
               ONLINE  ONLINE       mylabp1vm1
               ONLINE  ONLINE       mylabp1vm2
ora.asm
               ONLINE  ONLINE       mylabp1vm1               Started
               ONLINE  ONLINE       mylabp1vm2               Started
ora.gsd
               OFFLINE OFFLINE      mylabp1vm1
               OFFLINE OFFLINE      mylabp1vm2
ora.net1.network
               ONLINE  ONLINE       mylabp1vm1
               ONLINE  ONLINE       mylabp1vm2
ora.ons
               ONLINE  ONLINE       mylabp1vm1
               ONLINE  ONLINE       mylabp1vm2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       mylabp1vm2
ora.cvu
      1        ONLINE  ONLINE       mylabp1vm1
ora.mylabp1vm1.vip
      1        ONLINE  ONLINE       mylabp1vm1
ora.mylabp1vm2.vip
      1        ONLINE  ONLINE       mylabp1vm2
ora.oc4j
      1        ONLINE  ONLINE       mylabp1vm1
ora.scan1.vip
      1        ONLINE  ONLINE       mylabp1vm2
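If you only want a quick check of the clusterware stack instead of the full resource listing, crsctl can do that as well (not part of the scripts, just a hint):

/opt/oracle/product/crs/11.2.0.4/bin/crsctl check cluster -all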
There is still much room for improvement, but for now it at least works … it takes around 20 minutes on my workstation until both nodes are up and configured.