So, let's take all these little pieces and put them together to create a complete lab environment.
The use case: you want to train people on a clustered Grid Infrastructure and ASM setup, including the software installation and the initial configuration. For this, each participant needs at least two interconnected lab machines, the Oracle software, the Oracle users and so on.
The test VM we created in the last post already has the Oracle users and groups in place as well as the required operating system packages. Additionally, we prepared the download of source files from the cobbler server. A perfect starting point.
What we need now is a way to create two or more VMs per participant and an easy deployment method. Here we go…
First we need to slightly adjust the kickstart file so that the shared storage devices will not get partitioned automatically during the setup (/var/lib/cobbler/kickstarts/oracledatabaseserver.ks):
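The relevant change is small. A sketch of what the partitioning section could look like (assuming the shared disks show up as sdb, sdc and sdd; the device names are an assumption, check your own setup):

```text
# restrict the installer to the first disk so the shared
# devices (sdb, sdc, sdd) are left untouched
ignoredisk --only-use=sda
clearpart --drives=sda --initlabel
```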
On the cobbler server in /var/www/html create this script (create_lab.sh):
Update 2015-JAN-05: attached the script as a pdf since some parts of the script are not displayed correctly: create_lab.pdf
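The full script is in the pdf; as a rough idea, here is a stripped-down sketch of its core loop. The naming scheme matches the output shown below, but the cobbler calls and the generated setup script are simplified assumptions — the real script does more:

```shell
#!/bin/bash
# usage: ./create_lab.sh LAB_NAME PARTICIPANTS [VMS_PER_PARTICIPANT]
LAB_NAME="$1"
PARTICIPANTS="$2"
VMS_PER_PARTICIPANT="${3:-2}"          # default: 2 VMs per participant

# build the VM name for a given lab/participant/vm combination
vm_name() {
  echo "${1}_P_${2}_VM_${3}"
}

OUTPUT_DIR="/var/www/html/${LAB_NAME}"

# only do the real work when cobbler is actually available
if command -v cobbler >/dev/null 2>&1; then
  mkdir -p "${OUTPUT_DIR}"
  for p in $(seq 1 "${PARTICIPANTS}"); do
    echo "Creating ISOs and VM Setup Script for Participant ${p}"
    for v in $(seq 1 "${VMS_PER_PARTICIPANT}"); do
      NAME=$(vm_name "${LAB_NAME}" "${p}" "${v}")
      echo "creating VM ${v}, name ${NAME}"
      # register the system and build one boot iso per VM (simplified)
      cobbler system add --name="${NAME}" --profile=OracleDatabaseServer
      cobbler buildiso --iso="${OUTPUT_DIR}/${NAME}.iso" --systems="${NAME}"
    done
    # one setup script per participant (content omitted in this sketch)
    touch "${OUTPUT_DIR}/setup_p_${p}.sh"
  done
fi
```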
This is a very basic script: no error handling, no deep checks, no documentation, but sufficient to demonstrate the idea. When you call the script, pass at least two parameters:
the name of the lab you want to create
the number of participants you expect for the lab
optionally: the number of VMs that should be created per participant (the default is 2)
Execute it:
./create_lab.sh MY_LAB 2
Creating ISOs and VM Setup Script for Participant 1
creating VM 1, name MY_LAB_P_1_VM_1
creating VM 2, name MY_LAB_P_1_VM_2
Creating ISOs and VM Setup Script for Participant 2
creating VM 1, name MY_LAB_P_2_VM_1
creating VM 2, name MY_LAB_P_2_VM_2
MY_LAB_P_1_VM_1
MY_LAB_P_1_VM_2
MY_LAB_P_2_VM_1
MY_LAB_P_2_VM_2
What has happened? Basically this:
ls -la /var/www/html/MY_LAB/
total 158672
drwxr-xr-x. 2 root root 4096 Dec 16 14:13 .
drwxr-xr-x. 7 root root 4096 Dec 16 14:13 ..
-rw-r--r--. 1 root root 40613888 Dec 16 14:13 MY_LAB_P_1_VM_1.iso
-rw-r--r--. 1 root root 40613888 Dec 16 14:13 MY_LAB_P_1_VM_2.iso
-rw-r--r--. 1 root root 40613888 Dec 16 14:13 MY_LAB_P_2_VM_1.iso
-rw-r--r--. 1 root root 40613888 Dec 16 14:13 MY_LAB_P_2_VM_2.iso
-rw-r--r--. 1 root root 3883 Dec 16 14:13 setup_p_1.sh
-rw-r--r--. 1 root root 3883 Dec 16 14:13 setup_p_2.sh
You get two boot ISOs per participant and one setup script per participant, which will set up the two VirtualBox VMs on a workstation. Because everything is placed in a directory which is accessible via http, you can just hand out the link at the beginning of the lab and ask each participant to pick up one of the setup scripts (http://192.168.56.101/MY_LAB in this setup):
Ask them to execute the script on their workstations and the VMs will get set up. Once they are started, everything gets installed automatically (choose the VM name in the initial bootloader and eth0 as the network interface to be configured).
For a short demo I took the 2 VMs generated for participant 1, fired them up and waited for the installation to complete:
Both have the network interfaces configured:
Both have the shared storage devices available (sdb,sdc,sdd):
Both have the oracle users and groups:
And both have opatch available as an example source:
Conclusion: Although you'll need some time to set all this up, once you have it in place you are extremely flexible. If there is a requirement for a different setup, just create a new cobbler profile, maybe a new kickstart file, and you're basically done. The create_lab.sh script needs much more attention, especially if you additionally want to create the VM setup scripts for Windows, but this should not be a big deal as the vboxmanage command works the same way there.
Another goody: if some of the participants have plenty of memory and disk on their workstations, just generate three or more boot ISOs per participant. Those with fewer resources can simply comment out or delete the lines for the third and fourth VMs, while the others can go with the full set of VMs.
Let’s review our test system on the cobbler server:
cobbler system report list
Name : test
TFTP Boot Files : {}
Comment :
Enable gPXE? : 0
Fetchable Files : {}
Gateway : 192.168.56.1
Hostname : test.lab.ch
Image :
IPv6 Autoconfiguration : False
IPv6 Default Device :
Kernel Options : {}
Kernel Options (Post Install) : {}
Kickstart : <>
Kickstart Metadata : {}
LDAP Enabled : False
LDAP Management Type : authconfig
Management Classes : <>
Management Parameters : <>
Monit Enabled : False
Name Servers : []
Name Servers Search Path : []
Netboot Enabled : True
Owners : ['admin']
Power Management Address :
Power Management ID :
Power Management Password :
Power Management Type : ipmitool
Power Management Username :
Profile : oel66-x86_64
Proxy : <>
Red Hat Management Key : <>
Red Hat Management Server : <>
Repos Enabled : False
Server Override : <>
Status : production
Template Files : {}
Virt Auto Boot : <>
Virt CPUs : <>
Virt Disk Driver Type : <>
Virt File Size(GB) : <>
Virt Path : <>
Virt PXE Boot : 0
Virt RAM (MB) : <>
Virt Type : <>
Interface ===== : eth0
Bonding Opts :
Bridge Opts :
CNAMES : []
DHCP Tag :
DNS Name : test.lab.ch
Per-Interface Gateway :
Master Interface :
Interface Type :
IP Address : 192.168.56.201
IPv6 Address :
IPv6 Default Gateway :
IPv6 MTU :
IPv6 Prefix :
IPv6 Secondaries : []
IPv6 Static Routes : []
MAC Address :
Management Interface : False
MTU :
Subnet Mask : 255.255.255.0
Static : True
Static Routes : []
Virt Bridge :
What we want to modify now is the kickstart parameter:
Kickstart : <>
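Assigning a kickstart file directly to the system would look like this (shown for completeness; this is generic cobbler syntax, and in this post we will attach the kickstart to a profile instead):

```shell
cobbler system edit --name=test --kickstart=/var/lib/cobbler/kickstarts/oracledatabaseserver.ks
```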
As profiles are the core unit for provisioning, and kickstart files can be associated with profiles, let's check our current profiles:
cobbler profile list
oel66-x86_64
This one got created when we imported our distribution. As this name does not tell much about the purpose of the profile, let's create a new profile with a more descriptive name:
cobbler profile add --name=OracleDatabaseServer --distro=oel66-x86_64
cobbler profile list
OracleDatabaseServer
oel66-x86_64
Let’s see what the newly created profile looks like:
cobbler profile report --name=OracleDatabaseServer
Name : OracleDatabaseServer
TFTP Boot Files : {}
Comment :
DHCP Tag : default
Distribution : oel66-x86_64
Enable gPXE? : 0
Enable PXE Menu? : 1
Fetchable Files : {}
Kernel Options : {}
Kernel Options (Post Install) : {}
Kickstart : /var/lib/cobbler/kickstarts/default.ks
Kickstart Metadata : {}
Management Classes : []
Management Parameters : <>
Name Servers : []
Name Servers Search Path : []
Owners : ['admin']
Parent Profile :
Proxy :
Red Hat Management Key : <>
Red Hat Management Server : <>
Repos : []
Server Override : <>
Template Files : {}
Virt Auto Boot : 1
Virt Bridge : xenbr0
Virt CPUs : 1
Virt Disk Driver Type : raw
Virt File Size(GB) : 5
Virt Path :
Virt RAM (MB) : 512
Virt Type : xenpv
As we did not specify anything for the kickstart file when we created the profile, the default was applied:
Time to create our first kickstart file and assign it to our profile. The kickstart file below is pretty much default, except:
the oracle recommended OS groups are created
the oracle recommended OS users are created
the oracle required OS packages are installed
We could do much more here, e.g. create a custom filesystem layout, set the limits for the oracle users and so on. For the scope of this post, the below shall be sufficient:
# save to /var/lib/cobbler/kickstarts/oracledatabaseserver.ks
auth --useshadow --enablemd5
# System bootloader configuration
bootloader --location=mbr --driveorder=sda --append="audit=1"
# Partition clearing information
clearpart --all --initlabel
# Use text mode install
text
# Firewall configuration
firewall --disabled
# Run the Setup Agent on first boot
firstboot --disable
# System keyboard
keyboard sg-latin1
# System language
lang en_US.UTF-8
# Use network installation
url --url=$tree
# If any cobbler repo definitions were referenced in the kickstart profile, include them here.
$yum_repo_stanza
# Network information
$SNIPPET('network_config')
# Reboot after installation
reboot
#Root password
rootpw --iscrypted $default_password_crypted
# SELinux configuration
selinux --disabled
# Do not configure the X Window System
skipx
# System timezone
timezone Europe/Zurich
# Install OS instead of upgrade
install
# Clear the Master Boot Record
zerombr
# Allow anaconda to partition the system as needed
autopart
%pre
$SNIPPET('log_ks_pre')
$SNIPPET('kickstart_start')
$SNIPPET('pre_install_network_config')
# Enable installation monitoring
$SNIPPET('pre_anamon')
%end
%packages --excludedocs --nobase
## $SNIPPET('func_install_if_enabled')
kernel
yum
openssh-server
openssh-clients
audit
logrotate
tmpwatch
vixie-cron
crontabs
ksh
ntp
perl
bind-utils
sudo
which
sendmail
wget
redhat-lsb
rsync
authconfig
lsof
unzip
logwatch
libacl
nfs-utils
dhclient
# oracle prereqs begin
gcc
binutils
compat-libcap1
compat-libstdc++-33
glibc
glibc-devel
libX11
libXau
libXext
libXi
libXtst
libaio
libaio-devel
libstdc++
libstdc++-devel
libxcb
make
sysstat
# oracle prereqs end
-firstboot
-tftp-server
-system-config-soundcard
-squashfs-tools
-device-mapper-multipath
-aspell-en
-aspell
-rdate
-dhcpv6-client
-NetworkManager
-rsh
-sysreport
-irda-utils
-rdist
-anacron
-bluez-utils
-talk
-system-config-lvm
-wireless-tools
-setroubleshoot-server
-setroubleshoot-plugins
-setroubleshoot
-ppp
-GConf2
-dhcpv6-client
-iptables-ipv6
-libselinux-python
-setools
-selinux-policy
-libselinux-utils
-chkfontpath
-urw-fonts
-xorg-x11-xfs
-policycoreutils
-selinux-policy-targeted
-ypbind
-yp-tools
-smartmontools
-pcsc-lite
-trousers
-oddjob
-yum-updatesd
-readahead
-pcsc-lite
-gpm
-at
-cpuspeed
-conman
-system-config-securitylevel-tui
-tcsh
-firstboot-tui
-ppp
-rp-pppoe
-system-config-network-tui
-syslinux
-pcsc-lite-libs
-pcmciautils
-pam_smb
-mkbootdisk
-jwhois
-ipsec-tools
-ed
-crash
-Deployment_Guide-en-US
-hal
-pm-utils
-dbus
-dbus-glib
%end
%post --nochroot
$SNIPPET('log_ks_post_nochroot')
%end
%post
$SNIPPET('log_ks_post')
# Start yum configuration
$yum_config_stanza
# End yum configuration
/usr/sbin/groupadd -g 54321 oinstall
/usr/sbin/groupadd -g 54322 dba
/usr/sbin/groupadd -g 54323 oper
/usr/sbin/groupadd -g 54324 backupdba
/usr/sbin/groupadd -g 54325 asmdba
/usr/sbin/groupadd -g 54326 dgdba
/usr/sbin/groupadd -g 54327 kmdba
/usr/sbin/groupadd -g 54328 asmadmin
/usr/sbin/groupadd -g 54329 asmoper
/usr/sbin/useradd -u 54322 -g oinstall -G asmadmin,asmdba grid
/usr/sbin/useradd -u 54323 -g oinstall -G dba,asmdba oracle
/usr/sbin/usermod -p "oracle" oracle
/usr/sbin/usermod -p "oracle" grid
$SNIPPET('post_install_kernel_options')
$SNIPPET('post_install_network_config')
$SNIPPET('func_register_if_enabled')
$SNIPPET('download_config_files')
$SNIPPET('koan_environment')
$SNIPPET('redhat_register')
$SNIPPET('cobbler_register')
# Enable post-install boot notification
$SNIPPET('post_anamon')
# Start final steps
$SNIPPET('kickstart_done')
# End final steps
%end
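Once the file is saved, it still has to be attached to the profile, and our test system pointed at the new profile. Something along these lines should do it (this is the generic cobbler syntax, not copied from the original post):

```shell
cobbler profile edit --name=OracleDatabaseServer --kickstart=/var/lib/cobbler/kickstarts/oracledatabaseserver.ks
cobbler system edit --name=test --profile=OracleDatabaseServer
cobbler sync
```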
Fire up the vm on your workstation as in the previous post:
################## CONFIGURATION SECTION START ##############################
## The name for the VM in VirtualBox
VM_NAME="oel6_test"
## The VMHome for VirtualBox, usually in $HOME
VM_HOME="/media/dwe/My Passport/vm/${VM_NAME}"
## The hard disk to use for the VM, can be anywhere but might
## use up to 60gb. We need plenty of space for hosting the
## the OEL 6.6 yum repository and any source files for the oracle
## database and gi installation zip files
VM_HARDDISK="/media/dwe/My Passport/vm/${VM_NAME}/oel_test.vdi"
BOOTIMAGEFILE="test.iso"
# we get the boot iso directly from the cobbler server
BOOTIMAGEURL="http://192.168.56.101/iso_store/${BOOTIMAGEFILE}"
################## CONFIGURATION SECTION END ##############################
################ NO NEED TO EDIT FROM HERE ON ############################
## Clean up everything before starting the setup
echo ".. Clean up everything before starting the setup"
vboxmanage unregistervm ${VM_NAME} --delete >> /dev/null 2>&1
vboxmanage closemedium disk "${VM_HARDDISK}" --delete >> /dev/null 2>&1
rm -rf "${VM_HOME}"   # note: a glob inside quotes ("${VM_HOME}/*") would not expand
mkdir -p "${VM_HOME}"
## Creating VirtualBox Machine
echo ".. Creating VirtualBox Machine"
vboxmanage createvm --name ${VM_NAME} --register --ostype Oracle_64
vboxmanage modifyvm ${VM_NAME} --boot1 disk
vboxmanage modifyvm ${VM_NAME} --boot2 dvd
## Creating Hard Disk for the Virtual Machine
echo "Creating Hard Disk for the Virtual Machine"
vboxmanage createhd --filename "${VM_HARDDISK}" --size 8192
## get the boot iso
cd "${VM_HOME}"
wget ${BOOTIMAGEURL}
## Creating a storage controller and attaching the Hard Disk to the VM
echo "Creating a storage controller and attaching the Hard Disk to the VM"
vboxmanage storagectl ${VM_NAME} --name ctl1 --add sata
vboxmanage storageattach ${VM_NAME} --storagectl ctl1 --type hdd --medium "${VM_HARDDISK}" --port 1
## Creating DVD Drive and attaching the ISO
echo "Creating DVD Drive and attaching the ISO"
vboxmanage storagectl ${VM_NAME} --name ctl2 --add ide
vboxmanage storageattach ${VM_NAME} --storagectl ctl2 --type dvddrive --port 1 --device 1 --medium "${VM_HOME}/${BOOTIMAGEFILE}"
## Setting VM parameters
echo "Setting VM parameters"
vboxmanage modifyvm ${VM_NAME} --memory 1024 --cpus 1 --nic1 hostonly --cableconnected1 on --hostonlyadapter1 vboxnet0
vboxmanage modifyvm ${VM_NAME} --audio none
vboxmanage modifyvm ${VM_NAME} --usb off
echo "Booting and installing VM"
vboxmanage startvm ${VM_NAME} --type sdl
Sit back, relax, and once the VM setup has completed, log in and check the users and groups:
id -a oracle
uid=54323(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54325(asmdba)
id -a grid
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54325(asmdba),54328(asmadmin)
Wouldn’t it be nice to have the Oracle installation source files already available on the newly created VMs? Well, that's pretty easy:
On the cobbler server create a directory to host the files:
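The exact commands did not survive in this copy of the post; as a sketch of the idea (directory and file names are assumptions — the OPatch zip is just an example source file): create a directory below the document root, drop the zip files there, and let the kickstart %post section fetch them via http:

```shell
# on the cobbler server
mkdir -p /var/www/html/oracle_source
cp p6880880_121010_Linux-x86-64.zip /var/www/html/oracle_source/

# in the %post section of the kickstart file (sketch):
# wget -P /opt/oracle_source http://192.168.56.101/oracle_source/p6880880_121010_Linux-x86-64.zip
```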
In the next post I'll set up a complete lab environment depending on the number of people who will attend the lab and the number of VMs they shall get.
Currently I am thinking about how to set up an Oracle lab environment with minimal effort while providing the same configuration and standards on all lab machines. There are several tools around which might help in solving this:
Docker is much in the press currently, but Docker is not designed for systems that use shared storage like Grid Infrastructure and RAC. This might be possible somehow, but it seemed too much of a hack to me. The same goes for Vagrant: as soon as you want shared storage, it gets difficult.
Having weighed the pros and cons, I decided to give cobbler a try. As I do not have a real server to play around with, the cobbler machine as well as the test clients will be VirtualBox machines using the host-only network adapter.
The script below will set up a VirtualBox VM using the vboxmanage command. Just copy it to your Linux workstation and adjust the variables at the top of the script (the script should be easily portable to Windows):
#!/bin/bash
################## CONFIGURATION SECTION START ##############################
## The name for the VM in VirtualBox
VM_NAME="oel6_staging"
## The VMHome for VirtualBox, usually in $HOME
VM_HOME="/media/dwe/My Passport/vm/${VM_NAME}"
## The hard disk to use for the VM, can be anywhere but might
## use up to 60gb. We need plenty of space for hosting the
## the OEL 6.6 yum repository and any source files for the oracle
## database and gi installation zip files
VM_HARDDISK="/media/dwe/My Passport/vm/${VM_NAME}/oel_staging.vdi"
## The path to the OEL 6.6 iso we are installing from
## The ISO can be downloaded from oracle edelivery at
## https://edelivery.oracle.com/linux
BOOTIMAGEFILE="$HOME/Downloads/OracleLinux-R6-U6-Server-x86_64-dvd.iso"
################## CONFIGURATION SECTION END ##############################
################ NO NEED TO EDIT FROM HERE ON ############################
## Clean up everything before starting the setup
echo ".. Clean up everything before starting the setup"
vboxmanage unregistervm ${VM_NAME} --delete >> /dev/null 2>&1
vboxmanage closemedium disk "${VM_HARDDISK}" --delete >> /dev/null 2>&1
rm -rf "${VM_HOME}"/*   # glob outside the quotes so it actually expands
mkdir -p "${VM_HOME}"
## Creating VirtualBox Machine
echo ".. Creating VirtualBox Machine"
vboxmanage createvm --name ${VM_NAME} --register --ostype Oracle_64
vboxmanage modifyvm ${VM_NAME} --boot1 disk
vboxmanage modifyvm ${VM_NAME} --boot2 dvd
## Creating Hard Disk for the Virtual Machine
echo "Creating Hard Disk for the Virtual Machine"
vboxmanage createhd --filename "${VM_HARDDISK}" --size 61440
## Creating a storage controller and attaching the Hard Disk to the VM
echo "Creating a storage controller and attaching the Hard Disk to the VM"
vboxmanage storagectl ${VM_NAME} --name ctl1 --add sata
vboxmanage storageattach ${VM_NAME} --storagectl ctl1 --type hdd --medium "${VM_HARDDISK}" --port 1
## Creating DVD Drive and attaching the ISO
echo "Creating DVD Drive and attaching the ISO"
vboxmanage storagectl ${VM_NAME} --name ctl2 --add ide
vboxmanage storageattach ${VM_NAME} --storagectl ctl2 --type dvddrive --port 1 --device 1 --medium "${BOOTIMAGEFILE}"
## Setting VM parameters
echo "Setting VM parameters"
vboxmanage modifyvm ${VM_NAME} --memory 1024 --cpus 1 --nic1 nat --cableconnected1 on
vboxmanage modifyvm ${VM_NAME} --natpf1 "guestssh,tcp,,2222,,22"
vboxmanage modifyvm ${VM_NAME} --audio none
vboxmanage modifyvm ${VM_NAME} --usb off
echo "Booting and installing VM"
vboxmanage startvm ${VM_NAME} --type sdl
Once you execute the script, the VM should start with the Oracle Linux ISO attached to it. Basically, just click through the installer using a minimal installation and you're done. For your reference, here is a pdf with some screenshots:
Note: the machine will use the NAT adapter for now as I need to install some packages from the internet. This would not be possible using the host-only adapter we will use later on.
As you can see from the last screenshot in the attached pdf, we only have the loopback adapter up and running. To enable the network, click into the VM console window and set the ONBOOT parameter to yes:
To avoid any issues with iptables and SELinux, turn them off and reboot the machine:
chkconfig iptables off
service iptables stop
Note: as of RHEL/CentOS/OEL 6.6 (or even earlier) it is not possible to turn off SELinux by editing the /etc/sysconfig/selinux file. To turn it off permanently we'll need to adjust the grub configuration. To do this, just add "enforcing=0" to each kernel line in /etc/grub.conf and reboot the machine:
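One way to script that change (a sketch; it assumes the EL6 grub.conf layout and only runs when the file is actually there):

```shell
# default to the EL6 location, but allow an override for testing
GRUB_CONF="${GRUB_CONF:-/etc/grub.conf}"
if [ -f "$GRUB_CONF" ]; then
  # append enforcing=0 to every kernel line so SELinux comes up permissive
  sed -i '/^[[:space:]]*kernel/ s/$/ enforcing=0/' "$GRUB_CONF"
fi
```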
So our base installation is ready. Let's get the latest updates and install all the packages we need for operating cobbler.
Note: If you are behind a proxy, now is the time to configure it so that yum can connect to the internet:
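A proxy for yum goes into /etc/yum.conf (the proxy host, port and credentials here are placeholders). After that, the packages can be installed; note that on EL6 cobbler comes from the EPEL repository, so the exact package list is an assumption:

```shell
# /etc/yum.conf
# proxy=http://proxy.example.com:8080
# proxy_username=myuser
# proxy_password=mypassword

yum -y update
yum -y install cobbler httpd dhcp xinetd tftp-server
```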
As everything we need is now installed and we do not need to connect to the internet anymore let’s switch the VM to the host only network.
Shutdown the machine:
shutdown -h 0
From your workstation adjust the VM configuration and start it up again:
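I.e., on the workstation switch the first NIC from NAT to host-only and boot the VM again (a sketch using the same vboxmanage flags as in the script above; vboxnet0 is VirtualBox's default host-only adapter name):

```shell
vboxmanage modifyvm oel6_staging --nic1 hostonly --hostonlyadapter1 vboxnet0
vboxmanage startvm oel6_staging --type sdl
```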
Connections from the workstation to the VM from now on are established like:
ssh root@192.168.56.101
To make sure this IP does not change, we switch the VM network configuration for eth0 from dhcp to static and restart the network:
# adjust the ifcfg-eth0 file
cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.56.101
# restart the network
service network restart
Time to make some basic cobbler configurations. The main cobbler configuration file is /etc/cobbler/settings.
For any new system that gets generated by cobbler there needs to be a root password. Let's create one and write it to the settings file (I'll use admin123):
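A sketch of how the hash could be generated and written to the settings file; cobbler reads it from the default_password_crypted key. The sed invocation is an assumption about the file layout, so double-check the result:

```shell
# generate an MD5-crypt hash for the root password "admin123"
HASH=$(openssl passwd -1 admin123)
echo "$HASH"

# write it to /etc/cobbler/settings, but only if the file is present
if [ -f /etc/cobbler/settings ]; then
  sed -i "s|^default_password_crypted:.*|default_password_crypted: \"${HASH}\"|" /etc/cobbler/settings
fi
```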
Now we are ready to enable and start the services:
chkconfig cobblerd on
chkconfig httpd on
service cobblerd start
service httpd start
Cobbler provides a check command which we now should use to see if we are fine with our configuration:
cobbler check
The following are potential configuration items that you may want to fix:
1 : SELinux is enabled. Please review the following wiki page for details on ensuring cobbler works correctly in your SELinux environment:
https://github.com/cobbler/cobbler/wiki/Selinux
2 : dhcpd is not installed
3 : some network boot-loaders are missing from /var/lib/cobbler/loaders, you may run 'cobbler get-loaders' to download them, or, if you only want to handle x86/x86_64 netbooting, you may ensure that you have installed a *recent* version of the syslinux package installed and can ignore this message entirely. Files in this directory, should you want to support all architectures, should include pxelinux.0, menu.c32, elilo.efi, and yaboot. The 'cobbler get-loaders' command is the easiest way to resolve these requirements.
4 : change 'disable' to 'no' in /etc/xinetd.d/rsync
5 : debmirror package is not installed, it will be required to manage debian deployments and repositories
6 : fencing tools were not found, and are required to use the (optional) power management features. install cman or fence-agents to use them
Restart cobblerd and then run 'cobbler sync' to apply changes.
We can safely ignore these messages for the moment.
Let’s do the first sync:
cobbler sync
...
task started: 2014-12-12_132415_sync
task started (id=Sync, time=Fri Dec 12 13:24:15 2014)
running pre-sync triggers
cleaning trees
removing: /var/lib/tftpboot/pxelinux.cfg/default
removing: /var/lib/tftpboot/grub/efidefault
removing: /var/lib/tftpboot/grub/images
removing: /var/lib/tftpboot/s390x/profile_list
copying bootloaders
copying: /usr/share/syslinux/pxelinux.0 -> /var/lib/tftpboot/pxelinux.0
copying: /usr/share/syslinux/menu.c32 -> /var/lib/tftpboot/menu.c32
copying: /usr/share/syslinux/memdisk -> /var/lib/tftpboot/memdisk
copying distros to tftpboot
copying images
generating PXE configuration files
generating PXE menu structure
rendering DHCP files
generating /etc/dhcp/dhcpd.conf
rendering TFTPD files
generating /etc/xinetd.d/tftp
cleaning link caches
running post-sync triggers
running python triggers from /var/lib/cobbler/triggers/sync/post/*
running python trigger cobbler.modules.sync_post_restart_services
running: dhcpd -t -q
received on stdout:
received on stderr:
running: service dhcpd restart
received on stdout: Starting dhcpd: [ OK ]
received on stderr:
running shell triggers from /var/lib/cobbler/triggers/sync/post/*
running python triggers from /var/lib/cobbler/triggers/change/*
running python trigger cobbler.modules.scm_track
running shell triggers from /var/lib/cobbler/triggers/change/*
Looks fine. Time to import our first distribution (which will be the same ISO we used for setting up the VM). Go back to your workstation and copy the OEL 6.6 ISO to the VM:
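The copy and import commands themselves are missing in this copy of the post; the usual sequence would be something like this (a sketch, with paths assumed from the setup script above):

```shell
# from the workstation
scp "$HOME/Downloads/OracleLinux-R6-U6-Server-x86_64-dvd.iso" root@192.168.56.101:/tmp/

# on the cobbler server: mount the ISO and import it
mount -o loop /tmp/OracleLinux-R6-U6-Server-x86_64-dvd.iso /mnt
cobbler import --name=oel66 --arch=x86_64 --path=/mnt
```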
cobbler distro list
oel66-x86_64
cobbler distro report --name=oel66-x86_64
Name : oel66-x86_64
Architecture : x86_64
TFTP Boot Files : {}
Breed : redhat
Comment :
Fetchable Files : {}
Initrd : /var/www/cobbler/ks_mirror/oel66-x86_64/images/pxeboot/initrd.img
Kernel : /var/www/cobbler/ks_mirror/oel66-x86_64/images/pxeboot/vmlinuz
Kernel Options : {}
Kernel Options (Post Install) : {}
Kickstart Metadata : {'tree': 'http://@@http_server@@/cblr/links/oel66-x86_64'}
Management Classes : []
OS Version : rhel6
Owners : ['admin']
Red Hat Management Key : <>
Red Hat Management Server : <>
Template Files : {}
Time to create a new system:
cobbler system add --name=test --profile=oel66-x86_64
cobbler system list
Let’s do some configuration for the newly created system:
cobbler system edit --name=test --interface=eth0 --ip-address=192.168.56.201 --netmask=255.255.255.0 --static=1 --dns-name=test.lab.ch
cobbler system edit --name=test --gateway=192.168.56.1 --hostname=test.lab.ch
cobbler system report test
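The step that actually builds the bootable ISO is not shown above; with cobbler it would be something along these lines (a sketch; the iso_store path matches the BOOTIMAGEURL used in the test script):

```shell
mkdir -p /var/www/html/iso_store
cobbler buildiso --iso=/var/www/html/iso_store/test.iso --systems=test
```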
Now we have an ISO we can boot from, and everything else is retrieved from the cobbler server. Let's try it on our workstation with the following script:
#!/bin/bash
################## CONFIGURATION SECTION START ##############################
## The name for the VM in VirtualBox
VM_NAME="oel6_test"
## The VMHome for VirtualBox, usually in $HOME
VM_HOME="/media/dwe/My Passport/vm/${VM_NAME}"
## The hard disk to use for the VM, can be anywhere but might
## use up to 60gb. We need plenty of space for hosting the
## the OEL 6.6 yum repository and any source files for the oracle
## database and gi installation zip files
VM_HARDDISK="/media/dwe/My Passport/vm/${VM_NAME}/oel_test.vdi"
BOOTIMAGEFILE="test.iso"
# we get the boot iso directly from the cobbler server
BOOTIMAGEURL="http://192.168.56.101/iso_store/${BOOTIMAGEFILE}"
################## CONFIGURATION SECTION END ##############################
################ NO NEED TO EDIT FROM HERE ON ############################
## Clean up everything before starting the setup
echo ".. Clean up everything before starting the setup"
vboxmanage unregistervm ${VM_NAME} --delete >> /dev/null 2>&1
vboxmanage closemedium disk "${VM_HARDDISK}" --delete >> /dev/null 2>&1
rm -rf "${VM_HOME}"
mkdir -p "${VM_HOME}"
## Creating VirtualBox Machine
echo ".. Creating VirtualBox Machine"
vboxmanage createvm --name ${VM_NAME} --register --ostype Oracle_64
vboxmanage modifyvm ${VM_NAME} --boot1 disk
vboxmanage modifyvm ${VM_NAME} --boot2 dvd
## Creating Hard Disk for the Virtual Machine
echo "Creating Hard Disk for the Virtual Machine"
vboxmanage createhd --filename "${VM_HARDDISK}" --size 8192
## get the boot iso
cd "${VM_HOME}"
wget ${BOOTIMAGEURL}
## Creating a storage controller and attaching the Hard Disk to the VM
echo "Creating a storage controller and attaching the Hard Disk to the VM"
vboxmanage storagectl ${VM_NAME} --name ctl1 --add sata
vboxmanage storageattach ${VM_NAME} --storagectl ctl1 --type hdd --medium "${VM_HARDDISK}" --port 1
## Creating DVD Drive and attaching the ISO
echo "Creating DVD Drive and attaching the ISO"
vboxmanage storagectl ${VM_NAME} --name ctl2 --add ide
vboxmanage storageattach ${VM_NAME} --storagectl ctl2 --type dvddrive --port 1 --device 1 --medium "${VM_HOME}/${BOOTIMAGEFILE}"
## Setting VM parameters
echo "Setting VM parameters"
vboxmanage modifyvm ${VM_NAME} --memory 1024 --cpus 1 --nic1 hostonly --cableconnected1 on
vboxmanage modifyvm ${VM_NAME} --audio none
vboxmanage modifyvm ${VM_NAME} --usb off
echo "Booting and installing VM"
vboxmanage startvm ${VM_NAME} --type sdl
Choose “test” in the boot screen, sit back, and watch the VM get installed :)
Time to close this first post on the topic. In the next part I'll show how to create a custom cobbler profile, attach a custom kickstart file to it, and fire up a VM that has all the Oracle-required OS packages and users already installed and configured.
Metric evaluation error start – EMS-01076: Unexpected error while deploying a plug-in in sync mode for emd url https://hostname:3872/emd/main/: EMS-01015: Error occurred while deploying AGENT plug-in.
If you get the above error while trying to add targets to Cloud Control, check the mount options for the /tmp filesystem. If the “noexec” attribute is set, this is probably the issue.
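To check and (temporarily) fix this, a sketch:

```shell
# show the current mount options for /tmp
mount | grep /tmp
# if "noexec" shows up, remount with exec until the next reboot
mount -o remount,exec /tmp
# for a permanent fix, remove noexec from the /tmp line in /etc/fstab
```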