Oracle Clusterware
There are a number of Linux operating systems that support the Oracle Database, and they are all excellent in their own right. I have always favored Oracle Linux, and over the years it has become easier and easier to install and configure complex database configurations on it with minimal effort. Oracle Linux is available with different levels of support, and these levels include a number of extra options at no additional cost. If you pay for Oracle Linux Basic or Premier support, for example, you get Oracle Clusterware - great software to have at your disposal simply because you have a paid Oracle Linux support agreement. Nice!
Taking this into account, let's think a little outside the box and get more out of your investment. Many companies nowadays are looking for more highly available solutions, and with Oracle Linux and Oracle Clusterware you have something extra in your arsenal to help you achieve this. You can use these two components to build a cluster configuration where shared storage supports an Active/Passive setup for your application to run on... but why not use this for your Oracle Standard Edition 2 (SE2) database as well? That is what I would like to explore in this blog series: how to configure a two-node Oracle Linux 7 cluster using shared storage and Oracle Clusterware, then install the SE2 database software and create an Active/Passive configuration where the database instance runs on only one node at a time.
It is, however, important to note that I am not talking about Oracle RAC or Oracle RAC One Node here. These are amazing technologies, but from 19c onwards they require Oracle Enterprise Edition, as Oracle RAC is no longer available as an option for SE2. In this blog series, I am specifically going to show you how to run a two-node cluster for your SE2 database in an Active/Passive configuration.
Without getting into the details of the licensing: when you run Oracle SE2, you are allowed a maximum of two CPU sockets per server. In this scenario we will use two servers, each with two CPU sockets, but remember that the database instance will only run on one of the servers in the cluster at any given point in time (an "active/passive" configuration). Based on the licensing of data recovery environments as discussed in http://www.oracle.com/us/corporate/pricing/data-recovery-licensing-070587.pdf, you are able to run the database instance on the second node in the cluster for a maximum of 10 days per year. As always, please check your licensing with your Oracle account manager.
Now to add a little extra to the configuration - as I think everyone should make sure they protect their databases against disaster - the last part of the series will show you how to configure Dbvisit Standby in this setup, running as a cluster resource, to help protect your primary database in case disaster strikes.
In the examples, I will use Oracle Clusterware 19c and an Oracle 19c SE2 database, configured in an Active/Passive cluster where the single-instance SE2 database runs on primary node 1; in case of failure on that node, the database is failed over "as a cluster resource" to the second node in the cluster. This can be summarised in the following high-level diagram:
The blog series will cover the following topics:
Configuring the Oracle Linux nodes using Oracle Linux 7
Installing and configuring Oracle Clusterware - version 19c
Configuring Oracle ACFS (cluster filesystem)
Installing the Oracle Database software - 19c SE2
Creating a single instance database and adding it as a cluster resource
Creating a Virtual IP that will be running on the “Active” node
Installing Dbvisit Standby version 9 and adding it as cluster resources
Creating a Standby Database on a 3rd server running Oracle Linux 7 with Oracle SE2 Database software installed
Now before I get into the details, there are a few key requirements for this type of configuration that I need to mention:
Two Oracle Linux 7 (7.6) servers with latest updates applied
Shared Storage that is presented to both nodes
Two network interfaces in each node
A DNS server
You ideally want to use a DNS server to hold all the various IP addresses that are used by the cluster; in my case, I just created a small BIND DNS server.
In my test configuration, I will use the following IP ranges and these values are in my DNS server, but I also added them to my local /etc/hosts file.
10.0.2.21 oel7clus1.oraclekiwi.co.nz oel7clus1 #Primary Node 1
10.0.2.22 oel7clus2.oraclekiwi.co.nz oel7clus2 #Primary Node 2
10.0.2.23 oel7clus1-vip.oraclekiwi.co.nz oel7clus1-vip
10.0.2.24 oel7clus2-vip.oraclekiwi.co.nz oel7clus2-vip
# 10.0.2.25 oel7clus-scan.oraclekiwi.co.nz oel7clus-scan ## The SCAN details in DNS
# 10.0.2.26 oel7clus-scan.oraclekiwi.co.nz oel7clus-scan
# 10.0.2.27 oel7clus-scan.oraclekiwi.co.nz oel7clus-scan
10.0.2.28 oel7clus-dbvip.oraclekiwi.co.nz oel7clus-dbvip # Cluster IP for Dbvisit
10.0.2.29 oel7fs1.oraclekiwi.co.nz oel7fs1 # Standby Server
Step 1: Oracle Linux preparation
Now in my configuration, I used Oracle Linux 7 update 6. I will not get into the details of installing Linux, but it really is easy and in most cases, you can just accept the defaults.
I do want to highlight that I used a 60GB disk and allocated 40GB of it to the /u01 filesystem, mainly because I use /u01 a lot for various things and this is where Oracle Clusterware will run from.
I also make sure I assign enough memory and CPU to the Virtual Machines. In the VirtualBox settings I allocated 8GB of memory.
You can play with the CPU settings, but this depends on your hardware; for example, if you have 1 CPU with 4 cores and hyper-threading enabled, you will have 8 virtual CPUs available.
For testing, I would recommend setting the value to at least 2 on each Virtual Machine. For reference, my disk layout after installation looked like the following:
root@oel7clus1 ~ : df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 0 3.8G 0% /dev
tmpfs 3.8G 0 3.8G 0% /dev/shm
tmpfs 3.8G 89M 3.7G 3% /run
tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
/dev/sda3 6.3G 3.6G 2.4G 61% /
tmpfs 4.0G 4.0K 4.0G 1% /tmp
/dev/sda1 488M 279M 174M 62% /boot
/dev/mapper/oravg-u01 40G 49M 38G 1% /u01
tmpfs 771M 0 771M 0% /run/user/0
tmpfs 771M 0 771M 0% /run/user/1000
I always prefer to use role separation when installing the Oracle Clusterware and Database software, so directly after installing my Virtual Machines I add the "oracle" and "grid" users. This can easily be done with the following commands:
groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 oper
groupadd -g 504 asmadmin
groupadd -g 505 asmdba
groupadd -g 506 asmoper
groupadd -g 507 bkpdba
groupadd -g 508 dgdba
groupadd -g 509 kmdba
groupadd -g 510 racdba
useradd -m -g oinstall -G oinstall,dba,oper,asmadmin,asmdba,bkpdba,dgdba,kmdba -u 501 oracle
useradd -m -g oinstall -G oinstall,dba,asmadmin,asmdba,asmoper -u 502 grid
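A quick sanity check after creating the accounts - a small loop, assuming the group layout above, that confirms both users exist with oinstall as their primary group:

```shell
# Verify the role-separated accounts on each node (run as root):
# both users should exist with oinstall as their primary group.
for u in oracle grid; do
  if [ "$(id -gn "$u" 2>/dev/null)" = "oinstall" ]; then
    echo "$u: OK (primary group oinstall)"
  else
    echo "$u: missing or wrong primary group" >&2
  fi
done
```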
There are a few more configuration options required, but first, let's update all required software on the cluster nodes.
You should follow the recommendation of the Oracle Clusterware and Oracle Database installation documentation to make sure you have all the required operating system packages installed.
In this case, I took the easier route and, based on past experience, installed the following via the yum package manager.
Please note that this list includes a little extra, mainly because I tend to use some of these additional tools - why not just do it all at the same time and keep it simple?
So here is my list of options:
yum -y --enablerepo=ol7_latest,ol7_optional_latest,ol7_developer_EPEL install dkms ocfs2-tools-devel ocfs2-tools reflink btrfs-progs btrfs-progs-devel oracleasm-support kmod-oracleasm git rpmbuild patch bc binutils bzip2 bzip2-devel dos2unix gcc gcc-c++ glibc-static gzip libaio libaio-devel libgcc libstdc++ libstdc++-devel lsof make man man-pages net-tools nss-devel perl strace subversion sudo sysstat tar tree unix2dos unzip wget which zip zlib zlib-devel zlib-static gettext tcl tcl-devel openssh openssh-server xclock xhost psmisc xorg-x11-fonts* xorg-x11-server-Xvfb.x86_64 xterm libvncserver xorg-x11-server-devel avahi-libs tk libXtst gtk3 mesa-dri-drivers xorg-x11-utils screen bind-libs bind-utils compat-libcap1 compat-libstdc++-33 e2fsprogs-libs ethtool gssproxy keyutils ksh libaio-devel libbasicobjects libcollection libevent libini_config libnfsidmap libpath_utils libref_array libtirpc libverto-libevent mailx nfs-utils quota quota-nls rpcbind smartmontools tcp_wrappers gdb patch libnfs-utils nfs-utils dejavu*
Now that I have the above installed on each node, I make sure all is up to date by running “yum update” and then perform a reboot on each system.
The next steps I perform are listed below so that you can just copy and paste them - but please NOTE: test this first and do not just run it on production without making sure you understand what it does.
As I am just creating a TEST configuration, I will assign a basic password to the oracle and grid user as well as add “sudo” permission without requiring a password.
Also please note some of these values will need review for production environments and they are discussed in more detail in the online Oracle Database and Grid Infrastructure installation guides for Linux, so please make sure you review these prior to any production configurations.
cat >> /etc/pam.d/login <<EOF
session required pam_limits.so
EOF
echo 'oracle:Kiwi123' | chpasswd
echo 'grid:Kiwi123' | chpasswd
echo "oracle ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/oracle
echo "grid ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/grid
sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers
cat >> /etc/sysctl.conf <<EOF
fs.aio-max-nr = 3145728
fs.file-max = 6815744
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
kernel.msgmax = 8192
kernel.msgmnb = 65536
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
EOF
cat >> /etc/security/limits.conf <<EOF
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 4096
grid hard nofile 65536
oracle soft nofile 131072
oracle hard nofile 131072
oracle soft nproc 131072
oracle hard nproc 131072
oracle soft core unlimited
oracle hard core unlimited
oracle soft memlock 50000000
oracle hard memlock 50000000
EOF
cat >> /etc/profile <<EOF
if [ "\$USER" = "oracle" ] || [ "\$USER" = "grid" ]; then
if [ "\$SHELL" = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
ulimit -s 10240
else
ulimit -u 16384 -n 65536 -s 10240
fi
umask 022
fi
EDITOR=vi;
export EDITOR
PS1="\u@\h\w$";
PS2="\u@ \h --> ";
export PS1
export PS2
NLS_DATE_FORMAT="dd/mm/yyyy:hh24:mi:ss"
export NLS_DATE_FORMAT
EOF
cat >> /etc/sysconfig/network <<EOF
NOZEROCONF=yes
EOF
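After applying the sysctl.conf additions, you can load the new kernel parameters without a reboot by running "sysctl -p" as root, and then spot-check a few values straight from /proc, for example:

```shell
# Spot-check a couple of the kernel parameters straight from /proc
# to confirm they took effect after running "sysctl -p" as root.
for p in fs/file-max kernel/shmmni net/ipv4/ip_local_port_range; do
  printf '%s = %s\n' "${p//\//.}" "$(cat /proc/sys/$p)"
done
```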
The last two things I do on the two primary cluster nodes are to update their /etc/hosts with the following (now remember, I do have the DNS configured as well and my /etc/resolv.conf does point to my DNS server):
10.0.2.21 oel7clus1.oraclekiwi.co.nz oel7clus1 #Primary Node 1
10.0.2.22 oel7clus2.oraclekiwi.co.nz oel7clus2 #Primary Node 2
10.0.2.23 oel7clus1-vip.oraclekiwi.co.nz oel7clus1-vip
10.0.2.24 oel7clus2-vip.oraclekiwi.co.nz oel7clus2-vip
# 10.0.2.25 oel7clus-scan.oraclekiwi.co.nz oel7clus-scan ## The SCAN details in DNS
# 10.0.2.26 oel7clus-scan.oraclekiwi.co.nz oel7clus-scan
# 10.0.2.27 oel7clus-scan.oraclekiwi.co.nz oel7clus-scan
10.0.2.28 oel7clus-dbvip.oraclekiwi.co.nz oel7clus-dbvip # Cluster IP for Dbvisit
10.0.2.29 oel7fs1.oraclekiwi.co.nz oel7fs1 # Standby Server
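Before moving on, it is worth confirming that every cluster name resolves on both nodes. A quick check using getent, which consults /etc/hosts and DNS the same way the installer will (the hostnames are the ones listed above; the SCAN name should resolve from DNS):

```shell
# Check that every name the cluster needs resolves on this node.
# getent consults /etc/hosts and DNS in the configured order.
for h in oel7clus1 oel7clus2 oel7clus1-vip oel7clus2-vip \
         oel7clus-scan oel7clus-dbvip oel7fs1; do
  if getent hosts "$h" > /dev/null; then
    echo "$h resolves: OK"
  else
    echo "$h does NOT resolve" >&2
  fi
done
```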
My primary hosts are called oel7clus1 and oel7clus2, and I am just using my test domain oraclekiwi.co.nz. A crucial requirement for running Clusterware is that you must make sure you have SSH user equivalence configured.
In short, make sure that the oracle and grid user on both nodes can ssh between the nodes without any password or passphrase being requested. This is a standard requirement and really easy to do.
You can run the following commands on both nodes, first as the grid user and then as the oracle user - you will be prompted for confirmation and the user password while doing this. Remember: do not add a passphrase when prompted, just press enter.
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub oel7clus1
ssh-copy-id -i ~/.ssh/id_rsa.pub oel7clus2
Once you have done the above, you should be able to SSH between the two systems without any passwords, for example:
grid@oel7clus1 ~ : ssh oel7clus1
Last login: Sun Sep 1 14:28:22 2019
grid@oel7clus1 ~ : ssh oel7clus2
Last login: Thu Aug 22 13:58:37 2019
grid@oel7clus2 ~ : exit
logout
Connection to oel7clus2 closed.
grid@oel7clus1 ~ : exit
logout
Connection to oel7clus1 closed.
grid@oel7clus1 ~ :
Step 2: Configure Disks
Before we start the Oracle Clusterware installation, we need to make sure that we have the required shared disks for the cluster.
In this test setup, I will make use of ASM and the ACFS filesystem, but before we get to that I need to add the disks. In this configuration, I added two disks:
One 3GB disk - this will host the OCR and will also be the Voting Disk for the cluster.
One 30GB disk - this will be configured as an ACFS filesystem to host my Database software and database files.
As mentioned before, I use VirtualBox in this example. I add the above-mentioned disks to node 1 (make sure you use fixed-size, not dynamically allocated), set them to shareable in the VirtualBox Virtual Media Manager, and then attach them to the second node (when adding them, choose to use an existing disk and select the two disks already allocated to node 1 - but remember, they must be set to shareable first).
Now the two disks are seen as "/dev/sdb" and "/dev/sdc" in my configuration. You can easily get the details by running "fdisk -l".
Then from node 1 in the cluster, I run the following commands to partition the disk for use:
fdisk /dev/sdb
o
n
p
1
< press enter
< press enter
w
Example:
root@oel7clus1 ~ : fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x44bf62a8.
Command (m for help): o
Building a new DOS disklabel with disk identifier 0x1bc47716.
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-6291455, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-6291455, default 6291455):
Using default value 6291455
Partition 1 of type Linux and of size 3 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
root@oel7clus1 ~ :
I repeat the same for the second disk - /dev/sdc
I then end up with the partitions /dev/sdb1 and /dev/sdc1 which I will now use to create ASM disks from.
There are a few different options available to do this and everyone has their preference.
I like to use ASMLIB as I find it really easy to work with, but there are other options available to you which you can review.
But in this example, I will use ASMLIB. Let's first run "oracleasm configure -i" as the root user to configure it to run as the "grid" user with the correct group and to start on system boot:
Example:
root@oel7clus1 ~ : oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface [grid]: grid
Default group to own the driver interface [asmadmin]: asmadmin
Start Oracle ASM library driver on boot (y/n) [y]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
root@oel7clus1 ~ : systemctl restart oracleasm
Do make sure to do this on both the cluster nodes.
Next on node 1, let's mark the two disks we added as Oracle ASM disks.
As the “root” user run the following commands:
root@oel7clus1 ~ : oracleasm createdisk ASMDISK1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
root@oel7clus1 ~ : oracleasm createdisk ASMDISK2 /dev/sdc1
Writing disk header: done
Instantiating disk: done
root@oel7clus1 ~ : oracleasm listdisks
ASMDISK1
ASMDISK2
On the second node, run "oracleasm scandisks" followed by "oracleasm listdisks" to confirm the disks are visible there as well. Once that is done, you can use the oracleasm-discover command to see your disks:
root@oel7clus1 ~ : oracleasm-discover
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.12 (KABI_V2)]
Discovered disk: ORCL:ASMDISK1 [6289408 blocks (3220176896 bytes), maxio 1024, integrity none]
Discovered disk: ORCL:ASMDISK2 [62912512 blocks (32211206144 bytes), maxio 1024, integrity none]
The above steps might seem daunting and a lot to do, but it is not that difficult and once you have done this once or twice it becomes really easy and things start making more sense.
We are ready to start the installation of Oracle Clusterware.
Step 3: Installing Oracle Clusterware
Now we are getting to the fun stuff. Make sure you have completed the prerequisite steps, that your networking and DNS are configured, and that the grid Unix account can ssh between the nodes without a password or passphrase.
On both nodes, let's pre-create a few directories; this is where we will install Oracle Clusterware.
As the root user:
cd /u01
mkdir -p app
cd app
mkdir grid 19.3.0 oracle
cd /u01
chown -R grid:oinstall app
chmod -R 775 app
cd app
chown -R oracle:oinstall oracle
In summary under the /u01 folder (mount point) on both systems you should have the following directory structure:
grid@oel7clus1 /u01 : tree app
app
├── 19.3.0
├── grid
└── oracle
3 directories, 0 files
Now, as the “grid” Unix user, on only node 1 (oel7clus1) extract the Grid Infrastructure software into the /u01/app/19.3.0 folder using the “unzip -q” command.
In my example, I have the software available as zip files in /install
grid@oel7clus1 /u01/app/19.3.0 : unzip -q /install/LINUX.X64_193000_grid_home.zip
grid@oel7clus1 /u01/app/19.3.0 :
grid@oel7clus1 /u01/app/19.3.0 : ls
addnode cv env.ora instantclient lib ord precomp root.sh sqlpatch utl
assistants dbjava evm inventory md ords QOpatch root.sh.old sqlplus welcome.html
bin dbs gpnp javavm network oss qos root.sh.old.1 srvm wlm
cha deinstall gridSetup.sh jdbc nls oui racg rootupgrade.sh suptools wwg
clone demo has jdk OPatch owm rdbms runcluvfy.sh tomcat xag
crs diagnostics hs jlib opmn perl relnotes sdk ucp xdk
css dmu install ldap oracore plsql rhp slax usm
grid@oel7clus1 /u01/app/19.3.0 :
Tip:
Make sure you install the cvuqdisk package located in the cv/rpm subfolder on both cluster nodes. As sudo is allowed for the grid user, you can just use the following commands to do this:
grid@oel7clus1 /u01/app/19.3.0 : cd cv/rpm/
grid@oel7clus1 /u01/app/19.3.0/cv/rpm : ls
cvuqdisk-1.0.10-1.rpm
grid@oel7clus1 /u01/app/19.3.0/cv/rpm : sudo rpm -ivh cvuqdisk-1.0.10-1.rpm
Preparing... ################################# [100%]
package cvuqdisk-1.0.10-1.x86_64 is already installed
Now in my configuration, I had already installed the package on both nodes, but you might have to install it on both of yours. You can "scp" the file from the primary node, where the software is extracted, to /tmp on the second node and install it from there. Do not extract the full Grid install (zip) file on the second node - the software will be copied and configured there during the installer process, which we will run next.
Starting the Installer:
To start the installer, as the “grid” user, navigate to the ORACLE_HOME location for Clusterware which is /u01/app/19.3.0 in this example. Then run “./gridSetup.sh”
The steps and screenshots below will take you through the configuration of Oracle Clusterware on a standalone cluster.
grid@oel7clus1 /u01/app/19.3.0 : ./gridSetup.sh
Launching Oracle Grid Infrastructure Setup Wizard...
Now we are getting to the step where you need to supply the cluster name as well as the SCAN name and port - remember, these have to be in the DNS server. For more detail on SCAN and SCAN listeners, please see the Oracle documentation, as this is a big topic in itself.
Next, we get the screen where we need to supply the details of the second node in the cluster - its public hostname as well as its virtual IP/hostname. Once again, these should be in your DNS server and resolvable.
Next, we get the option not to use the Grid Infrastructure Management Repository. As this is a test configuration, I do not need it and will leave it out, which saves a little on resources; for production configurations you will most likely enable it, as it brings a number of benefits, as listed on the screen.
Now we get to the Disk Groups. I give the first Disk Group the name OCR_VOT (just a name I chose to indicate that this Disk Group holds the OCR and Voting disks). It is a small disk, and I set the redundancy to External - you might want to review the Oracle documentation on the redundancy levels; note that depending on the redundancy level, you may require twice the disk space.
It might be that you do not see any disks to select (marked 5 below). If that is the case, select "Change Discovery Path" (marked 3 below) and set the discovery path to ORCL:* - or whatever the path might be on your system. As I am using ASMLIB in this example, this path will pick up the provisioned disks.
I then select only the first, smaller disk and click Next to continue.
Next, you need to specify the ASM instance passwords. Make sure you use complex passwords, and depending on your configuration you may want different passwords for the SYS and ASMSNMP accounts. In this example, I will use the same password for both.
Next, I choose to execute the configuration scripts automatically - these run as the root user, so you will need to specify the root password, or if you have "sudo" configured, you can use that instead.
In my configuration, a few checks failed. After review, I found that my ASM devices were in fact correct despite being flagged, and the resolv.conf file pointing to my DNS configuration was also correct. The memory and swap space checks flag recommended values; I was fine with the values on my test system, so I selected "Ignore All" and continued. For production configurations, you do want to make sure you meet the requirements.
After a while, you will be prompted to confirm you still want to run the configuration scripts as the root user.
I do find this strange, as I already agreed to this a few screens earlier, but all you have to do is click Yes and the installer continues. Again, note that this process can take time.
Excellent - installation complete, and I can quickly verify this by seeing the ASM instances running on both my cluster nodes.
I can use the “crsctl status resource” command to get full details on my cluster resources, for example:
grid@oel7clus1 /u01/app/19.3.0 : crsctl status resource -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE oel7clus1 STABLE
ONLINE ONLINE oel7clus2 STABLE
ora.chad
ONLINE ONLINE oel7clus1 STABLE
ONLINE ONLINE oel7clus2 STABLE
ora.net1.network
ONLINE ONLINE oel7clus1 STABLE
ONLINE ONLINE oel7clus2 STABLE
ora.ons
ONLINE ONLINE oel7clus1 STABLE
ONLINE ONLINE oel7clus2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1 ONLINE ONLINE oel7clus1 STABLE
2 ONLINE ONLINE oel7clus2 STABLE
3 OFFLINE OFFLINE STABLE
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE oel7clus2 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE oel7clus1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE oel7clus1 STABLE
ora.OCR_VOT.dg(ora.asmgroup)
1 ONLINE ONLINE oel7clus1 STABLE
2 ONLINE ONLINE oel7clus2 STABLE
3 OFFLINE OFFLINE STABLE
ora.asm(ora.asmgroup)
1 ONLINE ONLINE oel7clus1 Started,STABLE
2 ONLINE ONLINE oel7clus2 Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
1 ONLINE ONLINE oel7clus1 STABLE
2 ONLINE ONLINE oel7clus2 STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE oel7clus1 STABLE
ora.oel7clus1.vip
1 ONLINE ONLINE oel7clus1 STABLE
ora.oel7clus2.vip
1 ONLINE ONLINE oel7clus2 STABLE
ora.qosmserver
1 ONLINE ONLINE oel7clus1 STABLE
ora.scan1.vip
1 ONLINE ONLINE oel7clus2 STABLE
ora.scan2.vip
1 ONLINE ONLINE oel7clus1 STABLE
ora.scan3.vip
1 ONLINE ONLINE oel7clus1 STABLE
--------------------------------------------------------------------------------
grid@oel7clus1 /u01/app/19.3.0 :
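To make day-to-day work as the grid user easier, you can set its environment in ~/.bash_profile on each node. A sketch assuming the layout used in this series (the Oracle base matches the oraenv output shown later in this post; use +ASM2 on node 2):

```shell
# Grid user environment for this cluster layout (a sketch).
# ORACLE_SID is +ASM1 on node 1 and +ASM2 on node 2.
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19.3.0
export ORACLE_SID=+ASM1
export PATH=$ORACLE_HOME/bin:$PATH
```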
Now that we have Oracle Clusterware installed, we need one more thing before we can use ACFS: we need to apply the latest patch update to bring the Clusterware software up to date and to make sure the Linux kernel is supported for running the ACFS filesystem. For more detail on ACFS and the patches required to match kernel versions, please see Oracle Support. As I am using one of the latest Oracle Linux 7 updates, I do need to apply patches to allow Oracle Clusterware to use ACFS. Do not be scared - this is not a difficult process, and in the next step I will quickly take you through it.
But before I do this, a quick way to see whether you can create ACFS filesystems is to run the "asmca" utility - and as you can see, in my configuration the ACFS and ADVM options are not available at all, so let's first install the patch.
Step 4: Apply Patch to Oracle Clusterware (if required)
As mentioned, because I am using the latest Oracle Linux 7 with all its updates, I will need to apply the latest Oracle Grid Infrastructure patch to make sure I can use ACFS - which I would like to show you in this example.
Now the patch I will be applying is patch number: 29708769
Please see Oracle support for more detail on this patch.
I have downloaded the patch, and it is shared via an NFS folder on both my nodes. After reviewing the patch readme and running the required patch checks, I applied the patch first to node 1 and then, once completed, to node 2 with the following commands:
root@oel7clus1 ~ : . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid
root@oel7clus1 ~ : ls /u01/app
19.3.0 grid oracle oraInventory
root@oel7clus1 ~ : export PATH=/u01/app/19.3.0/OPatch:$PATH
root@oel7clus1 ~ : which opatch
/u01/app/19.3.0/OPatch/opatch
root@oel7clus1 ~ : cd
root@oel7clus1 ~ : opatchauto apply /patch/patches/29708769 -oh /u01/app/19.3.0
OPatchauto session is initiated at Sun Sep 1 16:14:04 2019
System initialization log file is /u01/app/19.3.0/cfgtoollogs/opatchautodb/systemconfig2019-09-01_04-14-07PM.log.
Session log file is /u01/app/19.3.0/cfgtoollogs/opatchauto/opatchauto2019-09-01_04-14-16PM.log
The id for this session is QTRD
Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0
Patch applicability verified successfully on home /u01/app/19.3.0
Bringing down CRS service on home /u01/app/19.3.0
CRS service brought down successfully on home /u01/app/19.3.0
Start applying binary patch on home /u01/app/19.3.0
Binary patch applied successfully on home /u01/app/19.3.0
Starting CRS service on home /u01/app/19.3.0
CRS service started successfully on home /u01/app/19.3.0
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:oel7clus1
CRS Home:/u01/app/19.3.0
Version:19.0.0.0.0
Summary:
==Following patches were SKIPPED:
Patch: /patch/patches/29708769/29401763
Reason: This patch is already been applied, so not going to apply again.
==Following patches were SUCCESSFULLY applied:
Patch: /patch/patches/29708769/29834717
Log: /u01/app/19.3.0/cfgtoollogs/opatchauto/core/opatch/opatch2019-09-01_16-17-15PM_1.log
Patch: /patch/patches/29708769/29850993
Log: /u01/app/19.3.0/cfgtoollogs/opatchauto/core/opatch/opatch2019-09-01_16-17-15PM_1.log
Patch: /patch/patches/29708769/29851014
Log: /u01/app/19.3.0/cfgtoollogs/opatchauto/core/opatch/opatch2019-09-01_16-17-15PM_1.log
OPatchauto session completed at Sun Sep 1 16:27:04 2019
Time taken to complete the session 13 minutes, 1 second
root@oel7clus1 ~ :
Then follow the same steps on the second node (you can confirm the applied patches at any time with "opatch lspatches"):
root@oel7clus2 /patch : . oraenv
ORACLE_SID = [root] ? +ASM2
The Oracle base has been set to /u01/app/grid
root@oel7clus2 /patch : export PATH=/u01/app/19.3.0/OPatch:$PATH
root@oel7clus2 /patch : which opatch
/u01/app/19.3.0/OPatch/opatch
root@oel7clus2 /patch : opatchauto apply /patch/patches/29708769 -oh /u01/app/19.3.0
OPatchauto session is initiated at Sun Sep 1 17:05:18 2019
System initialization log file is /u01/app/19.3.0/cfgtoollogs/opatchautodb/systemconfig2019-09-01_05-05-21PM.log.
Session log file is /u01/app/19.3.0/cfgtoollogs/opatchauto/opatchauto2019-09-01_05-05-31PM.log
The id for this session is 6418
Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0
Patch applicability verified successfully on home /u01/app/19.3.0
Bringing down CRS service on home /u01/app/19.3.0
CRS service brought down successfully on home /u01/app/19.3.0
Start applying binary patch on home /u01/app/19.3.0
Binary patch applied successfully on home /u01/app/19.3.0
Starting CRS service on home /u01/app/19.3.0
CRS service started successfully on home /u01/app/19.3.0
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:oel7clus2
CRS Home:/u01/app/19.3.0
Version:19.0.0.0.0
Summary:
==Following patches were SKIPPED:
Patch: /patch/patches/29708769/29401763
Reason: This patch is already been applied, so not going to apply again.
==Following patches were SUCCESSFULLY applied:
Patch: /patch/patches/29708769/29834717
Log: /u01/app/19.3.0/cfgtoollogs/opatchauto/core/opatch/opatch2019-09-01_17-08-19PM_1.log
Patch: /patch/patches/29708769/29850993
Log: /u01/app/19.3.0/cfgtoollogs/opatchauto/core/opatch/opatch2019-09-01_17-08-19PM_1.log
Patch: /patch/patches/29708769/29851014
Log: /u01/app/19.3.0/cfgtoollogs/opatchauto/core/opatch/opatch2019-09-01_17-08-19PM_1.log
OPatchauto session completed at Sun Sep 1 17:19:08 2019
Time taken to complete the session 13 minutes, 50 seconds
root@oel7clus2 /patch :
Step 5: Adding the DATA Diskgroup and ACFS Filesystem
Now that patching is complete, we can run the "asmca" utility again to see if the ADVM and ACFS options are now available.
On the first node, I will start “asmca” as the grid user, the following screen will show:
As you can see, you now get options to configure Volumes and ACFS filesystems.
Before we get into that, however, we first need to create a new ASM diskgroup using the 30GB disk that is shared between the two nodes.
To do this, navigate to Disk Groups and click on “Create”.
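As an alternative to the asmca GUI, the diskgroup, ADVM volume, and ACFS filesystem can also be created from the command line. The sketch below is only an outline: it assumes /dev/sdc1 is the shared 30GB disk (substitute your own device path), and it reuses the volume name and mount point shown later in this post - verify each step in your own environment before running it.

```shell
# As the grid user, with the Grid Infrastructure environment set (+ASM SID).
# ASSUMPTION: /dev/sdc1 is the shared 30GB disk - substitute your device path.

# 1. Create the DATA diskgroup. compatible.advm must be raised, or
#    volume creation will fail with the default (10.1) compatibility.
sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/sdc1'
  ATTRIBUTE 'compatible.asm'='19.0', 'compatible.advm'='19.0';
EOF

# 2. Create a 25GB ADVM volume in the DATA diskgroup and find its device name
asmcmd volcreate -G DATA -s 25G acfsvol1
asmcmd volinfo -G DATA acfsvol1      # shows e.g. /dev/asm/acfsvol1-145

# 3. As root: format the volume with ACFS, then register and start it as a
#    Clusterware resource so it can be mounted on both nodes
mkfs -t acfs /dev/asm/acfsvol1-145
mkdir -p /u02
srvctl add filesystem -device /dev/asm/acfsvol1-145 -path /u02 -user oracle
srvctl start filesystem -device /dev/asm/acfsvol1-145
```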
Once the diskgroup, volume, and ACFS filesystem have been created, looking at the two hosts I can see the shared storage mounted at /u02:
grid@oel7clus1 ~ : df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 0 3.8G 0% /dev
tmpfs 3.8G 845M 3.0G 22% /dev/shm
tmpfs 3.8G 257M 3.6G 7% /run
tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
/dev/sda3 6.3G 3.7G 2.3G 63% /
tmpfs 4.0G 31M 4.0G 1% /tmp
/dev/sda1 488M 279M 174M 62% /boot
/dev/mapper/oravg-u01 40G 10G 28G 27% /u01
tmpfs 771M 0 771M 0% /run/user/0
/dev/sdd1 20G 7.3G 12G 40% /install
tmpfs 771M 0 771M 0% /run/user/502
/dev/asm/acfsvol1-145 25G 600M 25G 3% /u02
Step 6: Installing the Oracle Database SE2 Software
Now we get to the part where we will install the Oracle Database SE2 software into the shared storage /u02 location.
It is possible to install the database software on local disk on both cluster nodes - for example, have the exact same install in /u01/app/oracle/product/19.3.0/dbhome_1 on each node - or you can make use of a shared Oracle Home and install the Oracle Database software on the shared disk, meaning under /u02.
In this example I will use the second option and install the Oracle Database software in the /u02 filesystem.
As the “oracle” Unix account, first create the path for the Oracle Database 19c SE2 software home, then extract the downloaded install file into this location, for example:
oracle@oel7clus1 ~ : cd /u02
oracle@oel7clus1 /u02 : ls
lost+found
oracle@oel7clus1 /u02 : mkdir -p app/oracle/product/19.3.0/dbhome_1
oracle@oel7clus1 /u02 : cd app/oracle/product/19.3.0/dbhome_1/
oracle@oel7clus1 /u02/app/oracle/product/19.3.0/dbhome_1 : ls
oracle@oel7clus1 /u02/app/oracle/product/19.3.0/dbhome_1 : unzip -q /install/LINUX.X64_193000_db_home.zip
oracle@oel7clus1 /u02/app/oracle/product/19.3.0/dbhome_1 : ls
addnode bin css data deinstall dmu env.ora install javavm jlib md nls OPatch ord oui plsql R relnotes root.sh.old.1 sdk sqlj srvm usm xdk
apex clone ctx dbjava demo drdaas has instantclient jdbc ldap mgw odbc opmn ords owm precomp racg root.sh runInstaller slax sqlpatch suptools utl
assistants crs cv dbs diagnostics dv hs inventory jdk lib network olap oracore oss perl QOpatch rdbms root.sh.old schagent.conf sqldeveloper sqlplus ucp wwg
oracle@oel7clus1 /u02/app/oracle/product/19.3.0/dbhome_1 :
The next step is to start the Oracle 19c Database Installer.
oracle@oel7clus1 /u02/app/oracle/product/19.3.0/dbhome_1 : ./runInstaller
Launching Oracle Database Setup Wizard...
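If you prefer a non-interactive installation (for example on a server without an X display), the same software-only install can be scripted with a response file. A minimal sketch, using the db_install.rsp template shipped inside the extracted home - note that the group names and inventory location below are assumptions for this environment, so adjust them to match yours:

```shell
# From inside the extracted Oracle home, copy the response file template
# shipped with the 19c install image
cp install/response/db_install.rsp /tmp/db_install.rsp

# Edit /tmp/db_install.rsp and set at least:
#   oracle.install.option=INSTALL_DB_SWONLY
#   ORACLE_BASE=/u02/app/oracle
#   oracle.install.db.InstallEdition=SE2
#   UNIX_GROUP_NAME=oinstall            (assumed group name)
#   oracle.install.db.OSDBA_GROUP=dba   (assumed group name)

# Run the installer silently; -waitforcompletion keeps it in the foreground
./runInstaller -silent -waitforcompletion -responseFile /tmp/db_install.rsp
```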
Step 7: Creating the Database using DBCA
The next step is to create the Single Instance database on the shared storage location, /u02.
You can use the graphical interface of the Database Configuration Assistant (DBCA) to do this, but in this example I just use the silent option from the command line. Since 12c, Oracle has made some really good improvements to the DBCA command line, and creating new databases with it is now very easy.
In the command below I create a database, make sure the datafiles use Oracle Managed Files (OMF), enable archivelog mode (highly recommended!), and place the recovery area on the shared storage /u02 location as well. I am doing this so that the whole database installation and all datafiles are on the shared storage. In this example I am also choosing not to use Multitenant.
dbca -createDatabase \
-silent \
-automaticMemoryManagement false \
-characterSet AL32UTF8 \
-createAsContainerDatabase false \
-datafileDestination /u02/oracle/oradata \
-enableArchive true \
-gdbName DEV \
-nationalCharacterSet AL16UTF16 \
-recoveryAreaDestination /u02/oracle/fast_recovery_area \
-redoLogFileSize 100 \
-storageType fs \
-sysPassword Kiwi123 \
-systemPassword Kiwi123 \
-templateName General_Purpose.dbc \
-totalMemory 850 \
-useOMF true
When I execute the above, I get the following output:
oracle@oel7clus1 /u02/app/oracle/product/19.3.0/dbhome_1 : . oraenv
ORACLE_SID = [oracle] ? a
ORACLE_HOME = [/home/oracle] ? /u02/app/oracle/product/19.3.0/dbhome_1
The Oracle base has been changed from /u01/app/oracle to /u02/app/oracle
oracle@oel7clus1 /u02/app/oracle/product/19.3.0/dbhome_1 : dbca -createDatabase \
oracle@oel7clus1 --> -silent \
oracle@oel7clus1 --> -automaticMemoryManagement false \
oracle@oel7clus1 --> -characterSet AL32UTF8 \
oracle@oel7clus1 --> -createAsContainerDatabase false \
oracle@oel7clus1 --> -datafileDestination /u02/oracle/oradata \
oracle@oel7clus1 --> -enableArchive true \
oracle@oel7clus1 --> -gdbName DEV \
oracle@oel7clus1 --> -nationalCharacterSet AL16UTF16 \
oracle@oel7clus1 --> -recoveryAreaDestination /u02/oracle/fast_recovery_area \
oracle@oel7clus1 --> -redoLogFileSize 100 \
oracle@oel7clus1 --> -storageType fs \
oracle@oel7clus1 --> -sysPassword Kiwi123 \
oracle@oel7clus1 --> -systemPassword Kiwi123 \
oracle@oel7clus1 --> -templateName General_Purpose.dbc \
oracle@oel7clus1 --> -totalMemory 850 \
oracle@oel7clus1 --> -useOMF true
[WARNING] [DBT-06208] The 'SYS' password entered does not conform to the Oracle recommended standards.
CAUSE:
a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
b.The password entered is a keyword that Oracle does not recommend to be used as password
ACTION: Specify a strong password. If required refer Oracle documentation for guidelines.
[WARNING] [DBT-06208] The 'SYSTEM' password entered does not conform to the Oracle recommended standards.
CAUSE:
a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
b.The password entered is a keyword that Oracle does not recommend to be used as password
ACTION: Specify a strong password. If required refer Oracle documentation for guidelines.
Prepare for db operation
10% complete
Registering database with Oracle Restart
14% complete
Copying database files
43% complete
Creating and starting Oracle instance
45% complete
49% complete
53% complete
56% complete
62% complete
Completing Database Creation
68% complete
70% complete
71% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
/u02/app/oracle/cfgtoollogs/dbca/DEV.
Database Information:
Global Database Name:DEV
System Identifier(SID):DEV
Look at the log file "/u02/app/oracle/cfgtoollogs/dbca/DEV/DEV.log" for further details.
oracle@oel7clus1 /u02/app/oracle/product/19.3.0/dbhome_1 :
The above process can take a few minutes depending on your hardware, but once it is done you have an Oracle Database 19c SE2 running from the shared storage on node one of the cluster! You can check this with the srvctl status command as shown below.
oracle@oel7clus1 /u02/app/oracle/product/19.3.0/dbhome_1 : srvctl status database -d DEV -v
Instance DEV is running on node oel7clus1. Instance status: Open.
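Because DBCA registered the database with Grid Infrastructure (note the “Registering database with Oracle Restart” line in the output above), you can also review how the database resource is configured with srvctl:

```shell
# Show the Clusterware/Oracle Restart configuration for the DEV database
srvctl config database -d DEV
```

Among other things, this lists the Oracle home for the resource - which should point at the shared /u02 location - along with the start options for the instance.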
It does not stop there; there is a lot more to do, but you are almost there.
Next, we need to make a few changes to allow for the Active/Passive switching of our services.