2-Node RAC (12.2)

Overview

The following instructions cover the creation of a 2-node RAC for learning and testing in a VirtualBox environment. This has been tested with RHEL 7, CentOS 7, and Oracle Linux 7. dnsmasq is used to emulate a DNS server. The specs below are the minimum values I found to function OK¹.
¹ In most cases you can reduce the values after the install if this is a pure test environment.

Oracle checklists: Hardware | OS | Server Configuration. If you prefer, I wrote this script to automate many of the manual changes.

Environment Specs

Memory

OS RAM                   Database Instance
Oracle 12.2 = 9216 MB    SGA = 1024 MB
Oracle 12.1 = 8192 MB    PGA = 256 MB

Storage

VBox        Size (GB)   Linux   ASMFD    ASM Group
ASM_GRID    52          sdb     disk01   GRID
ASM_DATA    16          sdc     disk02   DATA
ASM_FRA     16          sdd     disk03   FRA
  • GRID
    • Grid Infrastructure Management Repository (GIMR) AKA MGMT.
    • OCR (Oracle Cluster Registry) & Voting Files.
  • DATA: Control, data, redo, parameter, password and temp files.
  • FRA (Fast Recovery Area): Archived logs, control and redo files.

Alternative storage models are covered in the appendix.

Network (/etc/hosts)

#public:
192.168.56.71 rac01.localdomain	rac01
192.168.56.72 rac02.localdomain	rac02
#192.168.56.73 rac03.localdomain rac03

#private:
192.168.10.1 rac01-priv.localdomain	rac01-priv
192.168.10.2 rac02-priv.localdomain	rac02-priv
#192.168.10.3 rac03-priv.localdomain	rac03-priv

#virtual:
192.168.56.81 rac01-vip.localdomain	rac01-vip
192.168.56.82 rac02-vip.localdomain	rac02-vip
#192.168.56.83 rac03-vip.localdomain	rac03-vip

#Scan:
192.168.56.91 rac-scan.localdomain	rac-scan
192.168.56.92 rac-scan.localdomain	rac-scan
192.168.56.93 rac-scan.localdomain	rac-scan

Procedure

In all of the Oracle installation tools, if you are using dnsmasq you can ignore any resolv.conf, SCAN, and DNS warnings.

VirtualBox: rac01

  1. Create 2 VMs with the above specs. Make sure to create them with the network interfaces required for RAC.
  2. Set /etc/hosts to include all RAC network points for this 2-node RAC environment.
  3. Start the VM for rac01, set the hostname to rac01, and restart.
  4. Create OS users and directories.
  5. Make OS changes for the GI.
  6. Configure DNS.
  7. Configure dnsmasq.
  8. Configure NTP.
  9. Restart OS and test networking.
    • shutdown -r now
    • Test via nslookup and ping.
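
For example, a minimal round of checks on rac01 (names per the /etc/hosts above; rac02 is not running yet, so only rac01 addresses will answer):

 hostname                        # should return rac01
 nslookup rac01.localdomain      # resolved by the local dnsmasq from /etc/hosts
 nslookup rac-scan.localdomain   # should return all three SCAN addresses
 ping -c 2 rac01
 ping -c 2 rac01-priv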

Much of the Linux OS prep can be done via a script of this type.

VirtualBox: rac02

  1. Clone rac01 as rac02.
    • [x] Reinitialize the MAC address of all network cards.
  2. Start the VM for rac02 and set the hostname to rac02.
  3. Using the Linux GUI: Applications -> System Tools -> Settings -> Network
    Set the public IP (192.168.56.72) and the private IP (192.168.10.2).
  4. Configure DNS.
  5. Configure dnsmasq.
  6. Delete /u01/app/12.2.0.1/grid; it will be recreated by the GI install: rm -rf /u01/app/12.2.0.1/grid
  7. Restart OS and test networking.
    • shutdown -r now
    • Test via nslookup, ping and ntp (ntpq -p).
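
For example, a minimal round of checks from rac02 (names per the /etc/hosts above; ntpq assumes ntpd is in use rather than chronyd):

 nslookup rac02.localdomain      # resolved by the local dnsmasq
 ping -c 2 rac01                 # public network to node 1
 ping -c 2 rac01-priv            # private interconnect to node 1
 ntpq -p                         # should list at least one reachable time source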

Storage

  1. Shutdown both nodes.
  2. In VirtualBox, create the shared disk(s) on rac01 and attach them to rac02.
  3. Start both nodes and create the disk partitions at the OS level (fdisk) on the first node (rac01).
  4. Install and configure ASMFD on the first node (rac01).

The fdisk actions will be visible on the other nodes if the disks are shared. You can run lsblk to confirm.
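
A sketch of the shared-disk steps, using the GRID disk from the specs above (the disk file name and the "SATA" controller name are assumptions; adjust to your VM configuration):

 # On the host: create a fixed-size disk, mark it shareable and attach it to both VMs.
 VBoxManage createmedium disk --filename asm_grid.vdi --size 53248 --variant Fixed
 VBoxManage modifymedium disk asm_grid.vdi --type shareable
 VBoxManage storageattach rac01 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm_grid.vdi
 VBoxManage storageattach rac02 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm_grid.vdi

 # On rac01 as root: create one primary partition spanning the disk (interactive: n, p, 1, Enter, Enter, w).
 fdisk /dev/sdb

 # On both nodes: confirm the disk and partition are visible.
 lsblk /dev/sdb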

GI Installation

  1. From rac01 perform the GI Installation.
  2. Check the cluster status.
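
For example, as the grid user (Grid home path per this guide):

 export ORACLE_HOME=/u01/app/12.2.0.1/grid
 $ORACLE_HOME/bin/crsctl check cluster -all   # CRS, CSS and EVM online on both nodes
 $ORACLE_HOME/bin/crsctl stat res -t          # resource status across the cluster
 $ORACLE_HOME/bin/olsnodes -n -s -t           # node list with status and pin state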

Database

  1. On rac01, run the Database Installation.
  2. Create BASH Profile for the oracle user.
  3. Create Database.

Check Status
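
A quick post-install health check (database name oradb as used in the dbca examples later in this guide; run as the oracle user with the environment set):

 srvctl status database -d oradb     # both instances should be running
 srvctl status scan_listener
 sqlplus / as sysdba
 SQL> SELECT inst_id, instance_name, status FROM gv$instance;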


APPENDIX

Configure DNS

Set the nameserver IP to the current system's public address (e.g., 192.168.56.71 on rac01, 192.168.56.72 on rac02).

  1. chattr -i /etc/resolv.conf
  2. vi /etc/resolv.conf
    Example entry line: nameserver 192.168.56.71
  3. chattr +i /etc/resolv.conf

Configure dnsmasq

Set the listen-address to the current system's public IP (e.g., 192.168.56.71 on rac01, 192.168.56.72 on rac02).

  1. cp /etc/dnsmasq.conf /etc/dnsmasq.conf.orig
  2. vi /etc/dnsmasq.conf and add:
expand-hosts
local=/localdomain/
listen-address=127.0.0.1
listen-address=192.168.56.71
bind-interfaces

Make dnsmasq Active

  • systemctl enable dnsmasq.service
  • service dnsmasq restart
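
To confirm dnsmasq is answering, the SCAN name should resolve to all three addresses defined in /etc/hosts:

 systemctl status dnsmasq
 nslookup rac-scan.localdomain   # expect 192.168.56.91, .92 and .93 in the reply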

Cluster Verification Script

/u01/orasw/grid/runcluvfy.sh stage -pre crsinst -n rac01,rac02 -fixup -verbose

Restart Networking:

  • /etc/init.d/network stop
  • /etc/init.d/network start

Check If All Services Started OK

  • systemctl
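
To narrow that down, list only the units that failed to start:

 systemctl list-units --state=failed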

Add a Node (rac03)

Configure Base System

  1. Clone/create a VM with the following in place:
    1. Linux OS Prep changes made.
    2. Purge directories (they will be recreated as needed).
         rm -rf oraInventory
         rm -r -f /u01/app/12.2.0.1/grid
      
  2. Update /etc/hosts
    1. Set entries for: public, private and virtual (vip).
    2. Ensure other nodes have updated /etc/hosts.
  3. Start VM for rac03 and set the hostname to rac03.
  4. Using the Linux GUI: Applications -> System Tools -> Settings -> Network
    1. Set the public IP (ex: 192.168.56.73).
    2. Set the private IP (ex: 192.168.10.3).
  5. Configure DNS and/or dnsmasq if used.

Configure Storage

  1. Shutdown all nodes.
  2. In VirtualBox, attach the shared disks to rac03.
  3. Start rac01 then rac02.
    Ensure cluster and database instances all working OK.
  4. If using udev, start rac03 and configure udev.
    • Ensure the disks can be seen from rac03 and have the correct privileges.
    • ls -rlt /dev/sd?1
  5. Restart rac03.

Add Node to Cluster

Run the below from rac01 as the grid user.

  1. Ensure all nodes are started and can see each other OK (ping, nslookup, ntpq -p etc.).
  2. Ensure the RAC cluster on nodes 1 and 2 is working and the database is up.
  3. Run addnode.sh
    1. cd /u01/app/12.2.0.1/grid/addnode
      ./addnode.sh "CLUSTER_NEW_NODES={rac03}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac03-vip.localdomain}"
    2. GUI Cluster Add Node app will launch...
  4. Set Cluster Add Node Information.
    1. Edit button.
      1. Public Hostname: rac03.localdomain
      2. Node Role: HUB
      3. Virtual Hostname: rac03-vip.localdomain
      4. Confirm the displayed values look OK for the node you are adding.
    2. SSH Connectivity button.
      Ensure the [x] Reuse private and public keys option is selected.
      Enter the grid user's OS password, then select Setup.
  5. Prerequisite Checks
    Ensure everything looks OK. Correct anything listed that could be an issue.
  6. Summary
    Select Install.
  7. Install Product
    1. A process runs that copies files, updates the Cluster Inventory, and installs GI on the new node...
    2. Run root scripts when prompted.
    3. You can monitor details via: tail -f /u01/app/grid/diag/crs/rac03/crs/trace/alert.log
  8. Finish
    Select Close.

Post GI Steps

  1. Update BASH Profile for grid user.
  2. Run $ORACLE_HOME/OPatch/opatch lsinventory and validate that all nodes are shown and the new node has the same patch version.
  3. Check status of cluster.
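
For example, from any node as the grid user:

 olsnodes -n -s -t           # rac03 should now be listed as Active and Unpinned
 crsctl check cluster -all   # the CRS stack should be online on all three nodes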

Add Instance to Database

Run addnode.sh

  1. Log into rac01 as the oracle user.
  2. cd $ORACLE_HOME/addnode
  3. ./addnode.sh "CLUSTER_NEW_NODES={rac03}"
  4. Node Selection: [x] rac03
    Set up SSH connectivity for the oracle user and rac03.
  5. Prerequisite Checks
    Resolve any issues identified then continue.
  6. Summary
    If all looks good, press the Install button.
  7. Install Product
    The process runs... You will be prompted to run a root script on rac03.

Set .bashrc
Set the BASH profile for the oracle user on the new node (rac03).

Run dbca

  1. From rac01 as the oracle user.
  2. cd $ORACLE_HOME
  3. dbca &
  4. Database Operation: (x) Oracle RAC database Instance Management
  5. Instance Operation: (x) Add an instance
  6. Select Database:
    (x) oradb (Local instance: oradb1...)
    Username: sys
    Password: ********
  7. Instance Details:
    Instance name: oradb3
    Node name: rac03
  8. Summary: Finish
  9. Progress Page
    Initially blank...then shows progress.
    You can tail the Alert log:
    Ex: /u01/app/oracle/diag/rdbms/oradb/oradb3/trace/alert_oradb3.log
  10. Finish
    Should show: Instance "oradb3" added successfully on "rac03".
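
A quick check that the new instance is registered and running (database name oradb per the dbca screens above):

 srvctl status database -d oradb
 # Expected output includes: Instance oradb3 is running on node rac03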

Delete a Node

  1. Run dbca to Delete Instance.
  2. Delete Cluster Entries

Ensure Node to Delete is Unpinned

Run from rac01 as grid.

 olsnodes -s -t
 ...
 rac03 Active Unpinned

If the node is pinned, run the crsctl unpin css command (see the example below).
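
A minimal example, run as root from rac01 using the Grid home path from this guide:

 /u01/app/12.2.0.1/grid/bin/crsctl unpin css -n rac03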

Deinstall the Oracle Clusterware Home

From rac03 grid user.

 cd /tmp 
 $ORACLE_HOME/deinstall/deinstall -local
  • You may be prompted to run scripts as root user during the process.
  • Caution: If you do not specify the -local flag, then the command removes the Oracle Grid Infrastructure home from every node in the cluster!

Delete Node

From Node 1 (rac01) as root.

 export ORACLE_HOME=/u01/app/12.2.0.1/grid
 export GRID_HOME=/u01/app/12.2.0.1/grid
 cd $GRID_HOME/bin
 ./crsctl delete node -n rac03

Verify the node was deleted OK.
From Node 1 (rac01) as the grid user.

 cluvfy stage -post nodedel -n rac03 -verbose

Ensure VIP Cleared

From Node 1 (rac01) as root.

 export ORACLE_HOME=/u01/app/12.2.0.1/grid
 export GRID_HOME=/u01/app/12.2.0.1/grid
 cd $GRID_HOME/bin
 ./srvctl config vip -node rac03

If the VIP still exists, then delete it:

 ./srvctl stop vip -node rac03
 ./srvctl remove vip -vip rac03 -f

Post Removal Tasks

  • Verify Cluster Nodes using: olsnodes.
  • Remove rac03 entries from /etc/hosts.
  • Update DNS (bounce dnsmasq if used).

Notes

I noticed that after removing a node, the database instances came up slowly (but OK) the very first time I started them afterwards; there were no errors in the alert log. I think ASM might have been busy...TBD.

Oracle's docs on deleting a node. Also see: Doc ID 1377349.1


Installation Time Benchmarks

• Initial GI:              20 min.
• 2nd root script (node1): 10 min.
• 2nd root script (node2):  5 min.
• Remainder (Upd Inv, OraNet, ASM, GIMR...): 35 min.

Total time to install GI with 2 nodes: about 70 minutes (1 hour 10 minutes).

Alternative Storage Models (minimum values)

This model is useful if you have inferior hardware:

VBox        Size (GB)   Linux   ASMFD    ASM Group
ASM_GRID    12          sdb     disk01   GRID
ASM_MGMT    40          sdc     disk02   MGMT
ASM_DATA    16          sdd     disk03   DATA
ASM_FRA     16          sde     disk04   FRA

• MGMT: Grid Infrastructure Management Repository (GIMR).
• GRID: OCR (Oracle Cluster Registry) & Voting Files.

This can be useful to test Oracle patches and the like.

VBox        Size (GB)   Linux   ASMFD    ASM Group
ASM_GRID    75          sdb     disk01   GRID
  • Use the GRID ASM disk group for everything (in place of DATA and FRA etc.).
  • I think the GRID group may be able to be created with as little as 55 GB, but I need to confirm this (asmcmd du shows 48 GB used after DB creation).