2-Node RAC

Overview

The following instructions cover the creation of a 2-node RAC for learning and testing in a VirtualBox environment using Oracle Linux 7. dnsmasq is used to emulate a DNS server. The specs below are the minimum values I found to work reliably.

Oracle checklists: Hardware | OS | Server Configuration.

Environment Specs

Memory

OS RAM                     Database Instance
Oracle 12.2 = 9216 MB      SGA = 1512 MB
Oracle 12.1 = 8192 MB      PGA = 256 MB

In many cases you can reduce the values after install if this is a pure test environment.
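
For example, on a pure test system the memory can be dialed down once the database exists. A rough sketch run as the oracle user on one node; the database name (oradb) and the target values here are illustrative only:

# Lower SGA/PGA in the spfile (values are illustrative only)
sqlplus / as sysdba <<EOF
ALTER SYSTEM SET sga_target=1200M SCOPE=SPFILE SID='*';
ALTER SYSTEM SET pga_aggregate_target=200M SCOPE=SPFILE SID='*';
EOF

# Bounce the RAC database so the spfile changes take effect
srvctl stop database -d oradb
srvctl start database -d oradb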

Storage

VBox        Size (GB)   Linux   ASMFD    ASM Group
ASM_GRID    52          sdb     disk01   GRID
ASM_DATA    16          sdc     disk02   DATA
ASM_FRA     16          sdd     disk03   FRA
  • GRID
    • Grid Infrastructure Management Repository (GIMR) AKA MGMT.
    • OCR (Oracle Cluster Registry) & Voting Files.
  • DATA: Control, data, redo, parameter, password and temp files.
  • FRA (Fast Recovery Area): Archived logs, control and redo files.

Alternative storage models.

Network (/etc/hosts)

# public
192.168.56.71 lnx01.local lnx01
192.168.56.72 lnx02.local lnx02
#192.168.56.73 lnx03.local lnx03

# private
192.168.10.1 lnx01-priv.local lnx01-priv
192.168.10.2 lnx02-priv.local lnx02-priv
#192.168.10.3 lnx03-priv.local lnx03-priv

# virtual
192.168.56.81 lnx01-vip.local lnx01-vip
192.168.56.82 lnx02-vip.local lnx02-vip
#192.168.56.83 lnx03-vip.local lnx03-vip

# SCAN
192.168.56.91 rac-scan.local rac-scan
192.168.56.92 rac-scan.local rac-scan
192.168.56.93 rac-scan.local rac-scan

Procedure

In all of the Oracle installation tools, if you are using dnsmasq, you can ignore any resolv.conf, SCAN, and DNS warnings.

VirtualBox: lnx01

  1. Create the Linux VM, making sure to create it with the network interfaces required for RAC.
  2. Set /etc/hosts to include all RAC network points for this 2-node RAC environment.
  3. Set the hostname to lnx01.
  4. Prep node 1 for Oracle.
  5. Configure DNS and dnsmasq.
  6. Configure NTP.
  7. Restart OS and test networking.
    • shutdown -r now
    • Test DNS/dnsmasq via nslookup and ping.
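
For example, a quick post-reboot check on lnx01 (hostnames assume the /etc/hosts entries above):

 ping -c 2 lnx01            # public interface
 ping -c 2 lnx01-priv       # private interconnect
 nslookup rac-scan.local    # dnsmasq should return the three SCAN addresses
 ntpq -p                    # NTP peers are reachable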

VirtualBox: lnx02

  1. Clone lnx01 as lnx02.
    • [x] Reinitialize the MAC address of all network cards.
  2. Start VM for lnx02 and set the hostname to lnx02.
  3. Using the Linux GUI: Applications -> System Tools -> Settings -> Network
    Set the public IP (192.168.56.72) and private (192.168.10.2) IP.
  4. Configure DNS and dnsmasq.
  5. Delete the grid app directories. They will be recreated during the GI install.
    Examples:
    rm -rf /u01/app/12.1.0.2
    rm -r -f /u01/app/grid
  6. Restart OS and test networking.
    • shutdown -r now
    • Test via nslookup, ping, and NTP (ntpq -p).

Storage

  1. Shutdown both nodes.
  2. In VirtualBox create the shared disk(s) on lnx01 and attach them to lnx02.
  3. Start both nodes and create the disk partitions at the OS level (fdisk) on the first node (lnx01).
  4. Configure the disk driver:
    • For 12.2 use ASMFD on the first node (lnx01).
    • For 12.1 use UDEV to configure the disks on all nodes.

If the disks are shared, the fdisk partitions will be visible on the other nodes; run lsblk to confirm.
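
A rough partitioning sketch for lnx01, assuming the sdb/sdc/sdd layout from the storage table above (review before running; fdisk is destructive):

 # Create one primary partition spanning each shared disk (lnx01 only)
 for d in sdb sdc sdd; do
   echo -e "n\np\n1\n\n\nw" | fdisk /dev/$d
 done

 # Confirm the new partitions are visible (run on both nodes)
 lsblk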

GI Installation

  1. From lnx01 perform the GI Installation: 12.2 | 12.1
  2. Check the cluster status.
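
For example, as the grid user on either node:

 crsctl check cluster -all    # CRS, CSS and EVM status per node
 crsctl stat res -t           # all cluster resources in tabular form
 olsnodes -n -s               # node list with numbers and status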

Database

  1. On lnx01 run the Database Installation: 12.2 | 12.1
  2. Create a BASH Profile for the oracle user (a sample profile follows this list).
  3. Create Database: 12.2 | 12.1
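
A sample ~/.bash_profile for the oracle user (step 2); the database home path and SID are assumptions, so adjust to your install (use oradb2 on lnx02):

 export ORACLE_BASE=/u01/app/oracle
 export ORACLE_HOME=$ORACLE_BASE/product/12.2.0.1/dbhome_1   # assumed DB home
 export ORACLE_SID=oradb1                                    # oradb2 on lnx02
 export PATH=$ORACLE_HOME/bin:$PATH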

Check Status
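
A few status checks I find useful; the database name (oradb) matches the dbca examples later on this page:

 srvctl status database -d oradb    # instance status on each node
 srvctl status nodeapps             # VIPs, network, ONS
 srvctl status scan_listener        # SCAN listeners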


APPENDIX

Configure DNS

Use the IP address of the current system.

  1. chattr -i /etc/resolv.conf
  2. vi /etc/resolv.conf
    Example entry line: nameserver 192.168.56.71
  3. chattr +i /etc/resolv.conf

Configure dnsmasq

Use the IP address of the current system.

  1. cp /etc/dnsmasq.conf /etc/dnsmasq.conf.orig
  2. vi /etc/dnsmasq.conf
expand-hosts
local=/localdomain/
listen-address=127.0.0.1
listen-address=192.168.56.71
bind-interfaces

Make dnsmasq Active

  • systemctl enable dnsmasq.service
  • service dnsmasq restart
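
Quick checks that dnsmasq is answering (hostnames assume the /etc/hosts entries above):

 systemctl status dnsmasq.service
 nslookup rac-scan.local 127.0.0.1    # should return all three SCAN addresses
 nslookup lnx01-vip.local 127.0.0.1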

Cluster Verification Script

/u01/orasw/grid/runcluvfy.sh stage -pre crsinst -n lnx01,lnx02 -fixup -verbose

Restart Networking:

  • /etc/init.d/network stop
  • /etc/init.d/network start

Check If All Services Started OK

  • systemctl

Add a Node (lnx03)

Configure Base System

  1. Clone/create a VM with the following specs:
    1. Linux OS Prep changes made.
    2. Purge directories (they will be recreated as needed).
         rm -rf oraInventory
         rm -r -f /u01/app/12.2.0.1/grid
      
  2. Update /etc/hosts
    1. Set entries for: public, private and virtual (vip).
    2. Ensure other nodes have updated /etc/hosts.
  3. Start VM for lnx03 and set the hostname to lnx03.
  4. Using the Linux GUI: Applications -> System Tools -> Settings -> Network
    1. Set the public IP (ex: 192.168.56.73).
    2. Set the private IP (ex: 192.168.10.3).
  5. Configure DNS and/or dnsmasq if used.

Configure Storage

  1. Shutdown all nodes.
  2. In VirtualBox attach the shared disks to lnx03.
  3. Start lnx01 then lnx02.
    Ensure the cluster and database instances are all working OK.
  4. If using UDEV, start lnx03 and configure UDEV (a sample rule follows this list).
    • Ensure the disks are visible from lnx03 with the correct privileges.
    • ls -rlt /dev/sd?1
  5. Restart lnx03.
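
A minimal UDEV rule sketch for step 4; the file name is illustrative, the grid:asmadmin ownership assumes a typical GI install, and production setups usually match on scsi_id rather than kernel device names:

 # /etc/udev/rules.d/99-oracle-asmdevices.rules (illustrative file name)
 KERNEL=="sd[b-d]1", SUBSYSTEM=="block", OWNER="grid", GROUP="asmadmin", MODE="0660"

 # Reload the rules and re-check ownership
 udevadm control --reload-rules
 udevadm trigger
 ls -l /dev/sd?1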

Add Node to Cluster

Run the below from lnx01 as the grid user.

  1. Ensure all nodes are started and can see each other OK (ping, nslookup, ntpq -p etc.).
  2. Ensure the RAC on nodes 1 and 2 is working and the database is up.
  3. Run addnode.sh
    1. cd /u01/app/12.2.0.1/grid/addnode
      ./addnode.sh "CLUSTER_NEW_NODES={lnx03}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={lnx03-vip.localdomain}"
    2. GUI Cluster Add Node app will launch...
  4. Set Cluster Add Node Information.
    1. Edit button.
      1. Public Hostname: lnx03.localdomain
      2. Node Role: HUB
      3. Virtual Hostname: lnx03-vip.localdomain
      4. Confirm the displayed values look OK for the node you are adding.
    2. SSH Connectivity button.
      The [x] Reuse private and public keys option should already be selected.
      Enter grid user OS Password then select Setup.
  5. Prerequisite Checks
    Ensure everything looks OK. Correct anything listed that could be an issue.
  6. Summary
    Select Install.
  7. Install Product
    1. A process runs that copies files, updates the Cluster Inventory, and installs GI on the new node...
    2. Run root scripts when prompted.
    3. You can monitor details via: tail -f /u01/app/grid/diag/crs/lnx03/crs/trace/alert.log
  8. Finish
    Select Close.

Post GI Steps

  1. Update BASH Profile for grid user.
  2. Run $ORACLE_HOME/OPatch/opatch lsinventory and validate that all nodes are shown and the new node has the same patch version.
  3. Check status of cluster.
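
For example, from lnx01 as the grid user:

 olsnodes -n -s -t            # lnx03 should show as Active and Unpinned
 crsctl check cluster -all    # CRS, CSS and EVM online on all three nodes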

Add Instance to Database

Run addnode.sh

  1. Log into lnx01 as the oracle user.
  2. cd $ORACLE_HOME/addnode
  3. ./addnode.sh "CLUSTER_NEW_NODES={lnx03}"
  4. Node Selection: [x] lnx03
    Setup SSH connectivity for oracle user and lnx03.
  5. Prerequisites Checks
    Resolve any issues identified then continue.
  6. Summary
    If all looks good press Install button.
  7. Install Product
    The process runs... You will be prompted to run the root script on lnx03.

Set .bashrc
Set the BASH profile for the oracle user on lnx03.

Run dbca

  1. From lnx01 as the oracle user.
  2. cd $ORACLE_HOME
  3. dbca &
  4. Database Operation: (x) Oracle RAC database Instance Management
  5. Instance Operation: (x) Add an instance
  6. Select Database:
    (x) oradb (Local instance: oradb1...)
    Username: sys
    Password: ********
  7. Instance Details:
    Instance name: oradb3
    Node name: lnx03
  8. Summary: Finish
  9. Progress Page
    Initially blank...then shows progress.
    You can tail the Alert log:
    Ex: /u01/app/oracle/diag/rdbms/oradb/oradb3/trace/alert_oradb3.log
  10. Finish
    Should show: Instance "oradb3" added successfully on "lnx03".

Delete a Node

  1. Run dbca to Delete Instance.
  2. Delete Cluster Entries

Ensure Node to Delete is Unpinned

Run from lnx01 as the grid user.

 olsnodes -s -t
 ...
 lnx03   Active   Unpinned

If the node is pinned, then run the crsctl unpin css command.
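
For example, as root from lnx01 (GRID_HOME path as used in the Delete Node step below):

 /u01/app/12.2.0.1/grid/bin/crsctl unpin css -n lnx03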

Deinstall the Oracle Clusterware Home

From lnx03 as the grid user.

 cd /tmp 
 $ORACLE_HOME/deinstall/deinstall -local
  • You may be prompted to run scripts as root user during the process.
  • Caution: If you do not specify the -local flag, then the command removes the Oracle Grid Infrastructure home from every node in the cluster!

Delete Node

From Node 1 (lnx01) as root.

 export ORACLE_HOME=/u01/app/12.2.0.1/grid
 export GRID_HOME=/u01/app/12.2.0.1/grid
 cd $GRID_HOME/bin
 ./crsctl delete node -n lnx03

Verify Deleted OK

From Node 1 (lnx01) as the grid user.

 cluvfy stage -post nodedel -n lnx03 -verbose

Ensure VIP Cleared

From Node 1 (lnx01) as root.

 export ORACLE_HOME=/u01/app/12.2.0.1/grid
 export GRID_HOME=/u01/app/12.2.0.1/grid
 cd $GRID_HOME/bin
 ./srvctl config vip -node lnx03

If the VIP still exists, then delete it:

 ./srvctl stop vip -node lnx03
 ./srvctl remove vip -vip lnx03 -f

Post Removal Tasks

  • Verify Cluster Nodes using: olsnodes.
  • Remove lnx03 entries from /etc/hosts.
  • Update DNS (bounce dnsmasq if used).
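
A quick post-removal sanity check from lnx01 (a sketch, assuming dnsmasq is in use):

 olsnodes -n -s                     # lnx03 should no longer be listed
 systemctl restart dnsmasq.service  # pick up the edited /etc/hosts
 nslookup lnx03                     # should no longer resolve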

Notes

I noticed after removing a node that the database instances came up slowly, but OK, the very first time I started them afterwards (there were no errors in the alert log). I think ASM might have been busy... TBD.

Oracle's docs on deleting a node. Also see: Doc ID 1377349.1


Installation Time Benchmarks

• Initial GI:              20 min.
• 2nd root script (node1): 10 min.
• 2nd root script (node2):  5 min.
• Remainder (Upd Inv, OraNet, ASM, GIMR...): 35 min.

Total time to install GI with 2 nodes: about 1 hour 10 minutes.

Alternative Storage Models (minimum values)

This model is useful if you have limited hardware:

VBox        Size (GB)   Linux   ASMFD    ASM Group
ASM_GRID    12          sdb     disk01   GRID
ASM_MGMT    40          sdc     disk02   MGMT
ASM_DATA    16          sdd     disk03   DATA
ASM_FRA     16          sde     disk04   FRA

• MGMT: Grid Infrastructure Management Repository (GIMR).
• GRID: OCR (Oracle Cluster Registry) & Voting Files.

This can be useful to test Oracle patches and the like.

VBox        Size (GB)   Linux   ASMFD    ASM Group
ASM_GRID    55          sdb     disk01   GRID
  • Use the GRID ASM disk group for everything (in place of DATA and FRA etc.).
  • Reminder: 55 GB is the minimum value.

Hybrid

VBox        Size (GB)   Linux   ASMFD    ASM Group
ASM_GRID    55          sdb     disk01   GRID
ASM_DATA    35          sdc     disk02   DATA

In this model use DATA for both DATA and FRA type data.

Post initLnxForOra.sh Steps

Assuming initLnxForOra.sh has been implemented.

Next steps (depending on the role of the system):
RAC
  - Clone for next node
    -- Change hostname (lnx02).
    -- Change Public and Private IPs.
    -- Edit resolv.conf and dnsmasq.
  - Configure disks for RAC

Misc
- Install GI
- Create DB