
2-Node RAC

Overview

The following instructions cover the creation of a 2-node RAC for learning and testing in a VirtualBox environment using Oracle Linux 7/8.

  • dnsmasq is used to emulate a DNS server.
  • The values specified below are for a testing environment.

Environment Specs

Memory

 Oracle Version  OS RAM   DB Instance (SGA/PGA)
 21.n/19.n       9728 MB  1536/512 MB
 12.1            8192 MB  1512/256 MB

Storage

 VBox      Size (GB)  Linux  ASMFD   ASM Group
 ASM_GRID  52         sdb    disk01  GRID
 ASM_DATA  16         sdc    disk02  DATA
 ASM_FRA   16         sdd    disk03  FRA

See Alternative Storage Models in the appendix for other options.

ASM Groups

  • GRID
    • Grid Infrastructure Management Repository (GIMR) AKA MGMT.
    • OCR (Oracle Cluster Registry) & Voting Files.
  • DATA: Control, data, redo, parameter, password and temp files.
  • FRA (Fast Recovery Area): Archived logs, control and redo files.

Network (/etc/hosts)

# public
192.168.56.71 lnx01.local lnx01
192.168.56.72 lnx02.local lnx02
#192.168.56.73 lnx03.local lnx03

# private
192.168.10.1 lnx01-priv.local lnx01-priv
192.168.10.2 lnx02-priv.local lnx02-priv
#192.168.10.3 lnx03-priv.local lnx03-priv

# virtual
192.168.56.81 lnx01-vip.local lnx01-vip
192.168.56.82 lnx02-vip.local lnx02-vip
#192.168.56.83 lnx03-vip.local lnx03-vip

# SCAN
192.168.56.91 scan-alfa.local scan-alfa
192.168.56.92 scan-alfa.local scan-alfa
192.168.56.93 scan-alfa.local scan-alfa

# Other
192.168.56.75 lnxsb.local lnxsb

VirtualBox Interface Template

 Int MAC          Name    Net Cfg
 1   080027428D98 enp0s3  Automatic (DHCP)
 2   08002751B0C9 enp0s8  Automatic (DHCP)
 3   080027B5C154 enp0s9  Manual ->  Select [Add]: 192.168.56.71/24
 4   080027EB6BDB enp0s10 Manual ->  Select [Add]: 192.168.10.1/24

Your MAC addresses may differ.
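
To confirm which Linux interface maps to which VirtualBox adapter, compare the MAC addresses (run the first command on the guest, the second on the host):

 ip -br link                              # guest: interface names and MACs
 VBoxManage showvminfo lnx01 | grep NIC   # host: adapter numbers and MACs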

Procedure

In all the Oracle installation tools, if you are using dnsmasq you can ignore any resolv.conf, SCAN, and DNS warnings.

VirtualBox: lnx01

  1. Create the Linux VM, making sure to include the network interfaces required for RAC.
  2. Set /etc/hosts to include all RAC network points for this 2-node RAC environment.
  3. Set the hostname to lnx01.
  4. Prep node 1 for Oracle.
  5. Configure DNS and/or dnsmasq.
  6. Configure chronyd/NTP.
  7. Restart OS and test networking.
    • shutdown -r now
    • Test DNS/dnsmasq via nslookup and ping (example below).
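
For example, assuming the dnsmasq setup from the appendix and the /etc/hosts entries above:

 nslookup lnx01
 nslookup scan-alfa     # should return the three SCAN addresses
 ping -c 2 lnx01-priv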

VirtualBox: lnx02

  1. Clone lnx01 as lnx02.
    When cloning, choose to generate new MAC addresses for the RAC interfaces (public and private).
  2. Start VM for lnx02 and set the hostname to lnx02.
  3. Using the Linux GUI: Applications -> System Tools -> Settings -> Network
    Set the public (192.168.56.72) and private (192.168.10.2) IPs.
  4. Configure DNS and/or dnsmasq.
  5. Restart OS and test networking.
    • shutdown -r now
    • Test via nslookup, ping, and NTP (ntpq -p).

Storage

  1. Create VirtualBox Disks:
    • Shutdown both nodes.
    • Create shared disks on lnx01: ASM_GRID, ASM_DATA and ASM_FRA.
  2. Create Linux Disk Partitions:
    One disk partition spanning the entire disk is created for each disk.
    • Start both nodes.
    • Create disk partitions (fdisk) at the OS level on the first node (lnx01).
  3. Set Disk Driver/Bindings:
    • Using UDEV (see the sketch after this list).
    • Alternatively, for 12.2 and later you can use ASMFD.
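
A host-side sketch for creating one of the shared disks (steps 1 and 3), assuming a SATA controller named "SATA"; sizes are in MB (16 GB = 16384) and the port numbers are examples, so repeat per disk:

 VBoxManage createmedium disk --filename ASM_DATA.vdi --size 16384 --format VDI --variant Fixed
 VBoxManage modifymedium disk ASM_DATA.vdi --type shareable
 VBoxManage storageattach lnx01 --storagectl "SATA" --port 2 --device 0 --type hdd --medium ASM_DATA.vdi
 VBoxManage storageattach lnx02 --storagectl "SATA" --port 2 --device 0 --type hdd --medium ASM_DATA.vdi

And a minimal UDEV rule sketch; matching on the device name is acceptable for this test setup (production setups should match on scsi_id instead):

 # /etc/udev/rules.d/99-oracle-asmdevices.rules
 KERNEL=="sd[b-d]1", OWNER="grid", GROUP="asmadmin", MODE="0660"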

GI Installation

  1. From lnx01 perform the GI Installation: 19.3 | 12.1
  2. Check the cluster status (examples below).
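
For example, as the grid user:

 crsctl check cluster -all
 crsctl stat res -t
 olsnodes -n -s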

Database

  1. On lnx01 run the Database Product Installation: 19.3 | 12.1
  2. Create Database: 19c | 12.1
  3. Set ORACLE_SID in each node's .bashrc to match the instance name. Example: ORACLE_SID=oradb1
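
A sketch of the .bashrc entries (the ORACLE_HOME path is an assumption; adjust to your install):

 # ~/.bashrc for the oracle user on lnx01 (use oradb2 on lnx02)
 export ORACLE_SID=oradb1
 export ORACLE_HOME=/u01/app/oracle/product/19.3.0/dbhome_1
 export PATH=$ORACLE_HOME/bin:$PATH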

If creating on a laptop, disabling the Management Options (CVU & Database Express) will improve creation time.

Check Status
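
Some example status checks once the cluster and database are up (assuming the database name oradb used later on this page):

 srvctl status database -d oradb
 srvctl status nodeapps
 crsctl stat res -t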


APPENDIX

Configure DNS

Set the nameserver IP to the current system's address.

  1. chattr -i /etc/resolv.conf
  2. vi /etc/resolv.conf
    Example entry line: nameserver 192.168.56.71
  3. chattr +i /etc/resolv.conf

Configure dnsmasq

Set the listen-address IP to the current system's address.

  1. cp /etc/dnsmasq.conf /etc/dnsmasq.conf.orig
  2. vi /etc/dnsmasq.conf
expand-hosts
local=/localdomain/
listen-address=127.0.0.1
listen-address=192.168.56.71
bind-interfaces

Make dnsmasq Active

  • systemctl enable dnsmasq.service
  • systemctl restart dnsmasq.service
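
To verify dnsmasq is answering locally (expand-hosts serves the /etc/hosts entries):

 systemctl status dnsmasq
 nslookup scan-alfa 127.0.0.1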

Cluster Verification Script

/u01/orasw/grid/runcluvfy.sh stage -pre crsinst -n lnx01,lnx02 -fixup -verbose

Restart Networking:

  • /etc/init.d/network stop
  • /etc/init.d/network start

Check If All Services Started OK

  • systemctl

Add a Node (lnx03)

Configure Base System

  1. Clone/create a VM with the following specs:
    1. Linux OS Prep changes made.
    2. Purge directories (they will be recreated as needed).
         rm -rf oraInventory
         rm -r -f /u01/app/12.2.0.1/grid
      
  2. Update /etc/hosts
    1. Set entries for: public, private and virtual (vip).
    2. Ensure other nodes have updated /etc/hosts.
  3. Start VM for lnx03 and set the hostname to lnx03.
  4. Using the Linux GUI: Applications -> System Tools -> Settings -> Network
    1. Set the public IP (ex: 192.168.56.73).
    2. Set the private IP (ex: 192.168.10.3).
    (Or use nmcli; see the sketch after this list.)
  5. Configure DNS and/or dnsmasq if used.
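
A CLI alternative to the GUI network step, assuming the NetworkManager connection names match the interface names (verify with nmcli con show):

 nmcli con mod enp0s9 ipv4.method manual ipv4.addresses 192.168.56.73/24
 nmcli con mod enp0s10 ipv4.method manual ipv4.addresses 192.168.10.3/24
 nmcli con up enp0s9 && nmcli con up enp0s10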

Configure Storage

  1. Shutdown all nodes.
  2. In VirtualBox, attach the shared disks to lnx03.
  3. Start lnx01 then lnx02.
    Ensure cluster and database instances all working OK.
  4. If using UDEV, start lnx03 and configure UDEV.
    • Ensure the disks are visible with the correct privileges from lnx03.
    • ls -rlt /dev/sd?1
  5. Restart lnx03

Add Node to Cluster

Run the below from lnx01 as the grid user.

  1. Ensure all nodes are started and can see each other OK (ping, nslookup, ntpq -p etc.).
  2. Ensure the RAC on nodes 1 and 2 is working and the database is up.
  3. Run addnode.sh
    1. cd /u01/app/12.2.0.1/grid/addnode
      ./addnode.sh "CLUSTER_NEW_NODES={lnx03}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={lnx03-vip.localdomain}"
    2. GUI Cluster Add Node app will launch...
  4. Set Cluster Add Node Information.
    1. Edit button.
      1. Public Hostname: lnx03.localdomain
      2. Node Role: HUB
      3. Virtual Hostname: lnx03-vip.localdomain
      4. Confirm the displayed values look OK for the node you are adding.
    2. SSH Connectivity button.
      Ensure the [x] Reuse private and public keys option is selected.
      Enter grid user OS Password then select Setup.
  5. Prerequisite Checks
    Ensure everything looks OK. Correct anything listed that could be an issue.
  6. Summary
    Select Install.
  7. Install Product
    1. Process runs that copies files, updates Cluster Inventory and installs GI on new node...
    2. Run root scripts when prompted.
    3. You can monitor details via: tail -f /u01/app/grid/diag/crs/lnx03/crs/trace/alert.log
  8. Finish
    Select Close.

Post GI Steps

  1. Update BASH Profile for grid user.
  2. Run $ORACLE_HOME/OPatch/opatch lsinventory and validate that all nodes are shown and the new node has the same patch version.
  3. Check status of cluster.

Add Instance to Database

Run addnode.sh

  1. Log into lnx01 as the oracle user.
  2. cd $ORACLE_HOME/addnode
  3. ./addnode.sh "CLUSTER_NEW_NODES={lnx03}"
  4. Node Selection: [x] lnx03
    Setup SSH connectivity for oracle user and lnx03.
  5. Prerequisites Checks
    Resolve any issues identified then continue.
  6. Summary
    If all looks good press Install button.
  7. Install Product
    Process runs... You will be prompted to run root script on lnx03.

Set .bashrc
Set the BASH profile for the oracle user on lnx03 (see the sketch below).
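
A sketch; the ORACLE_HOME path is an assumption, so mirror the entries used on lnx01:

 # ~/.bashrc for the oracle user on lnx03
 export ORACLE_SID=oradb3
 export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome_1
 export PATH=$ORACLE_HOME/bin:$PATH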

Run dbca

  1. From lnx01 as the oracle user.
  2. cd $ORACLE_HOME
  3. dbca &
  4. Database Operation: (x) Oracle RAC database Instance Management
  5. Instance Operation: (x) Add an instance
  6. Select Database:
    (x) oradb (Local instance: oradb1...)
    Username: sys
    Password: ********
  7. Instance Details:
    Instance name: oradb3
    Node name: lnx03
  8. Summary: Finish
  9. Progress Page
    Initially blank...then shows progress.
    You can tail the Alert log:
    Ex: /u01/app/oracle/diag/rdbms/oradb/oradb3/trace/alert_oradb3.log
  10. Finish
    Should show: Instance "oradb3" added successfully on "lnx03".

Delete a Node

  1. Run dbca to Delete Instance.
  2. Delete Cluster Entries

Ensure Node to Delete is Unpinned

Run from lnx01 as grid.

 olsnodes -s -t
 ...
 lnx03  Active  Unpinned

If the node is pinned, then run the crsctl unpin css commands.
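
Run as root from a node that will remain in the cluster:

 crsctl unpin css -n lnx03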

Deinstall the Oracle Clusterware Home

From lnx03 grid user.

 cd /tmp 
 $ORACLE_HOME/deinstall/deinstall -local
  • You may be prompted to run scripts as root user during the process.
  • Caution: If you do not specify the -local flag, then the command removes the Oracle Grid Infrastructure home from every node in the cluster!

Delete Node

From Node 1 (lnx01) as root.

 export ORACLE_HOME=/u01/app/12.2.0.1/grid
 export GRID_HOME=/u01/app/12.2.0.1/grid
 cd $GRID_HOME/bin
 ./crsctl delete node -n lnx03

Verify the node was deleted OK.
From Node 1 (lnx01) as the grid user.

 cluvfy stage -post nodedel -n lnx03 -verbose

Ensure VIP Cleared

From Node 1 (lnx01) as root.

 export ORACLE_HOME=/u01/app/12.2.0.1/grid
 export GRID_HOME=/u01/app/12.2.0.1/grid
 cd $GRID_HOME/bin
 ./srvctl config vip -node lnx03

If the VIP still exists, then delete it:

 ./srvctl stop vip -node lnx03
 ./srvctl remove vip -vip lnx03 -f

Post Removal Tasks

  • Verify Cluster Nodes using: olsnodes.
  • Remove lnx03 entries from /etc/hosts.
  • Update DNS (bounce dnsmasq if used).

Notes

After removing a node, I noticed the database instances started slowly (but OK) the very first time afterward; there were no errors in the alert log. ASM might have been busy...TBD.

Oracle's docs on deleting a node. Also see: Doc ID 1377349.1


Installation Time Benchmarks

• Initial GI:              20 min.
• 2nd root script (node1): 10 min.
• 2nd root script (node2):  5 min.
• Remainder (Upd Inv, OraNet, ASM, GIMR...): 35 min.

Total time to install GI with 2 nodes: about 70 minutes (1 hour 10 minutes).

Alternative Storage Models (minimum values)

Older Hardware

This model is useful if you have older or slower hardware or cannot use the External disk option:

 VBox      Size (GB)  Linux  ASMFD   ASM Group
 ASM_GRID  12         sdb    disk01  GRID
 ASM_MGMT  40         sdc    disk02  MGMT
 ASM_DATA  16         sdd    disk03  DATA
 ASM_FRA   16         sde    disk04  FRA

• MGMT: Grid Infrastructure Management Repository (GIMR).
• GRID: OCR (Oracle Cluster Registry) & Voting Files.

Mini

This can be useful to test Oracle patches and the like.

 VBox      Size (GB)  Linux  ASMFD   ASM Group
 ASM_GRID  50         sdb    disk01  GRID
  • Use the GRID ASM disk group for everything (in place of DATA and FRA etc.).
  • 12c ASM findings: GI uses 6 GB initially; after the base DB is created, a total of 20 GB is used.
  • Using Defraggler on the host, I found disk performance for a test RAC to be good. DBCA DB creation took 30 min.

RAC Resources

Oracle checklists: Hardware | OS | Server Configuration.

Common Errors

GI Install Error

[INS-06003] Failed to setup passwordless SSH connectivity with the following node(s): [lnx02, lnx01]

Solutions

Option 1: Bypass DNS

  1. Validate if SSH is slow.
     lnx01] grid> ssh lnx02

  2. If the password prompt takes longer than 10 seconds, bypass DNS on all
     nodes as shown below:

    vi /etc/ssh/sshd_config and add:
      AddressFamily inet  
      UseDNS no

    service sshd restart

Option 2: Update openssh

 yum install openssh -y

Adapters: VBox -> Linux (example)

Adapter 3, Internal Net, pubnet
enp0s9: inet 192.168.56.71  netmask 255.255.255.0
        ether 08:00:27:1c:b3:cd

Adapter 4, Internal Net, privnet
enp0s10: inet 192.168.10.1  netmask 255.255.255.0
         ether 08:00:27:3a:a6:5a

Clear Removed Disk Session

Open a Windows command prompt (cmd) as Administrator.

cd C:\Program Files\Oracle\VirtualBox
vboxmanage list hdds > disks.txt
notepad disks.txt

Get UUID From Above
vboxmanage closemedium disk <uuid> --delete
Ex: vboxmanage closemedium disk 45522cd9-48ad-4656-9ce4-d3b44ed1ebd6 --delete

Template: Interfaces and Udev

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1 NAT               enp0s3  MAC_Here
2 Host-Only         enp0s8  MAC_Here
3 Internal: pubnet  enp0s9  MAC_Here
4 Internal: privnet enp0s10 MAC_Here
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
sdb  Device_ID_Here
sdc  Device_ID_Here
sdd  Device_ID_Here
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -