2-Node RAC
Related Topics
Process
- VirtualBox: lnx01
- VirtualBox: lnx02
- Storage
- Grid Infrastructure (GI)
- Database
- Apply the latest patches.
Overview
The following instructions cover creating a 2-node RAC for learning and testing in a VirtualBox environment using Oracle Linux 7/8.
- dnsmasq is used to emulate a DNS server.
- The values specified below are for a testing environment.
Environment Specs
Memory
Oracle Version | OS RAM (MB) | DB Instance SGA/PGA (MB) |
---|---|---|
21.n/19.n | 9728 | 1536/512 |
12.1 | 8192 | 1512/256 |
Storage
VBox Disk | Size (GB) | Linux | ASMFD | ASM Group |
---|---|---|---|---|
ASM_GRID | 52 | sdb | disk01 | GRID |
ASM_DATA | 16 | sdc | disk02 | DATA |
ASM_FRA | 16 | sdd | disk03 | FRA |
ASM Groups
- GRID
  - Grid Infrastructure Management Repository (GIMR), AKA MGMT.
  - OCR (Oracle Cluster Registry) & Voting Files.
- DATA: Control, data, redo, parameter, password and temp files.
- FRA (Fast Recovery Area): Archived logs, control and redo files.
/etc/hosts Template
# public
192.168.56.71   lnx01.local        lnx01
192.168.56.72   lnx02.local        lnx02
#192.168.56.73  lnx03.local        lnx03
# private
192.168.10.1    lnx01-priv.local   lnx01-priv
192.168.10.2    lnx02-priv.local   lnx02-priv
#192.168.10.3   lnx03-priv.local   lnx03-priv
# virtual
192.168.56.81   lnx01-vip.local    lnx01-vip
192.168.56.82   lnx02-vip.local    lnx02-vip
#192.168.56.83  lnx03-vip.local    lnx03-vip
# SCAN
192.168.56.91   scan-alfa.local    scan-alfa
192.168.56.92   scan-alfa.local    scan-alfa
192.168.56.93   scan-alfa.local    scan-alfa
# Other
192.168.56.75   lnxsb.local        lnxsb
VirtualBox Interface Template
Int | MAC | Name | Net Cfg |
---|---|---|---|
1 | 080027428D98 | enp0s3 | Automatic (DHCP) |
2 | 08002751B0C9 | enp0s8 | Automatic (DHCP) |
3 | 080027B5C154 | enp0s9 | Manual -> Select [Add]: 192.168.56.71/24 |
4 | 080027EB6BDB | enp0s10 | Manual -> Select [Add]: 192.168.10.1/24 |
Your MAC addresses may differ.
Procedure
If you are using dnsmasq, you can ignore any resolv.conf, SCAN and DNS warnings raised by the Oracle installation tools.
VirtualBox: lnx01
- Create the Linux VM, making sure to include the network interfaces required for RAC.
- Set /etc/hosts to include all network points for this 2-node RAC environment (see the template above).
- Set the hostname to lnx01.
- Prep node 1 for Oracle.
- Configure DNS and/or dnsmasq.
- Configure chronyd/NTP.
- Restart OS and test networking.
- shutdown -r now
- Test DNS/dnsmasq via nslookup and ping.
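For example, a quick check from lnx01 (names and addresses assume the /etc/hosts template above):
nslookup lnx01.local
nslookup scan-alfa.local      # should return the three SCAN addresses
ping -c 2 192.168.56.71       # public interface
ping -c 2 192.168.10.1        # private interface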
VirtualBox: lnx02
- Clone lnx01 as lnx02.
  When cloning, select the option to generate new MAC addresses for the RAC interfaces (public and private).
- Start the VM for lnx02 and set the hostname to lnx02.
- Using the Linux GUI: Applications 🠊 System Tools 🠊 Settings 🠊 Network
  Set the public IP (192.168.56.72) and the private IP (192.168.10.2).
- Configure DNS and/or dnsmasq.
- Restart OS and test networking.
- shutdown -r now
- Test via nslookup, ping and ntp (ntpq -p).
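For example, from lnx02 (values assume the templates above):
hostnamectl set-hostname lnx02     # if not already set
nslookup lnx01.local
nslookup scan-alfa.local
ping -c 2 lnx01                    # public network
ping -c 2 lnx01-priv               # private interconnect
ntpq -p                            # time-sync peers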
Storage
- Create VirtualBox Disks:
  - Shut down both nodes.
  - Create the shared disks on lnx01: ASM_GRID, ASM_DATA and ASM_FRA (see the sketches after this list).
- Create Linux Disk Partitions:
  One disk partition spanning the entire disk is created for each disk.
  - Start both nodes.
  - Create (fdisk) the disk partitions at the OS level on the first node (lnx01).
- Set Disk Driver/Bindings (ASMFD or udev).
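A sketch of the shared-disk creation, run on the VirtualBox host; the file path and the "SATA" controller name are assumptions, and the sizes match the storage table above. Attach the same media to both VMs:
vboxmanage createmedium disk --filename E:\VM\ASM_GRID.vdi --size 53248 --variant Fixed
vboxmanage modifymedium disk E:\VM\ASM_GRID.vdi --type shareable
vboxmanage storageattach lnx01 --storagectl "SATA" --port 2 --device 0 --type hdd --medium E:\VM\ASM_GRID.vdi
vboxmanage storageattach lnx02 --storagectl "SATA" --port 2 --device 0 --type hdd --medium E:\VM\ASM_GRID.vdi
Repeat for ASM_DATA (16384 MB) and ASM_FRA (16384 MB) on the next free ports.
And a minimal partition pass on lnx01, one primary partition spanning each disk (parted shown here as a non-interactive alternative to fdisk; device names per the storage table):
parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100%
parted -s /dev/sdc mklabel msdos mkpart primary 1MiB 100%
parted -s /dev/sdd mklabel msdos mkpart primary 1MiB 100%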
GI Installation
Database
- On lnx01 run the Database Product Installation: 19.3 | 12.1
- Create Database: 19c | 12.1
- Set ORACLE_SID in each node's .bashrc to match the instance name, for example ORACLE_SID=oradb1 (see the sketch after this list).
If creating on a laptop, disabling the Management Options (CVU & Database Express) will improve creation time.
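For example, in the oracle user's ~/.bashrc (instance names follow the oradb convention used later in this note):
# lnx01
export ORACLE_SID=oradb1
# lnx02
export ORACLE_SID=oradb2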
Check Status
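A quick status pass (database name oradb assumed, per the dbca examples later in this note):
# as the grid user
crsctl check cluster -all
crsctl stat res -t
olsnodes -s -t
# as the oracle user
srvctl status database -d oradb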
APPENDIX
Configure DNS
Set the nameserver IP to the current system's address.
- chattr -i /etc/resolv.conf
- vi /etc/resolv.conf
  Example entry line: nameserver 192.168.56.71
- chattr +i /etc/resolv.conf
Configure dnsmasq
Set the listen-address IP to the current system's address.
- cp /etc/dnsmasq.conf /etc/dnsmasq.conf.orig
- vi /etc/dnsmasq.conf
expand-hosts
local=/localdomain/
listen-address=127.0.0.1
listen-address=192.168.56.71
bind-interfaces
Make dnsmasq Active
- systemctl enable dnsmasq.service
- service dnsmasq restart
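To confirm dnsmasq is answering, query it directly (names per the /etc/hosts template above):
nslookup lnx01-vip.local 127.0.0.1
nslookup scan-alfa.local 127.0.0.1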
Cluster Verification Script
/u01/orasw/grid/runcluvfy.sh stage -pre crsinst -n lnx01,lnx02 -fixup -verbose
Restart Networking:
- /etc/init.d/network stop
- /etc/init.d/network start
Check If All Services Started OK
- systemctl
Add a Node (lnx03)
Configure Base System
- Clone/create a VM with the following specs:
- Ensure the Linux OS prep changes have been made (same as for the original nodes).
- Purge directories (they will be recreated as needed).
rm -rf oraInventory
rm -rf /u01/app/12.2.0.1/grid
- Update /etc/hosts
- Set entries for: public, private and virtual (vip); see the example after this list.
- Ensure other nodes have updated /etc/hosts.
- Start VM for lnx03 and set the hostname to lnx03.
- Using the Linux GUI: Applications -> System Tools -> Settings -> Network
- Set the public IP (ex: 192.168.56.73).
- Set the private IP (ex: 192.168.10.3).
- Configure DNS and/or dnsmasq if used.
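For reference, the lnx03 lines to add (or uncomment) in /etc/hosts on every node, matching the hosts template earlier in this note:
192.168.56.73   lnx03.local        lnx03
192.168.10.3    lnx03-priv.local   lnx03-priv
192.168.56.83   lnx03-vip.local    lnx03-vip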
Configure Storage
- Shutdown all nodes.
- In VirtualBox, attach the shared disks to lnx03.
- Start lnx01 then lnx02.
  Ensure the cluster and database instances are all working OK.
- If using udev, start lnx03 and configure udev (see the sketch after this list).
- Ensure the disks are visible from lnx03 with the correct privileges.
- ls -rlt /dev/sd?1
- Restart lnx03
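If using udev, a sketch of the rule file (e.g. /etc/udev/rules.d/99-oracle-asmdevices.rules). The scsi_id result is a placeholder (see the template at the end of this note) and grid:asmadmin ownership is an assumption; match whatever the existing nodes use:
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="Device_ID_Here", SYMLINK+="oracleasm/disk01", OWNER="grid", GROUP="asmadmin", MODE="0660"
Repeat the line for disk02 (sdc) and disk03 (sdd), then reload:
udevadm control --reload-rules
udevadm trigger --type=devices --action=change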
Add Node to Cluster
Run the below from lnx01 as the grid user.
- Ensure all nodes are started and can see each other OK (ping, nslookup, ntpq -p etc.).
- Ensure the RAC on nodes 1 and 2 is working and the database is up.
- Run addnode.sh
- cd /u01/app/12.2.0.1/grid/addnode
./addnode.sh "CLUSTER_NEW_NODES={lnx03}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={lnx03-vip.localdomain}"
- GUI Cluster Add Node app will launch...
- Set Cluster Add Node Information.
- Edit button.
- Public Hostname: lnx03.localdomain
- Node Role: HUB
- Virtual Hostname: lnx03-vip.localdomain
- Confirm the displayed values look OK for the node you are adding.
- SSH Connectivity button.
  Note that the [x] Reuse private and public keys option should be selected.
  Enter the grid user's OS password, then select Setup.
- Edit button.
- Prerequisite Checks
  Ensure everything looks OK. Correct anything listed that could be an issue.
- Summary
  Select Install.
- Install Product
- Process runs that copies files, updates Cluster Inventory and installs GI on new node...
- Run root scripts when prompted.
- You can monitor details via:
tail -f /u01/app/grid/diag/crs/lnx03/crs/trace/alert.log
- Finish
Select Close.
Post GI Steps
- Update BASH Profile for grid user.
- Run $ORACLE_HOME/OPatch/opatch lsinventory and validate that all nodes are shown and the new node has the same patch version.
- Check status of cluster.
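For example, from any node as the grid user:
crsctl check cluster -all
olsnodes -s -t        # lnx03 should now be listed as Active
crsctl stat res -t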
Add Instance to Database
Run addnode.sh
- Log into lnx01 as the oracle user.
- cd $ORACLE_HOME/addnode
- ./addnode.sh "CLUSTER_NEW_NODES={lnx03}"
- Node Selection: [x] lnx03
  Set up SSH connectivity for the oracle user and lnx03.
- Prerequisite Checks
  Resolve any issues identified, then continue.
- Summary
  If all looks good, press the Install button.
- Install Product
  Process runs... You will be prompted to run the root script on lnx03.
Set .bashrc
Set the oracle user's BASH profile on lnx03, as shown below.
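For example (the instance name matches the dbca step below):
export ORACLE_SID=oradb3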
Run dbca
- From lnx01 as the oracle user.
- cd $ORACLE_HOME
- dbca &
- Database Operation: (x) Oracle RAC database Instance Management
- Instance Operation: (x) Add an instance
- Select Database:
  (x) oradb (Local instance: oradb1...)
  Username: sys
  Password: ********
- Instance Details:
  Instance name: oradb3
  Node name: lnx03
- Summary: Finish
- Progress Page
Initially blank...then shows progress.
You can tail the Alert log:
Ex: /u01/app/oracle/diag/rdbms/oradb/oradb3/trace/alert_oradb3.log
- Finish
Should show: Instance "oradb3" added successfully on "lnx03".
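A quick verification from any node as the oracle user (database name oradb as above):
srvctl status database -d oradb      # should list oradb1, oradb2 and oradb3 running
srvctl config database -d oradb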
Delete a Node
- Run dbca to Delete Instance.
- Delete Cluster Entries
Ensure Node to Delete is Unpinned
Run from lnx01 as grid.
olsnodes -s -t
...
lnx03  Active  Unpinned
If the node is pinned, then run the crsctl unpin css command (crsctl unpin css -n lnx03).
Deinstall the Oracle Clusterware Home
From lnx03 as the grid user.
cd /tmp
$ORACLE_HOME/deinstall/deinstall -local
- You may be prompted to run scripts as root user during the process.
- Caution: If you do not specify the -local flag, then the command removes the Oracle Grid Infrastructure home from every node in the cluster!
Delete Node
From Node 1 (lnx01) as root.
export ORACLE_HOME=/u01/app/12.2.0.1/grid
export GRID_HOME=/u01/app/12.2.0.1/grid
cd $GRID_HOME/bin
./crsctl delete node -n lnx03
Verify Deleted OK.
From Node 1 (lnx01) as grid user.
cluvfy stage -post nodedel -n lnx03 -verbose
Ensure VIP Cleared
From Node 1 (lnx01) as root.
export ORACLE_HOME=/u01/app/12.2.0.1/grid
export GRID_HOME=/u01/app/12.2.0.1/grid
cd $GRID_HOME/bin
./srvctl config vip -node lnx03
If the VIP still exists, then delete it:
./srvctl stop vip -node lnx03
./srvctl remove vip -vip lnx03 -f
Post Removal Tasks
- Verify Cluster Nodes using: olsnodes.
- Remove lnx03 entries from /etc/hosts.
- Update DNS (bounce dnsmasq if used).
Notes
After removing a node, the database instances started slowly (but OK) the very first time; there were no errors in the alert log. ASM may have been busy... TBD.
Oracle's docs on deleting a node. Also see: Doc ID 1377349.1
Installation Time Benchmarks
- Initial GI: 20 min.
- 2nd root script (node1): 10 min.
- 2nd root script (node2): 5 min.
- Remainder (Upd Inv, OraNet, ASM, GIMR...): 35 min.
Total time to install GI with 2 nodes: 1 hour 10 minutes.
Alternative Storage Models (minimum values)
Older Hardware
This model is useful if you have older, lower-spec hardware or cannot use the External disk option:
VBox Disk | Size (GB) | Linux | ASMFD | ASM Group |
---|---|---|---|---|
ASM_GRID | 12 | sdb | disk01 | GRID |
ASM_MGMT | 40 | sdc | disk02 | MGMT |
ASM_DATA | 16 | sdd | disk03 | DATA |
ASM_FRA | 16 | sde | disk04 | FRA |
- MGMT: Grid Infrastructure Management Repository (GIMR).
- GRID: OCR (Oracle Cluster Registry) & Voting Files.
Mini
This can be useful to test Oracle patches and the like.
VBox Disk | Size (GB) | Linux | ASMFD | ASM Group |
---|---|---|---|---|
ASM_GRID | 50 | sdb | disk01 | GRID |
- Use the GRID ASM disk group for everything (in place of DATA and FRA etc.).
- 12c ASM findings: GI uses 6 GB initially; after the base DB is created, a total of 20 GB is used.
- Using Defraggler, disk performance for a test RAC was good; DBCA DB creation took 30 min.
RAC Resources
Oracle checklists: Hardware | OS | Server Configuration.
Common Errors
GI Install Error
[INS-06003] Failed to setup passwordless SSH connectivity with the following node(s): [lnx02, lnx01]
Solutions
Option 1: Bypass DNS
1. Validate if SSH is slow:
   lnx01] grid> ssh lnx02
2. If the password prompt takes longer than 10 seconds, bypass DNS on all nodes as shown below:
   vi /etc/ssh/sshd_config
   Add:
   AddressFamily inet
   UseDNS no
   Then restart SSH: service sshd restart
Option 2: Update openssh
yum install openssh -y
Adapters: VBox -> Linux (example)
Adapter 3, Internal Net, pubnet
enp0s9: inet 192.168.56.71  netmask 255.255.255.0  ether 08:00:27:1c:b3:cd
Adapter 4, Internal Net, privnet
enp0s10: inet 192.168.10.1  netmask 255.255.255.0  ether 08:00:27:3a:a6:5a
Clear Removed Disk Session
Open a DOS console as Administrator.
cd C:\Program Files\Oracle\VirtualBox
vboxmanage list hdds > disks.txt
notepad disks.txt
Get the UUID from above.
vboxmanage closemedium disk <uuid> --delete
Ex: vboxmanage closemedium disk 45522cd9-48ad-4656-9ce4-d3b44ed1ebd6 --delete
Template: Interfaces and Udev
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1  NAT                enp0s3   MAC_Here
2  Host-Only          enp0s8   MAC_Here
3  Internal: pubnet   enp0s9   MAC_Here
4  Internal: privnet  enp0s10  MAC_Here
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
sdb  Device_ID_Here
sdc  Device_ID_Here
sdd  Device_ID_Here
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -