Oracle 19c GI Cluster Installation

What follows is the most common way to install Oracle 19c Grid Infrastructure (GI) in Linux enterprise environments. These instructions assume ASMFD is used to configure and manage your disks. Change the values in the examples to match your environment.


Directory and File Prep

If the directories have not already been prepped, perform the steps below.

Node 1

 su -
 mkdir -p /u01/app/
 chown grid:oinstall /u01/app/

 su - grid
 cd /u01/orasw
 cp /u01/app/
 cd /u01/app/
  • This extracts the software into the directory where you want your GI_HOME. The GI installation process will copy it to the other nodes as required.
  • If you used the initLnxOra script, this has already been done.
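The staging step above can be sketched as a small helper. The GI home path and zip file name in the example call are assumptions for illustration, not values from this document; substitute your own.

```shell
# Hedged sketch of staging the GI software: create the GI home and
# extract the grid zip into it (run as the grid user). Paths and the
# zip name are illustrative assumptions.
stage_gi_home() {
    gi_home="$1"
    zipfile="$2"
    mkdir -p "$gi_home" || return 1
    ( cd "$gi_home" && unzip -q "$zipfile" )
}

# Example (hypothetical values; adjust to your environment):
# stage_gi_home /u01/app/19.0.0/grid /u01/orasw/LINUX.X64_193000_grid_home.zip
```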

Nodes 2...

 su -
 cd /u01/app
 rm -rf /u01/app/
 rm -rf /u01/app/grid
 rm -rf /u01/app/
 rm -rf /u01/app/oracle
 rm -rf /u01/app/oraInventory
  • The purpose of the above is to clean up any files or directories left over if this node was cloned from Node 1.
  • This assumes the other nodes are not running any other Oracle or GI software. They should not be!
  • clear; rm -rf /u01/app/; rm -rf /u01/app/grid; rm -rf /u01/app/; rm -rf /u01/app/oracle; rm -rf /u01/app/oraInventory; echo "OK"

GI Installation

  • Ensure the other nodes are up and reachable (ping, nslookup, etc.).
  • Run the Cluster Verification Script.
    grid> /u01/app/ stage -pre dbcfg -fixup 
          -n lnx01,lnx02 -d /u01/app/oracle/product/
  • ETA on install about 1.5 hours for a 2-node RAC.
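The connectivity check in the first bullet can be scripted. This helper is an assumption for convenience, not part of the official procedure; the hostnames in the example call are the ones used elsewhere in this document.

```shell
# Hedged helper: confirm each cluster hostname responds to ping before
# starting the GI installer. Prints OK/FAIL per host and returns
# nonzero if any host fails.
check_nodes() {
    rc=0
    for h in "$@"; do
        if ping -c 1 -W 2 "$h" >/dev/null 2>&1; then
            echo "$h OK"
        else
            echo "$h FAIL"
            rc=1
        fi
    done
    return $rc
}

# Example: check_nodes lnx01 lnx02 scan-alfa
```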


As grid user from node1:

 export ORACLE_HOME=/u01/app/

Configuration Option

 ◉ Configure Oracle Grid Infrastructure for a New Cluster

Cluster Configuration

 ◉ Configure an Oracle Standalone Cluster

Product Languages


Grid Plug and Play

 Cluster Name: cluster-alfa
 SCAN Name:    scan-alfa
 SCAN Port:    1521
 ☐ Configure GNS
 See Database Names.
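For reference, the SCAN name is normally defined in DNS as three A records served round-robin on the public network. A hedged zone-file sketch follows; the IP addresses are placeholders, not values from this document.

```
; Example DNS zone entries (placeholder IPs) -- the SCAN name should
; resolve to three addresses on the public network.
scan-alfa    IN  A  192.0.2.111
scan-alfa    IN  A  192.0.2.112
scan-alfa    IN  A  192.0.2.113
```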

Cluster Node Information

 You should see the current system listed. Example: lnx01 | lnx01-vip
 ◉ Add a single node
 Public Hostname:  lnx02.local
 Virtual Hostname: lnx02-vip.local
    OS Username: grid 
    OS PW:       ********
    Msg displayed: 
    "Successfully established passwordless SSH connectivity between the selected nodes."
  ... Shows OK (already established)
  ... Verification process runs for a moment

Network Interface Usage

   = Public        (pubnet)
   = ASM & Private (privnet)

Storage Option

 ◉ Use Oracle Flex ASM for storage

Grid Infrastructure Management Repository (GIMR)

 Configure Grid Infrastructure Management Repository
 ◉ Yes

Grid Infrastructure Management Repository (GIMR)

 Do you want to create a separate ASM disk group for the GIMR data?
 ◉ No

The GIMR resources will be placed in the GRID ASM disk group (requires 40 GB). For a test RAC you may not need this.

Create ASM Disk Group

 🠊 If udev is used, [Change Discovery Path] to: /dev/asm-disk*
 Disk group name: GRID
 Redundancy: ◉ External = if you do not need ASM-based redundancy.
             ◉ Normal   = if you need ASM-based redundancy.
 Allocation Unit Size = 4MB
 OCR and Voting disk data...

 Select Disks
   ☑ /dev/asm-disk01 🠈 Select disk(s) to be used for the GI (+GRID).
   ☐ /dev/asm-disknn ... Other disks for +DATA and +FRA are added later.
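If the /dev/asm-disk* names come from udev rather than ASMFD, they are typically produced by a rules file along these lines. The file name, match keys, and serial below are illustrative assumptions; check your storage's actual IDs with udevadm.

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules (hedged example)
# Match the backing device by its serial and publish a stable symlink
# owned by grid:asmadmin.
KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL}=="<your-disk-serial>", \
    SYMLINK+="asm-disk01", OWNER="grid", GROUP="asmadmin", MODE="0660"
```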

Do Not Enable if Using udev!

☐ Configure Oracle ASM Filter Driver

GIMR Data Disk Group (If GIMR disk group enabled you will get this prompt.)

 Disk group name: MGMT
 ☐ /dev/sd_   🠈 Select disk(s) to be used for the GIMR (+MGMT).

Requires 40 GB.

ASM Password

 ◉ Use the same passwords for these accounts.

SYS and ASMSNMP will use the same password in this example.

Failure Isolation

 ◉ Do not use Intelligent Platform Management Interface (IPMI)

Management Options

 ☐ Register with Enterprise Manager (EM) Cloud Control

Operating System Groups

 Oracle ASM Administrator: asmadmin
 Oracle ASM DBA:           asmdba
 Oracle ASM Operator:      asmoper

Installation Location

 Oracle base:       /u01/app/grid
 Software location: /u01/app/ (displayed)

Create Inventory


Root script execution

  ☐ Automatically run configuration scripts
  Leaving this unchecked makes it easier to debug.

Prerequisites Check

  Fix any issues, then return to this point.
  If everything is acceptable, then press 



Process runs...

  • If you run top on a remote node you should see the grid user
    running ractrans commands while remote operations are running.

  • You will be prompted to run root scripts on each node.

    For the second root script, look for an entry like the one below at
    the end of the process to confirm it ran OK:
    CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

 Select  when the GI installation process has completed.

Log Locations

During Install

  • /tmp/GridSetupActions<Date-Time>/sshsetup1_<Date-Time>.log

After Install

  • /u01/app/grid/diag/crs/<node>/crs/trace/alert.log
  • /u01/app/grid/crsdata/<node>/crsconfig/rootcrs_<node>_<date>.log

Create grid BASH User Profile (.bashrc)

Uncomment the ORACLE_SID entry, setting it to the ASM instance name on this system (+ASM1, +ASM2, ...).

umask 022

# Global Definitions
if [ -f /etc/bashrc ]; then
   . /etc/bashrc
fi

# Oracle-recommended shell limits for the oracle and grid users
if [ "$USER" = "oracle" ] || [ "$USER" = "grid" ]; then
   if [ "$SHELL" = "/bin/ksh" ]; then
      ulimit -p 16384
      ulimit -n 65536
   else
      ulimit -u 16384 -n 65536
   fi
fi

export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/
export ORACLE_TERM=xterm
# export ORACLE_SID=+ASM1   # Uncomment and set to this node's ASM instance

export PATH=$ORACLE_HOME/bin:/usr/bin:/bin:/usr/local/bin:$HOME/.local/bin:$HOME/bin

export TEMP=/tmp
export TMPDIR=/tmp

# Aliases - Common
alias cl='clear;crontab -l'
alias l9='ls -alt | head -9' 
alias l20='ls -alt | head -20'
alias l50='ls -alt | head -50'
alias tf='date;ls -l|wc -l'

# Grid
alias asmlog='tail -f $ORACLE_BASE/log/diag/asmcmd/user_grid/$HOSTNAME/alert/alert.log'
alias clog='tail -f $ORACLE_BASE/diag/crs/$HOSTNAME/crs/trace/alert.log'
alias cdbase='cd $ORACLE_BASE'
alias cdhome='cd $ORACLE_HOME'
alias cdadmin='cd $ORACLE_BASE/diag/asm/$ORACLE_SID*/$ORACLE_SID*/trace'
alias cdnet='cd $ORACLE_HOME/network/admin'
alias sqp='rlwrap sqlplus / as sysasm'
alias src='source $HOME/.bashrc'
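Setting ORACLE_SID per node can also be automated in the profile. The helper below is an assumption, not part of the original profile; it assumes node hostnames end in the node number (lnx01, lnx02, ...).

```shell
# Hedged helper: map a node hostname ending in a number to its ASM SID,
# e.g. lnx01 -> +ASM1, lnx02 -> +ASM2. Assumes that naming convention.
asm_sid() {
    n=$(printf '%s' "$1" | grep -o '[0-9][0-9]*$' | sed 's/^0*//')
    printf '+ASM%s\n' "$n"
}

# Possible use in .bashrc (an assumption, not from the original):
# export ORACLE_SID=$(asm_sid "$(hostname -s)")
```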

ASM Configuration Assistant

After the GI has been installed and the grid user's bash profile updated, you can use the ASM Configuration Assistant (asmca) to manage your ASM environment. You'll want to create the remainder of your ASM disk groups, for instance DATA and FRA.

To launch run:
[grid] asmca &

Create Disk Groups

  1. Select: Disk Groups
  2. Select the button.
    • Disk Group Name: DATA
    • Redundancy: ◉ External
    • ☑ AFD:DISK02 🠈 ASMFD Example
    • ☑ /dev/asm-disk02 🠈 UDev Example
    • button.
  3. Select: Disk Groups (if not already done)
  4. Select the button.
    • Disk Group Name: FRA
    • Redundancy: ◉ External
    • ☑ AFD:DISK03 🠈 ASMFD Example
    • ☑ /dev/asm-disk03 🠈 UDev Example
    • button.

Cluster Verification Script DL

INS-06006 Passwordless SSH

Clean up the files in the .ssh directory for the grid user (in the VNC session):

 cd $HOME/.ssh
 rm -rf *

GI Install Benchmarks

 Initial install: 10 minutes
 root script (2nd) - Node 1: 10 minutes
 root script (2nd) - Each additional node: 5 minutes
 Remainder of installation: 45 minutes.

A two-node GI install takes about 1 hour on a modern system.

Cluster Services

Examples after a successful install.

grid> crsctl status res -t -init

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
      1        ONLINE  ONLINE       lnx01                    Started,STABLE
      1        ONLINE  ONLINE       lnx01                    STABLE
      1        ONLINE  ONLINE       lnx01                    STABLE
      1        ONLINE  ONLINE       lnx01                    STABLE
      1        ONLINE  ONLINE       lnx01                    STABLE
      1        ONLINE  ONLINE       lnx01                    STABLE
      1        ONLINE  ONLINE       lnx01                    OBSERVER,STABLE
      1        OFFLINE OFFLINE                               STABLE
      1        ONLINE  ONLINE       lnx01                    STABLE
      1        ONLINE  ONLINE       lnx01                    STABLE
      1        ONLINE  ONLINE       lnx01                    STABLE
      1        ONLINE  ONLINE       lnx01                    STABLE
      1        ONLINE  ONLINE       lnx01                    STABLE
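Output like the above can be scanned for trouble quickly. This helper is an assumption (not part of the install); it counts instance rows whose Target is ONLINE but whose State is not, which is the combination that indicates a problem.

```shell
# Hedged helper: from `crsctl status res -t -init` style output, count
# instance rows where Target is ONLINE but State is not ONLINE.
count_not_online() {
    awk '$2 == "ONLINE" && $3 != "ONLINE" { n++ } END { print n + 0 }'
}

# Example: crsctl status res -t -init | count_not_online
```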

grid> crsctl check cluster -all

**************************************************************
lnx01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
lnx02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************