oracledba.help

Oracle 19c GI Cluster Installation

<- Install

TOC

Overview

What follows is the most common way to install the Oracle 19c Grid Infrastructure (GI) for Linux enterprise environments. These instructions assume your disks are configured and managed with either udev or ASMFD; examples are shown for both. Change values in the examples to match your environment.

Prerequisites

Directory and File Prep

If the directories have not already been prepped, perform the steps below.

Node 1

 su -
 mkdir -p /u01/app/19.3.0.0.0/grid
 chown grid:oinstall /u01/app/19.3.0.0.0/grid

 su - grid
 cd /u01/orasw
 cp LINUX.X64_193000_grid_home.zip /u01/app/19.3.0.0.0/grid/
 cd /u01/app/19.3.0.0.0/grid
 unzip LINUX.X64_193000_grid_home.zip
  • This extracts the software into the directory that will become your GI_HOME. The GI installation process will copy it to the other nodes as required.
  • If you used the initLnxOra script, this has already been done.
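
A quick sanity check after the unzip (a sketch, assuming the paths above): the top of the grid home should be owned by grid:oinstall, and gridSetup.sh should be present.

 ls -ld /u01/app/19.3.0.0.0/grid
 ls /u01/app/19.3.0.0.0/grid/gridSetup.sh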

Nodes 2...

 su -
 cd /u01/app
 rm -rf /u01/app/12.1.0.2
 rm -rf /u01/app/grid
 rm -rf /u01/app/19.3.0.0.0
 rm -rf /u01/app/oracle
 rm -rf /u01/app/oraInventory
  • The purpose of the above is to clean up any files or directories left over if this node was cloned from Node 1.
  • This assumes the other nodes are not running any other Oracle or GI software. They should not be!
  • Or as a one-liner: clear; rm -rf /u01/app/12.1.0.2; rm -rf /u01/app/grid; rm -rf /u01/app/19.3.0.0.0; rm -rf /u01/app/oracle; rm -rf /u01/app/oraInventory; echo "OK"

GI Installation

  • Ensure the other nodes are up and reachable (ping, nslookup, etc.); see the example checks after this list.
  • Run the Cluster Verification Utility from the unzipped grid home:
    grid> /u01/app/19.3.0.0.0/grid/runcluvfy.sh stage -pre crsinst -fixup -n lnx01,lnx02
  • Expect the install to take roughly 1 to 1.5 hours for a 2-node RAC.
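
For example, basic name resolution and connectivity checks from node 1 might look like this (hostnames are the examples used in this guide):

 grid> ping -c 2 lnx02
 grid> nslookup lnx02
 grid> nslookup lnx02-vip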

Procedure

As the grid user from node 1:

 export ORACLE_HOME=/u01/app/19.3.0.0.0/grid
 cd $ORACLE_HOME
 ./gridSetup.sh

Configuration Option

 ◉ Configure Oracle Grid Infrastructure for a New Cluster

Cluster Configuration

 ◉ Configure an Oracle Standalone Cluster

Product Languages

 Press [Next].

Grid Plug and Play

 Cluster Name: cluster-alfa
 SCAN Name:    scan-alfa
 SCAN Port:    1521
 ☐ Configure GNS
 See Database Names.
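
Before continuing, it is worth confirming the SCAN name resolves in DNS; it should normally return three addresses (one may be acceptable for a test cluster). A quick check, using the example SCAN name above:

 nslookup scan-alfa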

Cluster Node Information

 Should see the current system. Example: lnx01 | lnx01-vip
 
 ◉ Add a single node
 Public Hostname:  lnx02.local
 Virtual Hostname: lnx02-vip.local
    
    OS Username: grid 
    OS PW:       ********
     ...
    Msg displayed: 
    "Successfully established passwordless SSH connectivity between the selected nodes."
  ... Shows OK (already established)
  ... Verification process runs for a moment

Network Interface Usage

 192.168.56.0  = Public        (pubnet)
 192.168.10.0  = ASM & Private (privnet)
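
After the installation you can confirm how the interfaces were registered with oifcfg; the output below is only an illustration assuming the subnets above (interface names will vary):

 grid> oifcfg getif
 enp0s8  192.168.56.0  global  public
 enp0s9  192.168.10.0  global  cluster_interconnect,asm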

Storage Option

 ◉ Use Oracle Flex ASM for storage

Grid Infrastructure Management Repository (GIMR)

 Configure Grid Infrastructure Management Repository
 ◉ Yes

Grid Infrastructure Management Repository (GIMR)

 Do you want to create a separate ASM disk group for the GIMR data?
 ◉ No

The GIMR resources will be placed in the GRID ASM disk group (requires 40 GB). For a test RAC you may not need this.

Create ASM Disk Group

 🠊 If udev is used, [Change Discovery Path] to: /dev/asm-disk*
 Disk group name: GRID
 Redundancy: ◉ External = if not needing ASM-based redundancy.
             ◉ Normal   = if needing ASM-based redundancy.
 Allocation Unit Size = 4 MB
 OCR and Voting disk data...

 Select Disks  
   ☑ /dev/asm-disk01 🠈 Select disk(s) to be used for the GI (+GRID).
   ☐ /dev/asm-disknn ...Other disks for +DATA and +FRA added later (see the device check below).

Do Not Enable if Using UDev!

☐ Configure Oracle ASM Filter Driver
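
Before selecting disks it can help to confirm the candidate devices exist and are owned by the grid software owner. A sketch for the udev case, assuming the device names and grid:asmadmin ownership set by your udev rules:

 ls -l /dev/asm-disk*
 brw-rw---- 1 grid asmadmin ... /dev/asm-disk01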

GIMR Data Disk Group (If a separate GIMR disk group was enabled you will get this prompt.)

 Disk group name: MGMT
 ☐ /dev/sd_   <= Select disk(s) to be used for the GIMR (+MGMT).

Requires 40 GB.

ASM Password

 ◉ Use same passwords for these accounts.

SYS and ASMSNMP will use the same password in this example.

Failure Isolation

 ◉ Do not use Intelligent Platform Management Interface (IPMI)

Management Options

 ☐ Register with Enterprise Manager (EM) Cloud Control

Operating System Groups

 Oracle ASM Administrator: asmadmin
 Oracle ASM DBA:           asmdba
 Oracle ASM Operator:      asmoper

Installation Location

 Oracle base:       /u01/app/grid
 Software location: /u01/app/19.3.0.0.0/grid (displayed)

Create Inventory

  /u01/app/oraInventory

Root script execution

  ☐ Automatically run configuration scripts
  Leave this unchecked; running the root scripts manually makes it easier to debug.

Prerequisites Check

  Fix any issues, then return to this point.
  If everything is acceptable, press [Next].

Summary

 Press [Install].

Process runs...

  • If you run top on a remote node you should see the grid user
    running ractrans commands while remote operations are running.

  • You will be prompted to run root scripts on each node
    (an example run order is shown below):
    /u01/app/oraInventory/orainstRoot.sh
    /u01/app/19.3.0.0.0/grid/root.sh

    For the second root script, look for an entry like the one below
    at the end of the output to ensure it ran OK:
    CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
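
    When prompted, run the scripts as root, completing both on node 1
    before moving to the remaining nodes, one node at a time. For example:

    [root@lnx01]# /u01/app/oraInventory/orainstRoot.sh
    [root@lnx01]# /u01/app/19.3.0.0.0/grid/root.sh
    [root@lnx02]# /u01/app/oraInventory/orainstRoot.sh
    [root@lnx02]# /u01/app/19.3.0.0.0/grid/root.sh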

 Select [Close] when the GI installation process has completed.

Log Locations

During Install

  • /tmp/GridSetupActions<Date-Time>/sshsetup1_<Date-Time>.log

After Install

  • /u01/app/grid/diag/crs/<node>/crs/trace/alert.log
  • /u01/app/grid/crsdata/<node>/crsconfig/rootcrs_<node>_<date>.log
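
While root.sh is running you can follow its progress from another session (a sketch, assuming the default locations above):

 tail -f /u01/app/grid/crsdata/$(hostname -s)/crsconfig/rootcrs_*.log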

Create grid BASH User Profile (.bashrc)

Uncomment the ORACLE_SID entry, setting it to the ASM instance name on this system (+ASM1, +ASM2, ...).

umask 022

# Global Definitions
if [ -f /etc/bashrc ]; then
   . /etc/bashrc
fi

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
   if [ $SHELL = "/bin/ksh" ]; then
      ulimit -p 16384
      ulimit -n 65536
   else
      ulimit -u 16384 -n 65536
   fi
fi

#TBD export ORACLE_SID=+ASM1;
export ORACLE_BASE=/u01/app/grid;
export ORACLE_HOME=/u01/app/19.3.0.0.0/grid;
export ORACLE_TERM=xterm;

export PATH=$ORACLE_HOME/bin:/usr/bin:/bin:/usr/local/bin:$HOME/.local/bin:$HOME/bin

export TEMP=/tmp
export TMPDIR=/tmp

# Aliases - Common
alias cl='clear;crontab -l'
alias l9='ls -alt | head -9' 
alias l20='ls -alt | head -20'
alias l50='ls -alt | head -50'
alias tf='date;ls -l|wc -l'

# Grid
alias asmlog='tail -f $ORACLE_BASE/log/diag/asmcmd/user_grid/$HOSTNAME/alert/alert.log'
alias clog='tail -f $ORACLE_BASE/diag/crs/$HOSTNAME/crs/trace/alert.log'
alias cdbase='cd $ORACLE_BASE'
alias cdhome='cd $ORACLE_HOME'
alias cdadmin='cd $ORACLE_BASE/diag/asm/$ORACLE_SID*/$ORACLE_SID*/trace'
alias cdnet='cd $ORACLE_HOME/network/admin'
alias sqp='rlwrap sqlplus / as sysasm'
alias src='source $HOME/.bashrc'
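
Once the profile is in place on each node, a quick check that the grid environment works (a sketch, assuming the GRID disk group created earlier):

 su - grid
 echo $ORACLE_SID $ORACLE_HOME
 asmcmd lsdg        # should list the GRID disk group
 crsctl check crs   # CRS, CSS and EVM should report online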

ASM Configuration Assistant

After the GI has been installed and the grid user's BASH profile updated, you can use the ASM Configuration Assistant (asmca) to manage your ASM environment. For instance, you will want to create the remaining ASM disk groups (DATA and FRA).

To launch run:
[grid] asmca &
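
asmca is a GUI tool, so run it from your VNC desktop or an SSH session with X forwarding. A minimal sketch (the DISPLAY value is only an example; match it to your VNC display):

 [grid] export DISPLAY=:1
 [grid] asmca &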

Create Disk Groups

  1. Select: Disk Groups
  2. Select the [Create] button.
    • Disk Group Name: DATA
    • Redundancy: ◉ External
    • ☑ AFD:DISK02 🠈 ASMFD Example
    • ☑ /dev/asm-disk02 🠈 UDev Example
    • Select the [OK] button.
  3. Select: Disk Groups (if not already done)
  4. Select the [Create] button.
    • Disk Group Name: FRA
    • Redundancy: ◉ External
    • ☑ AFD:DISK03 🠈 ASMFD Example
    • ☑ /dev/asm-disk03 🠈 UDev Example
    • Select the [OK] button.
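
Disk groups can also be created from the command line instead of asmca. A minimal sketch using SQL*Plus, assuming the udev device names from the examples above (adjust devices and redundancy to your environment):

 grid> sqlplus / as sysasm
 SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/asm-disk02';
 SQL> CREATE DISKGROUP FRA  EXTERNAL REDUNDANCY DISK '/dev/asm-disk03';

On the other nodes the new disk groups may need to be mounted, for example:

 grid> srvctl start diskgroup -diskgroup DATA -node lnx02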

INS-06006 Passwordless SSH

If you receive error INS-06006 (passwordless SSH connectivity failed), clean up the files in the grid user's .ssh directory on each node (from your VNC session):

 cd $HOME/.ssh
 rm -rf *
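
Then return to the installer's Cluster Node Information screen and redo the SSH setup step. You can also confirm connectivity by hand; the first connection may prompt you to accept the host key (hostnames are the examples used above):

 grid> ssh lnx02 date
 grid> ssh lnx01 date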

GI Install Benchmarks

 Initial install: 10 minutes
 root script (2nd) - Node 1: 10 minutes
 root script (2nd) - Each additional node: 5 minutes
 Remainder of installation: 45 minutes.

A two node GI install takes about 1 hour on a modern system.

Cluster Services

Examples after a successful install.

grid> crsctl status res -t -init

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       lnx01                    Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.crf
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.crsd
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.cssd
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.ctssd
      1        ONLINE  ONLINE       lnx01                    OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.gipcd
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.gpnpd
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.mdnsd
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.storage
      1        ONLINE  ONLINE       lnx01                    STABLE
--------------------------------------------------------------------------------

grid> crsctl check cluster -all

**************************************************************
lnx01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
lnx02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
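
A few other quick post-install checks as the grid user:

 grid> olsnodes -n -s -t    # node names, numbers, status and pinned state
 grid> srvctl status asm    # ASM should be running on all nodes
 grid> crsctl check crs     # local CRS, CSS and EVM health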