Oracle 19c GI Cluster Installation
TOC
- Directory and File Prep
- GI Installation
- Log Locations
- Post Installation Actions
- Grid User BASH Profile
- QC Checks: Grid Status | Example | CVU
- ASM Configuration Assistant (asmca) 🠊 +DATA and +FRA
Overview
What follows is the most common way to install Oracle 19c Grid Infrastructure (GI) in Linux enterprise environments. These instructions assume ASMFD is used to configure and manage your disks. Change the values in the examples to match your environment.
Prerequisites
- Linux OS prerequisites performed.
- Linux disk partitions have been created (fdisk).
- Disks configured using: ASMFD or UDev
- Installation files downloaded to your local disk.
- If the GI is for RAC, run the Cluster Verification Utility.
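Before starting, you may also want to confirm the candidate disks are visible on every node. A minimal check, assuming UDev-style device names like /dev/asm-disk* (adjust the paths for your environment):
lsblk
ls -l /dev/asm-disk*
# Each node should report the same set of shared devices, typically owned by the grid user.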
Directory and File Prep
If the directories have not already been prepped, perform the steps below.
Node 1
su -
mkdir -p /u01/app/19.3.0.0.0/grid
chown grid:oinstall /u01/app/19.3.0.0.0/grid
su - grid
cd /u01/orasw
cp LINUX.X64_193000_grid_home.zip /u01/app/19.3.0.0.0/grid/
cd /u01/app/19.3.0.0.0/grid
unzip LINUX.X64_193000_grid_home.zip
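Before continuing, a quick sanity check (not part of the official steps) confirms the ownership and that the unzip succeeded:
ls -ld /u01/app/19.3.0.0.0/grid            # should show grid:oinstall ownership
ls /u01/app/19.3.0.0.0/grid/gridSetup.sh   # the installer script should be present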
- This extracts the software into the directory where you want your GI_HOME. The GI process will copy it to the other nodes as required.
- If you used the initLnxOra script, this has already been done.
Nodes 2...
su -
cd /u01/app
rm -rf /u01/app/12.1.0.2
rm -rf /u01/app/grid
rm -rf /u01/app/19.3.0.0.0
rm -rf /u01/app/oracle
rm -rf /u01/app/oraInventory
- The purpose of the above is to clean up any files or directories if this node was cloned from Node 1.
- This assumes the other nodes are not running any other Oracle or GI software. They should not be!
- One-liner equivalent: clear; rm -rf /u01/app/12.1.0.2; rm -rf /u01/app/grid; rm -rf /u01/app/19.3.0.0.0; rm -rf /u01/app/oracle; rm -rf /u01/app/oraInventory; echo "OK"
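If you have several additional nodes, a small loop such as the following can run the same cleanup remotely. This is only a sketch; it assumes root SSH access and uses hypothetical node names (adjust the list for your environment):
for node in lnx02 lnx03; do
  ssh root@${node} 'rm -rf /u01/app/12.1.0.2 /u01/app/grid /u01/app/19.3.0.0.0 /u01/app/oracle /u01/app/oraInventory; echo "$(hostname): OK"'
done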
GI Installation
- Ensure the other initial nodes are up and reachable (ping, nslookup, etc.).
- Run the Cluster Verification Utility:
grid> /u01/app/19.3.0.0.0/grid/bin/cluvfy stage -pre dbcfg -fixup -n lnx01,lnx02 -d /u01/app/oracle/product/19.3.0.0.0/dbhome_1
- Allow about 1.5 hours for the install of a 2-node RAC.
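A GI-specific pre-install check can also be run with cluvfy before launching the installer. One commonly used form, assuming the node names from this guide:
grid> /u01/app/19.3.0.0.0/grid/bin/cluvfy stage -pre crsinst -n lnx01,lnx02 -fixup -verbose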
Procedure
As grid user from node1:
If on Linux 8, run:
export SRVM_DISABLE_MTTRANS=true
export CV_ASSUME_DISTID=OEL7.8
export ORACLE_HOME=/u01/app/19.3.0.0.0/grid
cd $ORACLE_HOME
./gridSetup.sh
Configuration Option
🖸 Configure Oracle Grid Infrastructure for a New Cluster
Cluster Configuration
🖸 Configure an Oracle Standalone Cluster
Product Languages
Press Next
Grid Plug and Play
Cluster Name: cluster-alfa
SCAN Name: scan-alfa
SCAN Port: 1521
☐ Configure GNS
See Database Names.
Cluster Node Information
You should see the current system listed. Example: lnx01 | lnx01-vip
Add
  🖸 Add a single node
  Public Hostname: lnx02.local
  Virtual Hostname: lnx02-vip.local
SSH connectivity
  OS Username: grid
  OS PW: ********
  Setup ... Message displayed: "Successfully established passwordless SSH connectivity between the selected nodes."
  Test ... Shows OK (already established)
Next ... Verification process runs for a moment.
If on VirtualBox and you hit an INS-06003 error, see the passwordless SSH workarounds under Common Errors below.
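Passwordless SSH can also be spot-checked manually before relying on the installer's Test button. A quick check from node 1 as the grid user (node names from this guide):
grid> ssh lnx02 date
grid> ssh lnx02.local date
# Each command should return the remote date with no password prompt.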
Network Interface Usage
192.168.56.0 = Public (pubnet)
192.168.10.0 = ASM & Private (privnet)
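To confirm which interface sits on which subnet before assigning usage, a quick look at the interface addresses helps. The interface names below are only examples; yours will differ:
ip -br addr
# Example mapping (hypothetical interface names):
#   enp0s8  192.168.56.x  -> Public (pubnet)
#   enp0s9  192.168.10.x  -> ASM & Private (privnet)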
Storage Option
🖸 Use Oracle Flex ASM for storage
Grid Infrastructure Management Repository (GIMR)
Configure Grid Infrastructure Management Repository 🖸 Yes
Grid Infrastructure Management Repository (GIMR)
Do you want to create a separate ASM disk group for the GIMR data? 🖸 No
The GIMR resources will be placed in the GRID ASM disk group (requires 40 GB). For a test RAC you may not need this.
Create ASM Disk Group
🠊 If UDev is used, [Change Discovery Path] to: /dev/asm-disk*
Disk group name: GRID
Redundancy:
  🖸 External = if not needing ASM-based redundancy.
  🖸 Normal = if needing ASM-based redundancy.
Allocation Unit Size = 4MB
OCR and Voting disk data...
Select Disks
  ☑ /dev/asm-disk01 🠈 Select disk(s) to be used for the GI (+GRID).
  ☐ /dev/asm-disknn ... Other disks for +DATA and +FRA are added later.
Do Not Enable if Using UDev!
☐ Configure Oracle ASM Filter Driver
GIMR Data Disk Group (if a separate GIMR disk group was enabled, you will get this prompt)
Disk group name: MGMT
☐ /dev/sd_ 🠈 Select disk(s) to be used for the GIMR (+MGMT).
Requires 40 GB.
ASM Password
🖸 Use same passwords for these accounts.
SYS and ASMSNMP will use the same password in this example.
Failure Isolation
🖸 Do not use Intelligent Platform Management Interface (IPMI)
Management Options
☐ Register with Enterprise Manager (EM) Cloud Control
Operating System Groups
Oracle ASM Administrator: asmadmin
Oracle ASM DBA: asmdba
Oracle ASM Operator: asmoper
Installation Location
Oracle base: /u01/app/grid
Software location: /u01/app/19.3.0.0.0/grid (displayed)
Create Inventory
/u01/app/oraInventory
Root script execution
☐ Automatically run configuration scripts
Leaving this unchecked makes it easier to debug.
Prerequisites Check
Fix any issues, then return to this point.
If everything is acceptable, press Next.
Summary
Press Install
Process runs...
• If you run top on a remote node you should see the grid user running ractrans commands while remote operations are in progress.
• You will be prompted to run the root scripts on each node:
  /u01/app/oraInventory/orainstRoot.sh
  /u01/app/19.3.0.0.0/grid/root.sh
• For the second root script, look for an entry like the following at the end of the process to confirm it ran OK:
  CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
• Select Close when the GI installation process completes.
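For reference, the root scripts are normally run to completion on node 1 before being run on the remaining nodes (the installer prompt also states the required order). A minimal sketch, as root, using the paths from this guide:
# Node 1 first, each script run to completion:
/u01/app/oraInventory/orainstRoot.sh
/u01/app/19.3.0.0.0/grid/root.sh
# Then each remaining node in turn:
/u01/app/oraInventory/orainstRoot.sh
/u01/app/19.3.0.0.0/grid/root.sh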
- INS-20802: Oracle Cluster Verification Utility failed
If on VirtualBox this can be ignored as follows:
- Select: [OK]
- INS-43080 ... Continue - Select: [Yes]
- Select: [Close]
Log Locations
During Install
- /tmp/GridSetupActions<Date-Time>/sshsetup1_<Date-Time>.log
After Install
- /u01/app/grid/diag/crs/<node>/crs/trace/alert.log
- /u01/app/grid/crsdata/<node>/crsconfig/rootcrs_<node>_<date>.log
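A convenient way to watch the cluster alert log in real time, assuming <node> is the short hostname:
tail -f /u01/app/grid/diag/crs/$(hostname -s)/crs/trace/alert.log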
Create grid BASH User Profile (.bashrc)
Uncomment the ORACLE_SID entry, setting it to the ASM instance name on this system (+ASM1, +ASM2, ...).
umask 022

# Global Definitions
if [ -f /etc/bashrc ]; then
  . /etc/bashrc
fi

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi

#TBD export ORACLE_SID=+ASM1;
export ORACLE_BASE=/u01/app/grid;
export ORACLE_HOME=/u01/app/19.3.0.0.0/grid;
export ORACLE_TERM=xterm;
export PATH=$ORACLE_HOME/bin:/usr/bin:/bin:/usr/local/bin:.local/bin:$HOME/bin
export TEMP=/tmp
export TMPDIR=/tmp

# Aliases - Common
alias cl='clear;crontab -l'
alias l9='ls -alt | head -9'
alias l20='ls -alt | head -20'
alias l50='ls -alt | head -50'
alias tf='date;ls -l|wc -l'

# Aliases - Grid
alias asmlog='tail -f $ORACLE_BASE/log/diag/asmcmd/user_grid/$HOSTNAME/alert/alert.log'
alias clog='tail -f $ORACLE_BASE/diag/crs/$HOSTNAME/crs/trace/alert.log'
alias cdbase='cd $ORACLE_BASE'
alias cdhome='cd $ORACLE_HOME'
alias cdadmin='cd $ORACLE_BASE/diag/asm/$ORACLE_SID*/$ORACLE_SID*/trace'
alias cdnet='cd $ORACLE_HOME/network/admin'
alias sqp='rlwrap sqlplus / as sysasm'
alias src='source $HOME/.bashrc'
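After updating the profile, a quick check that it took effect (assumes the GI stack and the local ASM instance are already running on this node):
grid> src                              # re-source the profile (alias defined above)
grid> echo $ORACLE_SID $ORACLE_HOME
grid> asmcmd lsdg                      # should list the GRID disk group created during the install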
ASM Configuration Assistant
After the GI has been installed and the grid user's BASH profile updated, you can use the ASM Configuration Assistant (asmca) to manage your ASM environment. You'll want to create the remainder of your ASM disk groups, for instance DATA and FRA.
To launch, run: [grid] asmca &
Create Disk Groups
- Select: Disk Groups
- Select the Create button.
- Disk Group Name: DATA
- Redundancy: 🖸 External
- ☑ AFD:DISK02 🠈 ASMFD Example
- ☑ /dev/asm-disk02 🠈 UDev Example
- OK button.
- Select: Disk Groups (if not already done)
- Select the Create button.
- Disk Group Name: FRA
- Redundancy: 🖸 External
- ☑ AFD:DISK03 🠈 ASMFD Example
- ☑ /dev/asm-disk03 🠈 UDev Example
- OK button.
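If you prefer the command line to asmca, the same disk groups can be created from SQL*Plus connected as SYSASM. A sketch using the example disk names above (ASMFD shown; substitute the /dev/asm-disk* paths if you used UDev):
grid> sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'AFD:DISK02';
SQL> CREATE DISKGROUP FRA  EXTERNAL REDUNDANCY DISK 'AFD:DISK03';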
Cluster Verification Script Download
INS-06006 Passwordless SSH
Clean up the files in the .ssh directory for the grid user in the VNC session:
cd $HOME/.ssh
rm -rf *
GI Install Benchmarks
- Initial install: 10 minutes
- root script (2nd) - Node 1: 10 minutes
- root script (2nd) - Each additional node: 5 minutes
- Remainder of installation: 45 minutes
A two-node GI install takes about 1 hour on a modern system.
Cluster Services
Example output after a successful install:
grid> crsctl status res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       lnx01                    Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.crf
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.crsd
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.cssd
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.ctssd
      1        ONLINE  ONLINE       lnx01                    OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.gipcd
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.gpnpd
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.mdnsd
      1        ONLINE  ONLINE       lnx01                    STABLE
ora.storage
      1        ONLINE  ONLINE       lnx01                    STABLE
--------------------------------------------------------------------------------
grid> crsctl check cluster -all
**************************************************************
lnx01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
lnx02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
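A few additional post-install checks that are commonly run (all standard GI utilities):
grid> olsnodes -n          # node names and numbers
grid> srvctl status asm    # ASM status across the cluster
grid> crsctl stat res -t   # full (non -init) resource listing
grid> asmcmd lsdg          # disk groups and free space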
Common Errors
INS-20802 Oracle Cluster Verification Utility failed
If on VirtualBox this can be ignored as follows:
- [OK]
- [INS-43080] ... Continue [Yes]
- [Close]
INS-06006
Workaround if OpenSSH has been upgraded to 8.x (per Doc ID 2555697.1)
As the root user (change the path if the location of your scp differs from the below):
# Rename the original scp.
mv /usr/bin/scp /usr/bin/scp.orig

# Create a new file </usr/bin/scp>.
vi /usr/bin/scp

# Add the below line to the newly created file </usr/bin/scp>.
/usr/bin/scp.orig -T $*

# Change the file permission.
chmod 555 /usr/bin/scp
After installation:
mv /usr/bin/scp.orig /usr/bin/scp
- You may wish to wait until after you perform the database product install and database creation before moving it back.
- After the base install of both GI and database, applying the latest patches will fix this.
Otherwise (per Doc ID 2070270.1)
Option 1
1. Clean up the files in the .ssh directory for the grid user:
   cd $HOME/.ssh
   rm -rf *
2. In the OUI session, click SSH "Setup" again to set up SSH for the grid user.
3. Click "Next" in the OUI to complete the installation.
Option 2
Another workaround is to set the following entry in the <userhome>/.bashrc profile:
export SSH_AUTH_SOCK=0
Then re-login, ensuring SSH_AUTH_SOCK is set.
Manual Steps to Perform RU
This example is for a non-shared CRS and DB home environment not using ACFS.
Execute the following on each node of the cluster to apply the patch.
1. Stop the CRS managed resources running from DB homes. If this is a GI Home environment, as the database home owner execute:
   $ <ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file location> -n <node name>
2. Run the pre root script. If this is a GI Home, as the root user execute:
   # <GI_HOME>/crs/install/rootcrs.sh -prepatch
3. Patch the GI home. As the GI home owner execute:
   $ <GI_HOME>/OPatch/opatch apply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/%BUGNO%/%OCW TRACKING BUG%
   $ <GI_HOME>/OPatch/opatch apply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/%BUGNO%/%ACFS TRACKING BUG%
   $ <GI_HOME>/OPatch/opatch apply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/%BUGNO%/%DB WLM TRACKING BUG%
   $ <GI_HOME>/OPatch/opatch apply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/%BUGNO%/%DB RU TRACKING BUG%
   $ <GI_HOME>/OPatch/opatch apply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/%BUGNO%/%TOMCAT RU TRACKING BUG%
4. Patch the DB home. As the database home owner execute:
   $ <UNZIPPED_PATCH_LOCATION>/%BUGNO%/%OCW TRACKING BUG%/custom/scripts/prepatch.sh -dbhome <ORACLE_HOME>
   $ <ORACLE_HOME>/OPatch/opatch apply -oh <ORACLE_HOME> -local <UNZIPPED_PATCH_LOCATION>/%BUGNO%/%OCW TRACKING BUG%
   $ <ORACLE_HOME>/OPatch/opatch apply -oh <ORACLE_HOME> -local <UNZIPPED_PATCH_LOCATION>/%BUGNO%/%DB RU TRACKING BUG%
   $ <UNZIPPED_PATCH_LOCATION>/%BUGNO%/%OCW TRACKING BUG%/custom/scripts/postpatch.sh -dbhome <ORACLE_HOME>
5. Run the post scripts. As the root user execute:
   # <GI_HOME>/rdbms/install/rootadd_rdbms.sh
   If this is a GI Home, as the root user execute:
   # <GI_HOME>/crs/install/rootcrs.sh -postpatch
6. If the message "A system reboot is recommended before using ACFS" is shown, then a reboot must be issued before continuing. Failure to do so will result in running with an unpatched ACFS/ADVM/OKS driver.
7. Start the CRS managed resources that were earlier running from DB homes. If this is a GI Home environment, as the database home owner execute:
   $ <ORACLE_HOME>/bin/srvctl start home -o <ORACLE_HOME> -s <status file location> -n <node name>
8. For each database instance running on the Oracle home being patched, run the datapatch utility as shown below (Table 3: Steps to Run the datapatch Utility for Single Tenant Versus Multitenant (CDB/PDB)):
   sqlplus /nolog
   connect / as sysdba
   startup
   quit
   cd $ORACLE_HOME/OPatch
   ./datapatch -verbose
Source: Doc ID 2246888.1
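After datapatch completes, the applied patches can be confirmed from each home and from the database. These are generic checks, not part of the MOS note above:
$ $ORACLE_HOME/OPatch/opatch lspatches
SQL> SELECT patch_id, status, action_time FROM dba_registry_sqlpatch ORDER BY action_time;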