Oracle 18c GI Cluster Installation
Overview
What follows is the most common way to install Oracle 18c Grid Infrastructure (GI) in a Linux enterprise environment. These instructions assume ASMFD (Oracle ASM Filter Driver) is used to configure and manage your disks. Change the values in the examples to match your environment.
Prerequisites
- Linux OS prerequisites have been performed.
- Linux disk partitions have been created (fdisk).
- Disks have been configured using ASMFD (a labeling sketch follows this list).
- If ASMFD is not used, UDev has been configured to manage the disks.
- The installation files have been downloaded to your local disk.
- If the GI is for RAC, run the Cluster Verification Utility.
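For the ASMFD bullet, disks are typically labeled before the installation starts. A minimal sketch as root, assuming /dev/sdb1 is a candidate partition and DISK01 is an example label (the GI software must already be extracted, as shown in the next section):
export ORACLE_HOME=/u01/app/18.3.0.0.0/grid
export ORACLE_BASE=/tmp                                      # temporary value; asmcmd needs one before GI is configured
$ORACLE_HOME/bin/asmcmd afd_label DISK01 /dev/sdb1 --init    # write the AFD label
$ORACLE_HOME/bin/asmcmd afd_lslbl /dev/sdb1                  # verify the label is readable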
Directory and File Prep
As the root user on Node1:
su - grid
cp LINUX.X64_180000_grid_home.zip /u01/app/18.3.0.0.0/grid/
cd /u01/app/18.3.0.0.0/grid
unzip LINUX.X64_180000_grid_home.zip
This extracts the software into the directory that will become your GI_HOME. The GI installation process copies it to the other nodes as required.
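If the GI_HOME path did not already exist, create it as root before staging the software; a minimal sketch, assuming the conventional grid:oinstall ownership:
mkdir -p /u01/app/18.3.0.0.0/grid     # future GI_HOME
chown -R grid:oinstall /u01/app       # grid user must own the tree
chmod -R 775 /u01/app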
GI Installation
- Ensure the other initial nodes are up and reachable (ping, nslookup, etc.); see the connectivity sketch after this list.
- Run the Cluster Verification Utility:
grid> /u01/app/18.3.0.0.0/grid/bin/cluvfy stage -pre crsinst -fixup -n lnx01,lnx02
- Expect the installation to take about 1.5 hours for a two-node RAC.
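For the connectivity check in the first bullet, a few typical probes (the hostnames follow the examples used later in this guide):
ping -c 3 lnx02.local        # other node's public interface must respond
nslookup lnx02-vip.local     # VIP names must resolve, even though the VIPs are not up yet
nslookup rac-scan            # SCAN name should resolve to its address(es)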
Procedure
As the grid user from node1:
export ORACLE_HOME=/u01/app/18.3.0.0.0/grid
cd $ORACLE_HOME
./gridSetup.sh
Configuration Option
🖸 Configure Oracle Grid Infrastructure for a New Cluster
Cluster Configuration
🖸 Configure an Oracle Standalone Cluster
Product Languages
Press [Next].
Grid Plug and Play
Cluster Name: rac-cluster
SCAN Name: rac-scan
SCAN Port: 1521
☐ Configure GNS
Cluster Node Information
You should see the current system listed. Example: rac01 | rac01-vip
🖸 Add a single node
Public Hostname: lnx02.local
Node Role: HUB
Virtual Hostname: lnx02-vip.local
OS Username: grid
OS PW: ********
...
Message displayed: "Successfully established passwordless SSH connectivity between the selected nodes."
...
Shows OK (already established)
...
The verification process runs for a moment.
Network Interface Usage
192.168.56.0 = Public (pubnet)
192.168.10.0 = ASM & Private (privnet)
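If you are unsure which interface sits on which subnet, a quick check from either node:
ip -brief addr show    # map each interface to 192.168.56.x (Public) or 192.168.10.x (ASM & Private)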
Storage Option
🖸 Configure ASM using block devices
Grid Infrastructure Management Repository (GIMR)
Create a separate ASM group for GIMR? 🖸 No
The GIMR resources will be placed in the GRID ASM disk group.
Create ASM Disk Group
Disk group name: GRID
Redundancy:
🖸 External = if you do not need ASM-based redundancy.
🖸 Normal = if you need ASM-based redundancy.
Allocation Unit Size: 4MB
OCR and Voting disk data are stored in this group.
Select Disks:
☑ /dev/sd_ <= Select the disk(s) to be used for the +GRID ASM group.
☑ Configure Oracle ASM Filter Driver
Other disks (for +DATA, +FRA, ...) are added later.
Do Not Enable if Using UDev!
☐ Configure Oracle ASM Filter Driver
GIMR Data Disk Group (requires 40 GB)
This prompt appears only if you chose to create a separate GIMR disk group earlier.
Disk group name: MGMT
☐ /dev/sd_ <= Select the disk(s) to be used for the GIMR (+MGMT).
ASM Password
🖸 Use same passwords for these accounts.
Failure Isolation
🖸 Do not use Intelligent Platform Management Interface (IPMI)
Management Options
☐ Register with Enterprise Manager (EM) Cloud Control
Operating System Groups
Oracle ASM Administrator: asmadmin
Oracle ASM DBA: asmdba
Oracle ASM Operator: asmoper
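The installer only offers groups that the grid user is already a member of; a quick sanity check:
id grid    # output should include oinstall, asmadmin, asmdba, and asmoper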
Installation Location
Oracle base: /u01/app/grid
Software location: /u01/app/18.3.0.0.0/grid (displayed)
Create Inventory
/u01/app/oraInventory
Root script execution
☐ Automatically run configuration scripts
Leaving this unchecked makes the install easier to debug; you run the root scripts manually when prompted.
Prerequisites Check
Fix any issues, then return to this point. If everything is acceptable, press [Next].
Summary
Press [Install].
Process runs...
• If you run top on a remote node you should see the grid user running ractrans commands while remote operations are in progress.
• You will be prompted to run the root scripts on each node:
/u01/app/oraInventory/orainstRoot.sh
/u01/app/18.3.0.0.0/grid/root.sh
• Look for an entry like the one below at the end of the process to ensure it ran OK:
CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
• Select [Close] when the GI installation process has completed.
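Once the installer reports success, a quick status check from either node confirms the clusterware is up (standard crsctl commands; adjust the GI_HOME path if yours differs):
/u01/app/18.3.0.0.0/grid/bin/crsctl check cluster -all    # CRS, CSS, and EVM should be online on all nodes
/u01/app/18.3.0.0.0/grid/bin/crsctl stat res -t           # tabular status of all cluster resources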
Log Locations
During Install
- /tmp/GridSetupActions<Date-Time>/sshsetup1_<Date-Time>.log
After Install
- /u01/app/grid/diag/crs/<node>/crs/trace/alert.log
- /u01/app/grid/crsdata/<node>/crsconfig/rootcrs_<node>_<date>.log
Create grid BASH User Profile (.bashrc)
Uncomment the ORACLE_SID entry, setting it to the ASM instance name on this system (+ASM1, +ASM2, ...).
umask 022

# Get the aliases and functions (if .bash_profile)
#if [ -f ~/.bashrc ]; then
#  . ~/.bashrc
#fi

# Global Definitions
if [ -f /etc/bashrc ]; then
  . /etc/bashrc
fi

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi

#TBD ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/18.3.0.0.0/grid; export ORACLE_HOME
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=$ORACLE_HOME/bin:/usr/bin:/bin:/usr/local/bin:$HOME/.local/bin:$HOME/bin
export PATH
export TEMP=/tmp
export TMPDIR=/tmp

# Aliases - Common
alias cl='clear;crontab -l'
alias l9='ls -alt | head -9'
alias l20='ls -alt | head -20'
alias l50='ls -alt | head -50'
alias tf='date;ls -l|wc -l'

# Grid
alias asmlog='tail -f $ORACLE_BASE/log/diag/asmcmd/user_grid/$HOSTNAME/alert/alert.log'
alias clog='tail -f $ORACLE_BASE/diag/crs/$HOSTNAME/crs/trace/alert.log'
alias cdbase='cd $ORACLE_BASE'
alias cdhome='cd $ORACLE_HOME'
alias cdadmin='cd $ORACLE_BASE/diag/asm/$ORACLE_SID*/$ORACLE_SID*/trace'
alias cdnet='cd $ORACLE_HOME/network/admin'
alias sqp='rlwrap sqlplus / as sysasm'
alias src='source $HOME/.bashrc'
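After saving the profile, source it and confirm the environment resolves to a running ASM instance (a minimal check; the values shown assume node1):
source ~/.bashrc
echo $ORACLE_SID    # +ASM1 on node1, +ASM2 on node2, ...
asmcmd lsdg         # should list the GRID disk group once ASM is up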
ASM Configuration Assistant
After the GI has been installed and the grid user's BASH profile updated, you can use the ASM Configuration Assistant (asmca) to manage your ASM environment. For instance, you will want to create the remainder of your ASM disk groups.
To launch run: [grid] asmca &
Create Disk Groups
- Select Disk Groups
- Select the [Create] button.
- Disk Group Name: DATA
- Redundancy: 🖸 External
- ☑ AFD:DISK02 <= example
- Select the [OK] button.
- Select Disk Groups
- Select the [Create] button.
- Disk Group Name: FRA
- Redundancy: 🖸 External
- ☑ AFD:DISK03 <= example
- Select the [OK] button.
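The same disk groups can also be created without the GUI. A sketch using asmca silent mode, assuming the AFD disk labels shown above:
[grid] asmca -silent -createDiskGroup -diskGroupName DATA -disk 'AFD:DISK02' -redundancy EXTERNAL
[grid] asmca -silent -createDiskGroup -diskGroupName FRA -disk 'AFD:DISK03' -redundancy EXTERNAL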
INS-06006 Passwordless SSH
Clean up the files in the .ssh directory for the grid user (in your VNC session):
cd $HOME/.ssh
rm -rf *
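After clearing the stale keys on each node, retry the installer's SSH setup. Once it succeeds you can confirm connectivity manually (hostnames as in the earlier examples):
ssh lnx02.local date    # must return the remote date without a password prompt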