Oracle 12c (12.1.0.2) GI Cluster Installation: Linux
Overview
What follows is a common way to install Oracle 12c Grid Infrastructure (GI) in Linux enterprise environments. Adjust as required for your environment.
Prerequisites
- You have performed the LINUX OS Prerequisites.
- Confirm the shared disks are available and have the correct privileges. The udev links should be owned by root and the block devices by grid:asmadmin.
- ls -al /dev/asm*
- ls -rlt /dev/sd?1
- Download the installation files to a local staging area (ex: /u01/orasw/grid).
- Identify which interface (ex: eth0) is public and which is private.
- If the GI is for RAC, run the Cluster Verification Utility, as shown in the example below.
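For example, a pre-installation check from the staged media directory might look like this (node names are examples; adjust the path to wherever the media was unzipped):
cd /u01/orasw/grid
./runcluvfy.sh stage -pre crsinst -n lnx01,lnx02 -verbose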
Make sure the Oracle inventory directory is created on all nodes as follows:
- mkdir /u01/app/oraInventory
- chown grid:oinstall /u01/app/oraInventory
- chmod -R 775 /u01/app/oraInventory
On all remote nodes that were cloned from node 1, remove any existing Oracle directories:
su -
cd /u01/app
rm -rf /u01/app/12.1.0.2
rm -rf /u01/app/grid
rm -rf /u01/app/oracle
This is necessary because the GI installation process copies the required files to each node; the target directories must be pristine for this to work correctly.
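As a sketch, the same cleanup can be run from node 1 over SSH (node names are examples; adjust for your cluster):
# run as root on node 1
for node in lnx02 lnx03; do
  ssh root@${node} "rm -rf /u01/app/12.1.0.2 /u01/app/grid /u01/app/oracle"
done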
GI Installation: For Standard Cluster
- Log into node 1 as the grid user.
- Ensure the other initial nodes are up and that the nodes can ping and nslookup each other.
- Monitor the progress of the root scripts by watching the CRS alert log. Example:
tail -f /u01/app/grid/diag/crs/rac02/crs/trace/alert.log
Procedure
- cd /u01/orasw/grid
- ./runInstaller
Installation Option
🖸 Install and Configure Oracle GI for a Cluster
Cluster Type
🖸 Configure a Standard cluster
Installation Type
🖸 Advanced Installation
Product Languages
[Next]
Grid Plug and Play
Cluster Name: cluster-alfa
SCAN Name: scan-alfa
SCAN Port: 1521
☐ Configure GNS
See Database Names.
Cluster Node Information
Should see current system. Example: lnx01 | lnx01-vip
[Add]
Public Hostname: lnx02.local
Virtual Hostname: lnx02-vip.local
[SSH connectivity]
OS Username: grid
OS PW: ********
[Setup]
... Msg displayed: Successfully established passwordless SSH connectivity between the selected nodes.
[Next]
... Verification process runs for a moment.
Network Interface Usage
Ensure each interface name (ex: eth0) correctly matches the network you assign to it.
192.168.56.0 = Public (pubnet)
192.168.10.0 = Private (privnet)
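If unsure, you can confirm which interface carries which subnet; the interface names below are assumptions based on the eth0 example:
ip -4 addr show eth0    # expect a 192.168.56.x (public) address
ip -4 addr show eth1    # expect a 192.168.10.x (private) address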
Storage Option
🖸 Use standard ASM for storage
Create ASM Disk Group
If Udev used [Change Discovery Path] to: /dev/asm-disk*
Disk group name: GRID
Select Redundancy & Disks for Group. Generally use:
🖸 External = If not needing ASM based redundancy.
🖸 Normal = If needing ASM based redundancy. If you don't trust your SAN use this option.
Allocation Unit Size = 1 MB
Add Disks
☑ /dev/asm-disk01 🠈 Select disk(s) to be used for the GI (+GRID).
☐ /dev/asm-disknn ... Other disks for +DATA and +FRA added later.
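For reference, a udev rule of the following general form is what typically creates the /dev/asm-disk* links with the expected ownership. This is a sketch only: the file name is an example and the RESULT value is a placeholder for the disk's actual scsi_id output.
# /etc/udev/rules.d/99-oracle-asmdevices.rules (example; one entry per disk)
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id-output>", SYMLINK+="asm-disk01", OWNER="grid", GROUP="asmadmin", MODE="0660"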
ASM Password
🖸 Use same passwords for these accounts.
Failure Isolation
🖸 Do not use Intelligent Platform Management Interface (IPMI)
Management Options
☐ Register with Enterprise Manager (EM) Cloud Control
Operating System Groups
Oracle ASM Administrator: asmadmin
Oracle ASM DBA: asmdba
Oracle ASM Operator: asmoper
Installation Location
Oracle base: /u01/app/grid
Software location: /u01/app/12.1.0.2/grid
Create Inventory
/u01/app/oraInventory
Root script execution
☐ Automatically run configuration scripts
Disabling this makes it easier to debug.
Prerequisites Check
Review the Ignorable Warnings section below. Fix any other issues, then return to this point. If everything is acceptable, [Next].
Summary
[Install]
Process runs...
• If you run top on a remote node you should see the grid user running ractrans commands while remote operations are in progress.
• You will be prompted to run the root scripts:
/u01/app/oraInventory/orainstRoot.sh
/u01/app/12.1.0.2/grid/root.sh
Look for an entry like the one below at the end of the process to ensure it ran OK:
CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Log example to monitor the progress of the root script (takes a minute to start):
tail -f /u01/app/grid/diag/crs/rac01/crs/trace/alert.log
Select [Close] when the GI installation process has completed.
QC
Check log from GI install for any errors. Example:
/u01/app/oraInventory/logs/AttachHome2018-06-22_07-21-37AM.log.lnx02
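As an additional check, the standard clusterware commands below (run as grid on node 1) should report all nodes and resources online:
/u01/app/12.1.0.2/grid/bin/crsctl check cluster -all
/u01/app/12.1.0.2/grid/bin/crsctl stat res -t
/u01/app/12.1.0.2/grid/bin/olsnodes -n -s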
Create grid BASH User Profile (.bashrc)
Uncomment the ORACLE_SID entry, setting it to the ASM instance name on this system (+ASM1, +ASM2, ...).
umask 022

# Global Definitions
if [ -f /etc/bashrc ]; then
  . /etc/bashrc
fi

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi

#TBD export ORACLE_SID=+ASM1;

export ORACLE_BASE=/u01/app/grid;
export ORACLE_HOME=/u01/app/12.1.0.2/grid;
export ORACLE_TERM=xterm;
export PATH=$ORACLE_HOME/bin:/usr/bin:/bin:/usr/local/bin:.local/bin:$HOME/bin
export TEMP=/tmp
export TMPDIR=/tmp

# Aliases - Common
alias cl='crontab -l'
alias l9='ls -alt | head -9'
alias l20='ls -alt | head -20'
alias l50='ls -alt | head -50'
alias tf='date;ls -l|wc -l'
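After saving the profile, re-source it and confirm the environment is set, for example:
. ~/.bashrc
echo $ORACLE_SID $ORACLE_HOME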
ASM Configuration Assistant
After the GI has been installed you can use the ASM Configuration Assistant (ASMCA) to manage your ASM environment, for instance to create the remaining ASM disk groups.
Create Disk Groups
If Udev used [Change Discovery Path] to: /dev/asm-disk*
Launch ASMCA
su - grid
OS> asmca &
Create DATA Group
Select [Create]
Disk Group Name: DATA
Redundancy: 🖸 External
☑ /dev/asm-disk02 🠈 Select corresponding disk(s) for group.
[OK]
Create FRA Group
Select [Create]
Disk Group Name: FRA
Redundancy: 🖸 External
☑ /dev/asm-disk03 🠈 Select corresponding disk(s) for group.
[OK]
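To confirm the new disk groups are mounted, asmcmd can be used from the grid account, for example:
su - grid
asmcmd lsdg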
INS-06006 Passwordless SSH
Clean up the files in the .ssh directory for the grid user (in the VNC session):
cd $HOME/.ssh
rm -rf *
In the OUI session, click the SSH [Setup] button again to set up SSH for the grid user from scratch, then click [Test]. It now completes with the message "SSH for user grid has already been setup".
Click [Next] in the OUI to complete the installation.
Example /etc/hosts
#public:
192.168.56.71 lnx01.localdomain lnx01
192.168.56.72 lnx02.localdomain lnx02
#192.168.56.73 lnx03.localdomain lnx03
#private:
192.168.10.1 lnx01-priv.localdomain lnx01-priv
192.168.10.2 lnx02-priv.localdomain lnx02-priv
#192.168.10.3 lnx03-priv.localdomain lnx03-priv
#virtual:
192.168.56.81 lnx01-vip.localdomain lnx01-vip
192.168.56.82 lnx02-vip.localdomain lnx02-vip
#192.168.56.83 lnx03-vip.localdomain lnx03-vip
#Scan:
192.168.56.91 rac-scan.localdomain rac-scan
192.168.56.92 rac-scan.localdomain rac-scan
192.168.56.93 rac-scan.localdomain rac-scan
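A quick sanity check that the names resolve as expected on each node (hostnames here match the example file):
getent hosts rac-scan    # returns the SCAN entry via /etc/hosts or DNS
ping -c 1 lnx02-priv     # confirm the private names resolve and respond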
Ignorable Warnings
Task resolv.conf Integrity
/etc/resolv.conf may not be able to resolve some VirtualBox IPs.
/dev/shm mounted as temporary file system
PRVE-0421 : No entry exists in /etc/fstab for mounting /dev/shm
False positive when using Linux 7.x, where /dev/shm is mounted by systemd rather than via /etc/fstab.
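You can confirm /dev/shm is in fact mounted with:
df -h /dev/shm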
PRVF-9802 : Attempt to get 'udev' information from node "lnx02" failed
No UDEV rule found for device(s) specified.
Manual udev checks on the other node should show the rules file present and the disks configured correctly, as in the example below.
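A manual check from node 1 might look like this (the rules file name is an example; use whatever file exists under /etc/udev/rules.d on your nodes):
ssh lnx02 "ls -l /etc/udev/rules.d/99-oracle-asmdevices.rules /dev/asm-disk*"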