Patching Oracle RAC Databases (GI RU)

Overview

The following instructions cover applying routine patches to an Oracle RAC database environment. Always read the official patch documentation; this page is only a cheat sheet.

Important

The Conflict Checks, the opatchauto run, and the Post Patch Actions must be performed on each node, one node at a time (node1, then node2, and so on). The Conflict Checks can be run on all nodes ahead of time to save time.

Patch Types

From 12.2 onward there are two types of patches:

  • Grid Infrastructure Release Update (GI RU).
    For RAC and Oracle Restart environments via opatchauto.
  • Database Release Update (Database RU).
    For stand-alone single instance environments via opatch apply.

These instructions cover patching with the GI RU in an environment where the GI home and the database homes are not shared and the ACFS file system is not configured.

opatchauto Fun Facts!

  • opatchauto brings down the services, applies the patch, and restarts the services on the node.
  • opatchauto only patches the node it is run on.
    So you have to run it on every node.
  • opatchauto patches both the GRID_HOME and the ORACLE_HOME on the node.
    So you don't have to explicitly execute it against the GI home and then the DB home.
  • opatchauto updates the SQL in the database.
    So you do not need to run datapatch -verbose afterward.
  • opatchauto must be run from the GI Home as the root user.

The above is derived from various MOS cases, including 3-16179254071 and 3-16515328131.

Prerequisites

Ensure you have:

  • Ample disk space.
    df -h
  • The latest OPatch utility.
    $ORACLE_HOME/OPatch/opatch lsinventory
    OPatch version : 12.2.0.1.11
  • A valid Oracle inventory.
    $ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME
    OPatch succeeded.
  • Downloaded and unzipped patch.
    • As the grid user, download the patch and unzip it on each node.
    • The staging directory must be empty and must not be /tmp.
    • The directory must be readable by the oinstall group.
su -
mkdir -p /u01/orasw/patches
chmod -R 775 /u01/orasw/patches

Copy the zip file to /u01/orasw/patches, then:

chown -R grid:oinstall /u01/orasw/patches/p*.zip
chmod -R 775 /u01/orasw/patches/p*.zip

su - grid
cd /u01/orasw/patches
unzip p26737266_122010_Linux-x86-64.zip
chmod -R 755 /u01/orasw/patches/

Conflict Checks

Run OPatch conflict check for each one-off patch indicated in the patch docs.

CheckConflictAgainstOHWithDetail

As grid user:
export PATCH_BASE=/u01/orasw/patches/26737266
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $PATCH_BASE/26710464
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $PATCH_BASE/26925644
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $PATCH_BASE/26737232
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $PATCH_BASE/26839277
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $PATCH_BASE/26928563

As oracle user:
export PATCH_BASE=/u01/orasw/patches/26737266
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $PATCH_BASE/26710464
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $PATCH_BASE/26925644

Look for

 Prereq "checkConflictAgainstOHWithDetail" passed.
 OPatch succeeded.
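
Rather than pasting the command once per one-off patch, the checks can be run in a loop. A minimal sketch as the grid user, using the one-off patch numbers shown above (adjust the list to match your patch README):

 export PATCH_BASE=/u01/orasw/patches/26737266
 # Run the conflict check against the current home for each one-off patch.
 for p in 26710464 26925644 26737232 26839277 26928563; do
   $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $PATCH_BASE/$p
 done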

CheckSystemSpace

As grid user:
Create a text file (e.g., /tmp/patchlist_grid.txt) listing each one-off patch for the grid home, as indicated in the patch docs.

/u01/orasw/patches/26737266/26928563
/u01/orasw/patches/26737266/26839277
/u01/orasw/patches/26737266/26737232
/u01/orasw/patches/26737266/26925644
/u01/orasw/patches/26737266/26710464

No spaces!

Run Check
grid> $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patchlist_grid.txt

As oracle user:
Create a text file (e.g., /tmp/patchlist_oracle.txt) listing each one-off patch for the database home, as indicated in the patch docs.

/u01/orasw/patches/26737266/26925644
/u01/orasw/patches/26737266/26710464

No spaces!

Run Check
oracle> $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patchlist_oracle.txt
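
The list files can be created with a here-document so no stray spaces slip in. A sketch for the oracle list (repeat with the grid paths for /tmp/patchlist_grid.txt):

cat > /tmp/patchlist_oracle.txt <<'EOF'
/u01/orasw/patches/26737266/26925644
/u01/orasw/patches/26737266/26710464
EOF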

One-off Patch Conflict Detection and Resolution

  1. See Knowledge Document 1091294.1, How to use the My Oracle Support Conflict Checker Tool.
  2. Run My Oracle Support Conflict Checker tool.
    Example Session
 Select Platform: Linux x86-64
 Attach File:     lsinventory.txt
 Patch:           Patch#ToBeApplied
 Select:          Analyze for Conflict
 Process runs...

 Select: Download All

For Attach File, create and upload this:

 grid> $ORACLE_HOME/OPatch/opatch lsinventory > /tmp/lsinventory.txt

opatchauto

Perform the following as the root user.

 export PATH=$PATH:/u01/app/12.2.0.1/grid/OPatch
 /u01/app/12.2.0.1/grid/OPatch/opatchauto apply /u01/orasw/patches/26737266

This phase may take 20-30 minutes per node.
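
If you want to validate the patch against both homes first without changing anything, opatchauto also supports an analyze-only run. A sketch, run as root before the actual apply:

 export PATH=$PATH:/u01/app/12.2.0.1/grid/OPatch
 # Analyze only: reports what would be patched and any conflicts; nothing is applied.
 /u01/app/12.2.0.1/grid/OPatch/opatchauto apply /u01/orasw/patches/26737266 -analyze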

Session Log

You can tail the session log as it is running:

  • Location/Format
    $ORACLE_HOME/cfgtoollogs/opatchauto/opatchauto<date-time>.log
  • Example
    tail -f opatchauto_2018-01-09_14-06-29_binary.log

QC

If opatchauto is successful it will output something similar to this:

 OPatchauto session completed at Wed Nov  8 07:31:36 2017
 Time taken to complete the session 22 minutes, 10 seconds

Check the following log files for errors:

 $ORACLE_BASE/cfgtoollogs/sqlpatch/26737266/<unique patch ID>
 Ex log: 26737266_apply_<database SID>_<CDB name>_<timestamp>.log
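
A quick way to scan those logs is to grep for error patterns. A sketch (the glob stands in for the unique patch ID directory; review any hits, since strings like "no errors" are benign):

 grep -inE "ORA-|SP2-|error" $ORACLE_BASE/cfgtoollogs/sqlpatch/26737266/*/26737266_apply_*.log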

Validate Applied Patches

 su - oracle
 $ORACLE_HOME/OPatch/opatch lsinventory > /tmp/patches_oracle.out
 su - grid
 $ORACLE_HOME/OPatch/opatch lsinventory > /tmp/patches_grid.out

The output file will show all the applied patches.
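
To confirm the applied patches quickly, grep the output files for the one-off patch numbers from this example RU (the oracle home shows the DB RU one-offs; the grid home shows all five):

 grep -E "26710464|26925644|26737232|26839277|26928563" /tmp/patches_grid.out /tmp/patches_oracle.out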

Post Patch Actions

Ensure All Instances Are Up

As the oracle user:

  • srvctl status database -d MyDBName
    srvctl status database -d oradb
  • Perform as needed: srvctl start instance -d MyDBName -i MyInstName
    srvctl start instance -d oradb -i oradb2

Perform Standard RAC Status Checks

Perform your standard RAC status checks, as shown in the sketch below.
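
A minimal set of checks, using the same commands that appear in the QC Cluster steps later on this page:

 # As the grid user (or root with the GI environment set):
 crsctl check cluster -all
 crsctl status res -t -init
 # As the oracle user:
 srvctl status database -d oradb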

Rollback

To roll back the patch from the GI home and each Oracle RAC database home:
Format: opatchauto rollback <UNZIPPED_PATCH_LOCATION>/26737266

As the root user:

 export PATH=$PATH:/u01/app/12.2.0.1/grid/OPatch
 opatchauto rollback /u01/orasw/patches/26737266

To roll back the patch from the GI home:

 # opatchauto rollback <UNZIPPED_PATCH_LOCATION>/ -oh <path to GI home>  

To roll back the patch from the Oracle RAC database home:

 # opatchauto rollback <UNZIPPED_PATCH_LOCATION>/ -oh <oracle_home1_path>,<oracle_home2_path>

Misc Rollback Scenarios

If Clusterware is not running or not configured, you have two options:

  1. Configure and start the Clusterware on this node and re-run the tool
  2. Run the tool with '-oh <GI_HOME>' to first roll back the Grid home, then invoke the tool with '-database <oracle database name>' or '-oh <RAC_HOME>' to roll back the RAC home.

Option 2 Example

 opatchauto rollback /u01/orasw/patches/26737266 -oh /u01/app/12.2.0.1/grid

If the above is not possible, you can try executing opatchauto in non-rolling mode.

 /u01/app/12.2.0.1/grid/OPatch/opatchauto rollback \
   /u01/orasw/patches/26737266 \
   -oh /u01/app/12.2.0.1/grid -nonrolling

Rollback Interim Patches

su - oracle
$ORACLE_HOME/OPatch/opatch rollback -id 26710464

...
Patching component oracle.precomp.lang, 12.2.0.1.0...
RollbackSession removing interim patch '26710464' from inventory
Log file location: /u01/app/oracle/.../cfgtoollogs/opatch/opatch2017-11-15_13-19-52PM_1.log

OPatch succeeded.

----

$ORACLE_HOME/OPatch/opatch nrollback -local -id 26925644

...
Patching component oracle.has.deconfig, 12.2.0.1.0...
RollbackSession removing interim patch '26925644' from inventory
Log file location: /u01/app/oracle/.../cfgtoollogs/opatch/opatch2017-11-15_13-21-35PM_1.log

OPatch succeeded.

----

$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME 

...
There are no Interim patches installed in this Oracle Home.

OPatch succeeded.

Patch Related Log File Location

/u01/app/12.2.0.1/grid/cfgtoollogs
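
To find the most recent opatchauto logs quickly, a sketch:

 ls -ltr /u01/app/12.2.0.1/grid/cfgtoollogs/opatchauto/ | tail -5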


Common Errors

Error

 OPATCHAUTO-68021: Missing required argument(s).
 OPATCHAUTO-68021: The following argument(s) are required: [-wallet]
 OPATCHAUTO-68021: Provide the required argument(s).

Solution

Upgrade to the latest version of OPatch (12.2.0.1.9 or higher), which no longer requires the wallet as a mandatory parameter for opatchauto.
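
Updating OPatch amounts to unzipping patch 6880880 into the home. A sketch as the home owner (the zip file name is an example; use the version you downloaded, and repeat for both the grid and oracle homes):

 cd $ORACLE_HOME
 # Keep a copy of the existing OPatch, then unzip the new one into the home.
 mv OPatch OPatch.$(date +%Y%m%d)
 unzip -q /u01/orasw/patches/p6880880_122010_Linux-x86-64.zip
 $ORACLE_HOME/OPatch/opatch version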

Error

 2017/11/16 10:39:47 CLSRSC-117: Failed to start Oracle Clusterware stack 
 After fixing the cause of failure Run opatchauto resume 

 OPATCHAUTO-68061: The orchestration engine failed. 
 OPATCHAUTO-68061: The orchestration engine failed with return code 1 
 OPATCHAUTO-68061: Check the log for more details. 
 OPatchAuto failed. 

 OPatchauto session completed at Thu Nov 16 10:39:49 2017 
 Time taken to complete the session 26 minutes, 26 seconds 

 opatchauto failed with error code 42

Solution

If opatchauto displays this error but all patch logs show the patches applied successfully, perform the following steps:

 1. Shut down the clusterware cleanly if it is running on the failed node:
    # crsctl stop crs -f

    If the above fails because postpatch only partially ran and ohasd cannot be brought down:
    # crsctl disable crs

    Ensure all clusterware-related processes are down and nothing is running from the CRS home:

      ps -ef|grep ohasd
      # kill -9 <ohasd.bin pid>

      Enable clusterware again:
      # crsctl enable crs

 2. Unlock the CRS home 
    # rootcrs.sh -unlock

    find /u01/app/12.2.0.1/grid -iname "rootcrs.sh"
    /u01/app/12.2.0.1/grid/crs/install/rootcrs.sh

    cd /u01/app/12.2.0.1/grid/crs/install
    ./rootcrs.sh -unlock

    ...
    2017/11/16 12:26:54 CLSRSC-347: Successfully unlock /u01/app/12.2.0.1/grid

 3. Issue -postpatch to instantiate and lock the home
    # rootcrs.sh -postpatch

    cd /u01/app/12.2.0.1/grid/crs/install
    ./rootcrs.sh -postpatch
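
After -postpatch completes, verify the stack is back to normal (the same checks used elsewhere on this page):

 crsctl check crs
 crsctl status res -t -init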

Manually Applying Interim Patch in Patchset

  • The following is an example using Patchset 26737266 and interim patch 26928563.
  • From the patch docs, confirm which interim patch(es) need to be applied and to which home/user.

Change to root user and set environment.

 su -
 export PATH=$PATH:/u01/app/12.2.0.1/grid/OPatch
 export GRID_HOME=/u01/app/12.2.0.1/grid
 export PATH=$GRID_HOME/bin:$PATH

For each patch in the patchset, from the appropriate home/user:

  cd /u01/orasw/patches/26737266/26928563
  $GRID_HOME/OPatch/opatchauto apply /u01/orasw/patches/26737266/26928563 -oh $GRID_HOME
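
If the patch README indicates an interim for the database home instead, the same pattern applies with -oh pointing at that home. A sketch; the database home path below is a placeholder, substitute your own:

 # Placeholder DB home path - adjust to your environment.
 export DB_HOME=/u01/app/oracle/product/12.2.0.1/db_1
 $GRID_HOME/OPatch/opatchauto apply /u01/orasw/patches/26737266/26710464 -oh $DB_HOME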

Comprehensive Interim Patch Rollback Session

 Determine Which User Homes have Interims to Rollback
 Run for both the grid and oracle users.
 $ORACLE_HOME/OPatch/opatch lsinventory

 Roll back the patches in the order listed in the patch docs under: "Run OPatch Conflict Check"
 For the first one use: rollback -id <InterimPatchNumber>
 For the remainder use: nrollback -local -id <InterimPatchNumber>

 Do this for the oracle user, then the grid user.
 su - oracle
 $ORACLE_HOME/OPatch/opatch rollback -id 26710464
 $ORACLE_HOME/OPatch/opatch nrollback -local -id 26925644

 Shut down the cluster.

 su - grid   
 $ORACLE_HOME/OPatch/opatch rollback -id 26928563
 $ORACLE_HOME/OPatch/opatch nrollback -local -id 26839277
 $ORACLE_HOME/OPatch/opatch nrollback -local -id 26737232
 $ORACLE_HOME/OPatch/opatch nrollback -local -id 26925644
 $ORACLE_HOME/OPatch/opatch nrollback -local -id 26710464

If you get errors about files that cannot be copied or moved, unlock the GI home (see Unlock the GI Home below) and then retry.

 QC If All Rolled Back
 Run for both the grid and oracle users.
 $ORACLE_HOME/OPatch/opatch lsinventory
 There are no Interim patches installed in this Oracle Home.

QC Cluster
If you had to unlock the GI home, make sure to run rootcrs.sh -postpatch to restore it to normal status.

 crsctl check cluster -all
 crsctl status res -t -init

If Cluster Not Up Restart Cleanly

 1. Shut down the clusterware cleanly if it is running on the failed node:
    # crsctl stop crs -f

    If the above fails because postpatch only partially ran and ohasd cannot be brought down:
    # crsctl disable crs

    Ensure all clusterware-related processes are down and nothing is running from the CRS home:
    ps -ef|grep ohasd
    # kill -9 <ohasd.bin pid>

    Enable clusterware again:
    # crsctl enable crs

 2. Unlock the CRS home 
    # rootcrs.sh -unlock 

    find /u01/app/12.2.0.1/grid -iname "rootcrs.sh"
    /u01/app/12.2.0.1/grid/crs/install/rootcrs.sh

    cd /u01/app/12.2.0.1/grid/crs/install
    ./rootcrs.sh -unlock 

    ...
    2017/11/16 12:26:54 CLSRSC-347: Successfully unlock /u01/app/12.2.0.1/grid

 3. Issue -postpatch to instantiate and lock the home 
    ./rootcrs.sh -postpatch

QC Cluster

 crsctl check cluster -all
 crsctl status res -t -init

SQL to Check Patches

set serverout on
set long 200000000
set pages 2000
select xmltransform(dbms_qopatch.get_opatch_bugs, dbms_qopatch.get_opatch_xslt) from dual;
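
Another quick check is the SQL patch registry, which records every datapatch action. A sketch run as the oracle user:

sqlplus -s / as sysdba <<'EOF'
set lines 200 pages 200
select patch_id, action, status,
       to_char(action_time, 'YYYY-MM-DD HH24:MI:SS') as applied
from   dba_registry_sqlpatch
order  by action_time;
EOF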

Unlock the GI Home

As the root user:

 export GRID_HOME=/u01/app/12.2.0.1/grid
 export PATH=$GRID_HOME/bin:$PATH
 cd /u01/app/12.2.0.1/grid/crs/install
 ./rootcrs.sh -unlock
 ...
 2017/12/13 12:07:00 CLSRSC-347: Successfully unlock /u01/app/12.2.0.1/grid

Then, after all rollback operations, issue this:

  cd /u01/app/12.2.0.1/grid/crs/install
  ./rootcrs.sh -postpatch

Then check status of cluster.

Stop Cluster

As root user:

 export GRID_HOME=/u01/app/12.2.0.1/grid
 export PATH=$GRID_HOME/bin:$PATH
 cd $GRID_HOME/bin
 crsctl stop cluster -all
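
And the corresponding start, when you are ready to bring everything back:

 crsctl start cluster -all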

APPENDIX

Patch Apply Conflicts Error Example.

Following patches have conflicts. Please contact Oracle Support and get th ...

Patch: /u01/orasw/patches/26737266/26928563
Log: /u01/app/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-04_13-54-29PM_1.log
Reason: Failed during Analysis: CheckNApplyReport Failed, [ Prerequisite Status: FAILED, Prerequisite output:
The details are:
Inter-conflict checking failed in apply incoming patches]
Failed during Analysis: CheckConflictAgainstOracleHome Failed, [ Prerequisite Status: FAILED, Prerequisite output:
Summary of Conflict Analysis:

Patches that can be applied now without any conflicts are :
26710464, 26737232, 26839277, 26928563

Following patches have conflicts. Please contact Oracle Support and get th ...

Patch: /u01/orasw/patches/26737266/26710464
Log: /u01/app/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-04_13-54-29PM_1.log
Reason: Failed during Analysis: CheckNApplyReport Failed, [ Prerequisite Status: FAILED, Prerequisite output:
The details are:
Inter-conflict checking failed in apply incoming patches]
Failed during Analysis: CheckConflictAgainstOracleHome Failed, [ Prerequisite Status: FAILED, Prerequisite output:
Summary of Conflict Analysis:

Patches that can be applied now without any conflicts are :
26710464, 26737232, 26839277, 26928563

Following patches have conflicts. Please contact Oracle Support and get th ...

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Thu Jan  4 13:56:53 2018
Time taken to complete the session 3 minutes, 29 seconds

 opatchauto failed with error code 42

datapatch

Load Modified SQL Files into the Database via datapatch.
From node 1, run the following as the oracle user:

 sqlplus /nolog
 Connect / as sysdba
 startup 
 quit

 cd $ORACLE_HOME/OPatch
 ./datapatch -verbose

Example, to monitor the above as it runs:

 tail -f /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_7794_2017_11_08_07_40_48/sqlpatch_invocation.log

If datapatch is successful it will output something similar to this:

 Validating logfiles...
 Patch 26737266 apply: SUCCESS
 logfile: /u01/app/oracle/.../26737266_apply_ORADB_2017Nov08_07_41_04.log (no errors)
 SQL Patching tool complete on Wed Nov  8 07:42:28 2017

OJVM RU

If the OJVM RU is also installed, you may see invalid objects after execution of datapatch in the previous step. If this is the case, run utlrp.sql to revalidate these objects.

 cd $ORACLE_HOME/rdbms/admin
 sqlplus /nolog
 SQL> CONNECT / AS SYSDBA
 SQL> @utlrp.sql
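
To see whether any invalid objects remain after utlrp.sql, a quick count (a sketch, run as sysdba):

sqlplus -s / as sysdba <<'EOF'
select count(*) as invalid_objects from dba_objects where status = 'INVALID';
EOF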