RAC Administration

To re-invoke the Grid Infrastructure configuration wizard: $GRID_HOME/crs/config/config.sh

ASM

--Display

  • Space Usage
    • SELECT name, free_mb, total_mb, free_mb/total_mb*100 as percentage
      FROM v$asm_diskgroup;
    • grid> asmcmd -p lsdg
  • ps -ef|grep ASM
  • srvctl config asm
  • srvctl status asm
  • srvctl status asm -n [NodeName]
  • ASM Administration
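
A variation on the space-usage query above that flags only disk groups running low on space; the 20% threshold is an arbitrary example value, adjust to your environment:

SELECT name, free_mb, total_mb,
       ROUND(free_mb/total_mb*100,1) AS pct_free
FROM   v$asm_diskgroup
WHERE  free_mb/total_mb*100 < 20
ORDER  BY pct_free;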

Cluster

--Display

Name:                    cemutlo -n
OCR:                     $ORACLE_HOME/bin/ocrcheck
Services:                crsctl status resource
Services (On Node):      crsctl status res -t -init
Services (deprecated):   crs_stat -t
Status - All Nodes:      crsctl check cluster -all
Status - Current Node:   crsctl check crs
VIP:                     crs_stat -t | grep vip
Node Apps (status):      srvctl status nodeapps
Node Apps (config):      srvctl config nodeapps

--Start|Stop
You may need to run these as root. Example:

export GRID_HOME=/u01/app/12.2.0.1/grid
export PATH=$GRID_HOME/bin:$PATH
cd $GRID_HOME/bin
  • crsctl start cluster
  • crsctl stop cluster -all

FORCE

  • crsctl stop crs -f
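
A full single-node bounce of the Clusterware stack, run as root, using the GRID_HOME from the example above (adjust the path for your release):

export GRID_HOME=/u01/app/12.2.0.1/grid
$GRID_HOME/bin/crsctl stop crs     # stop the entire stack on this node
$GRID_HOME/bin/crsctl start crs    # restart it
$GRID_HOME/bin/crsctl check crs    # confirm CRS, CSS, and EVM are online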

Database Instances

--Display

  • srvctl config database -d [DBName]
  • srvctl status database -d [DBName]
  • SELECT name FROM v$database;
  • SELECT inst_name FROM v$active_instances;
  • SELECT instance_name,status,database_status,active_state
    FROM gv$instance ORDER BY instance_name;

--Start|Stop
Run these as the oracle user:

All Instances

  • srvctl start database -d MyDBName
  • srvctl stop database -d MyDBName

Single Instance

  • srvctl start instance -d MyDBName -i MyInstName
  • srvctl stop instance -d MyDBName -i MyInstName

FORCE

  • srvctl stop instance -d MyDBName -i MyInstName -f
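
A rolling-restart sketch as the oracle user, assuming a two-instance database DB1 with instances DB1_1 and DB1_2 (names are illustrative); restarting one instance at a time keeps the database available throughout:

srvctl stop instance -d DB1 -i DB1_1
srvctl start instance -d DB1 -i DB1_1
srvctl stop instance -d DB1 -i DB1_2
srvctl start instance -d DB1 -i DB1_2
srvctl status database -d DB1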

Listener

--Display

  • ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
  • lsnrctl status
  • srvctl status scan_listener
  • srvctl status listener

--Start|Stop
srvctl [start|stop] listener -n [NodeName]

srvctl start listener -n rac02

Logs

  • Cluster: $ORACLE_BASE/diag/crs/MyHostname/crs/trace/alert.log
  • Database: $ORACLE_BASE/diag/rdbms/MySID/MySID/trace/alert_MySID.log
  • Listener: $ORACLE_BASE/diag/tnslsnr/MyHostname/listener/alert/log.xml
  • Tools: /u01/app/<gi_version>/grid/cfgtoollogs
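
The same logs can also be reached through ADR with adrci, which is handy when you do not remember the exact path; for example:

adrci exec="show homes"             # list the ADR homes on this node
adrci exec="show alert -tail 20"    # tail an alert log (prompts if multiple homes)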

NTP

--Display Status
cluvfy comp clocksync -n all -verbose
--Install|Configure
Configure NTP.
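
A quick OS-level sanity check, assuming a chrony-based host (RHEL/OL 7 and later); on older ntpd setups Oracle expects the slewing option (-x) in /etc/sysconfig/ntpd:

systemctl status chronyd    # time service running?
chronyc tracking            # offset and sync source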

OCR

--Location
cat /etc/oracle/ocr.loc

--Status (Voting Disks)
grid> crsctl query css votedisk
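
Clusterware also takes automatic OCR backups; to list them (as root or grid from the grid home):

ocrconfig -showbackup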

Server Pool

--Display

  • srvctl config srvpool

Sessions

--Instance Current Session is Logged Into

SELECT host_name FROM gv$instance WHERE instance_number=userenv('instance');

--Instances All Sessions are Logged Into

SELECT i.host_name, s.username 
FROM gv$session s 
JOIN gv$instance i on (i.inst_id=s.inst_id)
WHERE username is not null;
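
A variant that summarizes session counts per instance rather than listing every session:

SELECT i.host_name, i.instance_name, COUNT(*) AS sessions
FROM gv$session s
JOIN gv$instance i ON (i.inst_id = s.inst_id)
WHERE s.username IS NOT NULL
GROUP BY i.host_name, i.instance_name
ORDER BY i.instance_name;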

Services - Display

  • systemctl

VIP - Display

  • crs_stat -t | grep vip
  • srvctl status scan

APPENDIX

Aliases

Add to .bashrc on each node. Change as needed.

# Aliases - Common
alias cl='crontab -l'
alias l9='ls -alt | head -9' 
alias l20='ls -alt | head -20'
alias l50='ls -alt | head -50'
alias tf='date;ls -l|wc -l'

# DBA Aliases
alias cdbin='cd $ORACLE_HOME/bin;pwd'
alias cdtrace='cd $ORACLE_BASE/diag/rdbms/$ORACLE_SID/${ORACLE_SID}_1/trace;pwd'
alias cdnet='cd $ORACLE_HOME/network/admin;pwd'

alias alog='tail -f $ORACLE_BASE/diag/rdbms/$ORACLE_SID/${ORACLE_SID}_1/trace/alert_${ORACLE_SID}_1.log'
alias clog='tail -f /u01/app/grid/diag/crs/$HOSTNAME/crs/trace/alert.log'
alias llog='tail -f /u01/app/grid/diag/tnslsnr/$HOSTNAME/listener/alert/log.xml'

alias sqp='sqlplus sys/mypw@DB1 as sysdba'
alias rmn='$ORACLE_HOME/bin/rman target / nocatalog'
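
To pick up the new aliases in your current shell without logging out:

. ~/.bashrc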

SQLPLUS Usage

Get Service Name

cat $ORACLE_HOME/network/admin/tnsnames.ora

DB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DB1)
    )
  )

Connect

export ORACLE_SID=DB1
sqlplus sys/MyPassword@DB1 as sysdba

SQL> SELECT inst_name FROM v$active_instances;

INST_NAME
------------------------------------------------------------
rac02:DB1_1
rac01:DB1_2

Quick RAC Status Checks

As grid user:

Cluster (brief):          crsctl check cluster -all
Cluster Services on Node: crsctl status res -t -init
Nodes:                    olsnodes
ASM:                      srvctl status asm
Listener:                 ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
Cluster Verify:           cluvfy stage -post crsinst -n n1,n2 > /tmp/cluvfy.out

As oracle user:

srvctl status database -d [DBName]
SQL> SELECT inst_name FROM v$active_instances;
SQL> SELECT instance_name,status,database_status,active_state 
     FROM gv$instance ORDER BY instance_name;

Assumes you have installed the database product and created a database.
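
The grid-side checks lend themselves to a small wrapper script; a minimal sketch, assuming the grid user's environment already has $GRID_HOME/bin in PATH:

#!/bin/bash
# rac_status.sh - quick cluster health snapshot, run as grid
crsctl check cluster -all
olsnodes -n
srvctl status asm
srvctl status scan_listener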

Cluster Verification Script

Usage example: /u01/orasw/grid/runcluvfy.sh stage -pre crsinst -n rac01,rac02 -fixup -verbose

Database Express

https://<your_rac-scan OR localhost>:5500/em
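
If 5500 does not respond, the configured HTTPS port can be queried from the database (12c and later):

SQL> SELECT dbms_xdb_config.gethttpsport() FROM dual;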


Shutdown Log Snippets

Issued from rac01: srvctl stop database -d db1

rac01

--Alert Log

ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Mon Nov 21 16:01:57 2016
NOTE: force a map free for map id 2
Mon Nov 21 16:01:58 2016
Stopping background process VKTM
Mon Nov 21 16:01:58 2016
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Mon Nov 21 16:01:58 2016
NOTE: Shutting down MARK background process
Mon Nov 21 16:02:01 2016
NOTE: ASMB releasing group resources due to shutdown
NOTE: ASMB clearing idle groups before exit
Stopping background process RBAL
Mon Nov 21 16:02:02 2016
freeing rdom 0
Mon Nov 21 16:02:04 2016
Instance shutdown complete

--Cluster Log

2016-11-21 10:57:28.172 [OCSSD(2555)]CRS-1601: CSSD Reconfiguration complete. Active nodes are rac01 rac02 .
2016-11-21 10:58:03.740 [CRSD(5177)]CRS-2772: Server 'rac02' has been assigned to pool 'ora.db_pool'.
2016-11-21 11:08:20.732 [CRSD(5177)]CRS-2757: Command 'Start' timed out waiting for response from the resource 'ora.db1.db'. Details at (:CRSPE00163:) {2:8614:2} in /u01/app/grid/diag/crs/rac01/crs/trace/crsd.trc.

rac02

--Alert Log

Mon Nov 21 16:01:08 2016
minact-scn: Inst 2 is now the master inc#:4 mmon proc-id:3099 status:0x7
minact-scn status: grec-scn:0x0000.00000000 gmin-scn:0x0000.000b60c5 gcalc-scn:0x0000.000b60c5
minact-scn: Master returning as live inst:1 has inc# mismatch instinc:0 cur:4 errcnt:0
Mon Nov 21 16:02:03 2016
Reconfiguration started (old inc 4, new inc 6)
List of instances (total 1) :
 2
Dead instances (total 1) :
 1
My inst 2   
publish big name space -  dead or down/up instance detected, invalidate domain 0 
 Global Resource Directory frozen
 * dead instance detected - domain 0 invalid = TRUE 
 Communication channels reestablished
 Master broadcasted resource hash value bitmaps
 Non-local Process blocks cleaned out
Mon Nov 21 16:02:03 2016
 LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
 Set master node info 
 Submitted all remote-enqueue requests
 Dwn-cvts replayed, VALBLKs dubious
 All grantable enqueues granted
Mon Nov 21 16:02:03 2016
minact-scn: master found reconf/inst-rec before recscn scan old-inc#:4 new-inc#:4
Mon Nov 21 16:02:03 2016
 Post SMON to start 1st pass IR
Mon Nov 21 16:02:03 2016
Instance recovery: looking for dead threads
Mon Nov 21 16:02:03 2016
 Submitted all GCS remote-cache requests
 Fix write in gcs resources
Mon Nov 21 16:02:03 2016
Reconfiguration complete (total time 0.1 secs) 
Mon Nov 21 16:02:03 2016
Instance recovery: lock domain invalid but no dead threads
* validated domain 0, flags = 0x0
Mon Nov 21 16:02:06 2016
minact-scn: master continuing after IR
Mon Nov 21 16:03:03 2016
Decreasing number of real time LMS from 1 to 0

--Cluster Log

TBD

SCAN IP

The IP addresses must be on the same subnet as your default public network in the cluster. The name must be 15 characters or less in length, not including the domain, and it must be resolvable without the domain suffix (for example, "sales1-scan" must resolve on its own, not only as "sales1-scan.example.com"). The IPs must not be assigned to any network interface beforehand, since Oracle Clusterware manages them.
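
To confirm the SCAN name resolves correctly without the domain suffix (rac-scan as used in the tnsnames.ora example above):

nslookup rac-scan    # should return all SCAN IPs (typically three)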