
Linux, Installing and Using NFS

Overview

Network File System (NFS) allows you to share files and folders between Linux systems as if they were mounted locally. What follows is the essential information on how to implement NFS.

NFS on Linux can be used for RMAN files, Data Pump files, and other peripheral-type files. In a production RAC environment it is ideal to have a dedicated NFS Server, making all the nodes NFS Clients.

The following scenario shows NFS between two systems: one acting as the NFS Server (lnx01) and one as the NFS Client (lnx02).

Be aware that while the portmapper always uses port 111, other NFS-related ports can change each time the server is booted or a drive is remounted (see NFS\RPC Port Usage below). Make sure your firewall infrastructure is not blocking NFS-related ports.

Prerequisites

  • On the system that will be the NFS Server, you have a disk\device that has been presented and can be seen from Linux (df -h).
  • Disable firewalls while implementing. See references if you need to implement NFS on a system with firewalls enabled.

If the NFS Server is not available when the NFS Client boots, it can hang the OS if there are NFS entries in /etc/fstab.

You may therefore find the boot-script method (see Script for NFS Client below) preferable to making /etc/fstab entries on the client.

Install NFS

 -- Install NFS Packages
 yum install nfs-utils nfs-utils-lib -y

 -- Validate Installed
 systemctl list-unit-files|grep nfs

 -- Start Required Services
 service rpcbind start
 service nfs start

 -- Ensure Services Start on Boot
 chkconfig --level 235 rpcbind on
 chkconfig --level 235 nfs on

The above is only required on your NFS Server; it is not required on an NFS Client system.
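
On systemd-based systems (RHEL\CentOS 7+) the service and chkconfig commands above are legacy wrappers. A minimal sketch of the native equivalents, assuming the nfs-server unit name used on those releases:

 -- systemd Equivalents
 systemctl enable rpcbind nfs-server
 systemctl start rpcbind nfs-server
 systemctl status nfs-server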

Configure NFS Server

 -- QC: Ensure path to resource to share exists (Ex: sdb1 mounted as /u01).
 lsblk

 sdb       8:16   0   2G  0 disk 
 |--sdb1   8:17   0   2G  0 part /u01

 -- QC: Ensure you can write to path to export.
 touch /u01/test.txt
 ls -l /u01/test.txt

 -- Configure exports File
 vi /etc/exports
 /u01 *(rw,sync,no_root_squash)
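
The * allows any host to mount this export. To restrict access to a single host or subnet instead, a hedged example:

 /u01 192.168.56.0/24(rw,sync,no_root_squash)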

 -- Export Path
 exportfs -a <== Exports all shares listed in /etc/exports file.

 -- Confirm Exported
 showmount -e
   Export list for lnx01:
   /u01 *

To bounce NFS service: service nfs restart

Configure NFS Client

Options:

 Standard: -o rw,soft
 Oracle System: -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600

Per Oracle Doc ID 359515.1.

Example Session

 -- Confirm Client Can See NFS Exported Path
 showmount -e 192.168.56.101
 Export list for 192.168.56.101:
 /u01 *

 -- Mount Shared NFS Path
 cd /
 mkdir /u03
 mount -t nfs -o rw,soft 192.168.56.101:/u01 /u03

 -- QC: Ensure you can see and interact with NFS Resource
 ls -l /u03
 vi /u03/test.txt
 cat /u03/test.txt
 ls -l /u03/test.txt

 -- QC: More Exhaustive Tests Via dd Command
 dd if=/dev/zero of=/u03/test.img bs=4k iflag=fullblock,count_bytes count=25M
 dd if=/dev/zero of=/u03/test.img bs=4k iflag=fullblock,count_bytes count=256M
 dd if=/dev/zero of=/u03/test.img bs=4k iflag=fullblock,count_bytes count=1G
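
The writes above land in the client's page cache first. To include the final flush to the NFS Server in the timing, a hedged variant adds conv=fsync:

 dd if=/dev/zero of=/u03/test.img bs=4k iflag=fullblock,count_bytes count=256M conv=fsync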

 -- Permanently Mount *
 vi /etc/fstab
 192.168.56.101:/u01 /u03 nfs defaults 0 0

* The /etc/fstab method may not be ideal for everyone (shown here for completeness). Review this page for other options.
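
One way to reduce the boot-hang risk noted earlier is to combine the _netdev option (defer the mount until networking is up) with a soft mount so an unreachable server eventually times out. A sketch, not a recommendation for Oracle datafiles:

 192.168.56.101:/u01 /u03 nfs defaults,_netdev,soft 0 0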

Unconfigure NFS

/u03 is the mount point used in these examples.

Unconfigure NFS on Client

 -- Unmount Volume (-f = force; -i = do not call the umount.nfs helper)
 umount -f /u03
 OR
 umount -i /u03

 -- Remove Entry from fstab file.
 vi /etc/fstab
 #192.168.56.101:/u01 /u03 nfs defaults 0 0

 -- Check to Ensure /u03 Gone 
 df -h

Unconfigure NFS on Server

Stop Exporting Just /u01

 -- In the exports file, comment out the /u01 entry.
 vi /etc/exports
 #/u01 *(rw,sync,no_root_squash)

 -- Refresh changes to exports file and confirm.
 exportfs -r
 showmount -e

Stop NFS Service

 service nfs stop  
 chkconfig --level 235 nfs off
 service nfs status

NFS Commands

showmount -e                 Shows the available shares on your local machine.
showmount -e <ip|hostname>   Lists the available shares on the remote server.
showmount -d                 Lists the directories currently mounted by clients.
exportfs -v                  Displays a list of shared files and options on a server.
exportfs -a                  Exports all shares listed in /etc/exports.
exportfs -ua                 Unexports all shares listed in /etc/exports.
exportfs -r                  Refreshes the server's list after modifying /etc/exports.

Common /etc/exports File Options

ro                 Read-only access.
rw                 Read and write access.
sync               Confirms requests to the directory only once the changes have been committed.
no_subtree_check   Prevents subtree checking. Increases reliability but reduces security.
no_root_squash     Allows root to connect to the designated directory.

Example Entries

 /u03 *(rw,sync,no_root_squash)
 /iso *(ro,sync)

All Options via NFS Client: mountstats <NFS Mount Point>

rw,vers=4.1,rsize=524288,wsize=524288,namlen=255,acregmin=3,acregmax=60,
acdirmin=30,acdirmax=60,hard,proto=tcp,port=0,timeo=600,retrans=2,
sec=sys,clientaddr=192.168.56.72,local_lock=none

Removing the NFS Mount from Client

To unmount a shared directory from the NFS Client: umount -f <mount_point>

 umount -f /u03
 OR
 umount -i /u03

Status Checks

The following commands can all show NFS\disk performance from various points of view.

Server

 -- rpcbind Service Status
 service rpcbind status

 -- NFS Service Status
 service nfs status

 -- Available Shares on Local Machine
 showmount -e

 -- NFS Processes 
 ps aux | grep nfs

NFS Processes (RPCNFSDCOUNT) and other values can be adjusted here: /etc/sysconfig/nfs
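
For example, to raise the nfsd thread count from the default of 8 (a sketch; bounce the NFS service afterwards):

 vi /etc/sysconfig/nfs
 RPCNFSDCOUNT=16

 service nfs restart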

Client

 -- RPC and NFS Info
 rpcinfo -t <NfsSrvIp> nfs 4
 program 100003 version 4 ready and waiting

 -- Available shares on your NFS Server.
 showmount -e <NfsSrvIp>
 Export list for 192.168.56.101:
 /u01 *

 -- NFS IO Stats
 nfsiostat 5
 op/s          rpc bklog
 0.06          0.00
 read:  ops/s   kB/s    kB/op   retrans    avg RTT (ms)   avg exe (ms)
        0.001   0.000   0.309   0 (0.0%)   7.000          7.000
 write: ops/s   kB/s    kB/op   retrans    avg RTT (ms)   avg exe (ms)
        0.001   0.004   3.060   0 (0.0%)   1.667          2.000

 -- List of NFS Mounts
 nfsstat -m
 /u03 from 192.168.56.101:/u01 Flags: rw,relatime,vers=4.1,rsize=524288,wsize=524288,
 namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.56.102,
 local_lock=none,addr=192.168.56.101

 -- Net Info
 nmap <NfsSrvIp>
 Starting Nmap 6.40 ( http://nmap.org ) at 2018-10-16 14:39 EDT
 Nmap scan report for lnx01 (192.168.56.101)
 Host is up (0.000088s latency).
 Not shown: 996 closed ports
 PORT     STATE SERVICE
 22/tcp   open  ssh
 53/tcp   open  domain
 111/tcp  open  rpcbind
 2049/tcp open  nfs
 MAC Address: 08:00:27:AA:8E:D1 (Cadmus Computer Systems)
 Nmap done: 1 IP address (1 host up) scanned in 0.15 seconds

 -- NFS Mount Point Stats
 mountstats <NFS Mount Point>
 mountstats /u03

Both

 -- CPU Performance (mpstat reports per-processor stats)
 mpstat 1 10

 -- CPU and I\O Stats
 iostat

 -- RPC statistics
 nfsstat -r

APPENDIX

Prepping Two Virtualbox Systems

  1. Create a base Linux 7.x vbox that can access the Internet to use yum.
  2. Make two clones as: lnx01 and lnx02:
    192.168.56.71 lnx01.localdomain lnx01
    192.168.56.72 lnx02.localdomain lnx02
  3. Set hostnames and make /etc/hosts entries.
  4. Other Changes
 - Set Interface from GUI (enp0s8)
   Applications -> System Tools -> Settings -> Network

   IPv4 -> Address: Manual
           Address: 192.168.56.71

 - vi /etc/resolv.conf
   nameserver 100.1.1.84
   nameserver 100.1.1.51
   nameserver 192.168.56.71

 - dnsmasq
   yum install dnsmasq -y
   mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig

   vi /etc/dnsmasq.conf
     expand-hosts
     local=/localdomain/
     listen-address=127.0.0.1
     listen-address=192.168.56.71
     bind-interfaces


   service dnsmasq restart
  • Change above .71 entries on lnx02 to .72 correspondingly.
  • This example scenario uses dnsmasq for emulating DNS.

Oracle Actions

SQL> CREATE OR REPLACE DIRECTORY u03 AS '/u03/exports';
SQL> GRANT read,write ON DIRECTORY u03 TO system;
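
The directory can then be used for a Data Pump export to the NFS path. A sketch, assuming the /u03/exports path above exists and is writable by oracle; adjust credentials and parameters for your environment:

 expdp system DIRECTORY=u03 DUMPFILE=full.dmp LOGFILE=full.log FULL=y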

Script for NFS Client

You can use the following script to mount an NFS resource more safely at boot. Change as required for your environment.

1. Comment out the NFS Server entry in the /etc/fstab file.

 #192.168.56.71:/u01 /u03 nfs defaults 0 0

2. Create Script

#!/bin/bash
# File: initNFS.sh (Mounts NFS directory).
# Ver:  2019.02
# To unmount: umount -f <LocalDir>  Example: umount -f /u03

# User Vars
sNfsSrv_IP="192.168.1.42";         # <NfsSrvIp>
sNfsSrv_ExpStr="$sNfsSrv_IP:/u01"; # <NfsSrvIp:/NfsSrvMountedDrv> Ex: sNfsSrv_IP:/u01
sNfsOptions="-o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600";
# Oracle System: "-o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600"
# Std: "-o rw,soft" 
sLocalDir="/u03"                   # Local dir to create NFS mount point. Ex: /u03
nWait=120;                         # How many seconds to wait for NFS Service to start.

# Start
nExit=1;
printf "$(basename ""$0""): Started $(date "+%Y-%m-%d %H:%M:%S")\n"
printf "Waiting $nWait seconds...\n"
sleep $nWait

# Can We Reach the NFS Server?
ping -q -c 3 $sNfsSrv_IP > /dev/null 2>&1
if [[ $? -ne 0 ]]; then
   printf "NFS Server: Cannot Reach $sNfsSrv_IP\n"
else
   printf "NFS Server: OK\n"

   # Can We Mount NFS Path?
   mount -t nfs $sNfsOptions $sNfsSrv_ExpStr $sLocalDir
   if [[ $? -ne 0 ]]; then
      printf "NFS Mount Status: Could Not Mount $sNfsSrv_ExpStr\n"
   else
      printf "NFS Mount Status: $sLocalDir Mounted OK\n"
      nExit=0;
   fi
fi

# End
printf "$(basename ""$0""): Ended $(date "+%Y-%m-%d %H:%M:%S")\n"
exit $nExit

Though I have created MUCH more complex versions of this script, this version works really well in most cases.
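
Make the script executable and test it manually before relying on cron (the path below is the one assumed by the cron entry in the next step):

 chmod 744 /u01/app/scripts/onBoot/initNFS.sh
 /u01/app/scripts/onBoot/initNFS.sh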

3. Create cron Entry

Make the following cron entry for the root user.

 @reboot /u01/app/scripts/onBoot/initNFS.sh > /u01/app/scripts/out/initNFS.out
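
A minor variant also captures any error messages by redirecting stderr to the same file:

 @reboot /u01/app/scripts/onBoot/initNFS.sh > /u01/app/scripts/out/initNFS.out 2>&1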

Common Errors

ORA-27054

The NFS file system where the file is created or resides is not mounted with the correct options.

Non-RAC System

Per Doc ID 781349.1, mount using these options:

 -o rw,bg,hard,rsize=32768,wsize=32768,vers=3,forcedirectio,nointr,proto=tcp,suid host:/path /path

The order of the options is important.

RAC Node System

Per 359515.1, mount using these options:

 mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 host:/path /path

The order of the options is important.

df -h Hangs

This can happen after NFS is first installed or if the NFS Server is offline.

Run the command below to check whether it hangs:

 ls -l /proc/sys/fs/binfmt_misc

If the above hangs, restart the binfmt mount (AKA the binfmt filesystem) to clear the hang:

 systemctl restart proc-sys-fs-binfmt_misc.mount

If df -h type commands still hang and the NFS Server is OK\online, unmount and remount the NFS resource (as soft). Example below:

 umount -f /u03
 mount -t nfs -o rw,soft 192.168.56.71:/u01 /u03

Further Research and Options

Per Redhat: The NFS Client mounts the NFS export using the hard option by default. This results in NFS RPC requests being retried indefinitely. A side effect of a hard-mounted NFS file system is that processes block (or "hang") in a high-priority, uninterruptible disk-wait state until their NFS RPC reply is received from the NFS Server. Hence, when an NFS Server goes down, the NFS mounts on the NFS Client will hang until the NFS Server recovers.

Soft Option

The soft NFS mount option allows NFS RPC requests to eventually timeout and return an I/O error to the calling application.

 mount -t nfs -o rw,soft 192.168.56.71:/u01 /u03

The docs note that, because write operations that return I/O errors are not retried indefinitely, there is a risk of data corruption when using the soft option.

Soft Option NFS Client System Findings

In tests with the soft option:

  • df -h still hangs.
  • Other commands though (like ls -l) actually timed out in about 5 minutes (not 10) versus the default timeo=600. That is still a fair amount of time, so the real chance of corruption may not be significant.

Example Timeout With Soft Option Used

 [root@lnx02 /]# ls -l /u03
 ls: cannot access /u03: Input/output error

Hard Option NFS Client System Findings

  • If the hard option is used, commands hang indefinitely (Ctrl-C will not work either).
  • When an offline NFS Server came back online, the originally hung command on the client remained hung, but new commands worked OK.
  • With the NFS Server back online, unmounting and remounting the NFS resource worked normally again.

RPC: Port Mapper Failure \ No Route to Host

Message

 showmount -e MyHost
 clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)

Solution

  1. Ensure there are no duplicate IPs on your network.
  2. Disable firewalls.
  3. If the firewall cannot be disabled, allow RPC (see the firewalld sketch below).

Disable Firewall

 systemctl disable firewalld
 systemctl stop firewalld
 service iptables stop
 chkconfig iptables off
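
If the firewall must stay enabled, you can instead open the NFS-related firewalld services. A sketch; verify the service names exist on your release:

 firewall-cmd --permanent --add-service=nfs
 firewall-cmd --permanent --add-service=rpc-bind
 firewall-cmd --permanent --add-service=mountd
 firewall-cmd --reload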

journal: g_dbus_interface_skeleton_unexport: assertion 'interface_->priv->connections != NULL' failed

This message is seen in /var/log/messages.

Per Redhat Support:

 Resolution
 The message itself appears to be harmless and doesn't appear to affect the  
 operation of the system. To solve this issue, update the accountsservice 
 package to accountsservice-0.6.50-2.el7.

 Raw
 1. Update accountsservice package
    accountsservice-0.6.50-2.el7
    accountsservice-libs-0.6.50-2.el7
 2. Restart accounts-daemon
    # systemctl restart accounts-daemon
 3. Confirm the reported message no longer appears after updating the accountsservice package

Common Client Tasks

Init Mount Point

 umount -f /u03
 exportfs -ua   <== Run on the NFS Server (unexports all shares).
 showmount -e 192.168.56.71

Mount

 mount -t nfs 192.168.56.71:/u01 /u03

QC

 mount | grep nfs
 df -h

Privs

 chown oracle:oinstall /u03/exports
 chown oracle:oinstall /u03/rman

Further Info

On the client, run "umount -l <mount_point>" to do a lazy unmount of the filesystem. This typically clears issues, including hung processes that had been attempting to traverse the mount.

 umount -l /u03

Cheat Sheet

 -- Srv 192.168.56.73
 cd /
 mkdir /u01
 touch /u01/test.txt

 vi /etc/exports
 /u01 *(rw,sync,no_root_squash)

 exportfs -a
 showmount -e

 -- Client
 ping 192.168.56.73
 showmount -e 192.168.56.73

 cd /
 mkdir /u03
 mount -t nfs -o rw,soft 192.168.56.73:/u01 /u03
 OR 
 mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 192.168.56.73:/u01 /u03

NFS\RPC Port Usage

nfsclient> rpcinfo -p

   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  48025  status
    100024    1   tcp  35852  status
    100021    1   udp  40450  nlockmgr
    100021    3   udp  40450  nlockmgr
    100021    4   udp  40450  nlockmgr
    100021    1   tcp  10093  nlockmgr
    100021    3   tcp  10093  nlockmgr
    100021    4   tcp  10093  nlockmgr

While the portmapper will always use port 111, the other ports can change each time the server is booted or a drive is remounted! As a result, make sure your firewall infrastructure is not blocking NFS-related ports.
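
If your firewall rules need stable ports, the dynamic services can be pinned in /etc/sysconfig/nfs on the server and the NFS services restarted. A sketch using the conventional RHEL example ports; variable names can vary by release:

 MOUNTD_PORT=892
 STATD_PORT=662
 LOCKD_TCPPORT=32803
 LOCKD_UDPPORT=32769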

Related Oracle Doc IDs:

  • Data Pump Export Hang when Exporting to a NAS / NFS Mount (Doc ID 2607464.1)
  • Data Pump Hanging When Exporting To NFS Location (Doc ID 434508.1).
    A firewall blocking ports used by the NFS server can also cause issues.