The primary configuration file for the NFS server is /etc/exports. This is the file in which you specify which directories you want to share with NFS clients. The syntax of each line in this file is:

Directory hostname(options)
The value of Directory should be replaced with the name of the directory you want to share (for example, /usr/share/doc). The value hostname should be a client hostname that can be resolved into an IP address. The options value is used to specify how the resource should be shared.
For example, the following entry in the /etc/exports file would share the /usr/share/doc directory with the NFS client client01 (with the options of read-write) and the NFS client client02 (with the option of read-only):
# vi /etc/exports
/usr/share/doc client01(rw) client02(ro)
Note that there is a space between the client01 and client02 name/options, but no space between the hostname and its corresponding option. A common mistake of novice administrators is to provide an entry like the following:
/usr/share/doc client01 (rw)
Because of the space, the previous line is parsed as two entries: it shares the /usr/share/doc directory with the client01 host using the default options, and the (rw) applies to a wildcard matching all other hosts, giving every other system read-write access to the share.
When you’re specifying a hostname in the /etc/exports file, the following methods are permitted:
- hostname: A hostname that can be resolved into an IP address.
- netgroup: An NIS netgroup using the designation of @groupname.
- domain: A domain name using wildcards. For example, *.onecoursesource.com would include any machine in the onecoursesource.com domain.
- network: A network defined by IP addresses using either VLSM (Variable Length Subnet Mask) or CIDR (Classless Inter-Domain Routing) notation. Examples: 192.168.1.0/255.255.255.0 and 192.168.1.0/24.
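For illustration, a single /etc/exports file could combine several of these forms. The directory names, netgroup, and network below are hypothetical examples, not values from a real system:

```
/srv/www    client01(ro)
/srv/tools  @trusted(rw)
/srv/doc    *.onecoursesource.com(ro)
/srv/data   192.168.1.0/24(rw)
```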
There are many different NFS sharing options, including these:
- rw: Share as read-write. Keep in mind that normal Linux permissions still apply. (Note that some older NFS implementations treated this as the default; current versions of nfs-utils default to ro.)
- ro: Share as read-only.
- sync: File data changes are made to disk immediately, which has an impact on performance, but is less likely to result in data loss. On some distributions this is the default.
- async: The opposite of sync; file data changes are made initially to memory. This speeds up performance but is more likely to result in data loss. On some distributions this is the default.
- root_squash: Map the root user and group account from the NFS client to the anonymous accounts, typically either the nobody account or the nfsnobody account. See the next section, “User ID Mapping,” for more details. (Note that this is a default option.)
- no_root_squash: Map the root user and group account from the NFS client to the local root and group accounts.
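These options can be combined in a comma-separated list after the client specification. For example, the following hypothetical entry shares /srv/data read-write with the 192.168.1.0/24 network, making synchronous writes and root squashing explicit:

```
/srv/data  192.168.1.0/24(rw,sync,root_squash)
```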
User ID Mapping
To make the process of sharing resources from the NFS server to the NFS client as transparent as possible, make sure that the same UIDs (user IDs) and GIDs (group IDs) are used on both systems.
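You can compare the numeric IDs on the two systems with the id command. In this sketch, root is used only because it exists on every system; in practice you would check the account that will be accessing the share:

```shell
# Run on both the NFS server and the NFS client and compare the output;
# the numeric IDs must match on both systems for file ownership to map
# correctly across the share.
id -u root    # UID of the account (0 for root)
id -g root    # primary GID of the account (0 for root)
```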
NFS Server Commands
The exportfs command can be used on the NFS server to display what is currently shared:
# exportfs
/share          [world]
The exportfs command can also be used to temporarily share a resource, assuming the NFS services have already been started:
# exportfs -o ro 192.168.1.100:/usr/share/doc
# exportfs
/usr/share/doc  192.168.1.100
/share          [world]
The -o option is used to specify the share options. The final argument is the name of the system to share with and the directory to share, separated by a colon (:) character.
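The exportfs command can also temporarily unshare a resource with the -u option. For example, to stop sharing the directory exported above (the host and directory here match the earlier illustration):

```
# exportfs -u 192.168.1.100:/usr/share/doc
```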
If you make changes to the /etc/exports file, any newly added share will not take effect until the NFS service is restarted or the system is rebooted. If you want to enable these changes immediately, execute the following command:
# exportfs -a
The nfsstat command can display useful NFS information. For example, the following command displays the NFS shares currently mounted on the system where it is run:
# nfsstat -m
/access from 10.0.2.15:/share
 Flags: rw,vers=3,rsize=131072,wsize=131072,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=10.0.2.15
The showmount command displays similar information:
# showmount -a
All mount points on onecoursesource.localdomain:
10.0.2.15:/share
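The showmount command can also list which directories a server exports, using the -e option. The server address and output below are hypothetical, consistent with the examples in this section:

```
# showmount -e 192.168.1.22
Export list for 192.168.1.22:
/share 10.0.2.15
```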
Configuring an NFS Client
Mounting an NFS share is not much different from mounting a partition or logical volume. First, create a regular directory to serve as the mount point:
# mkdir /access
Next, use the mount command to mount the NFS share:
# mount 192.168.1.22:/share /access
You can verify the mount was successful either by executing the mount command or by viewing the /proc/mounts file. The advantage of viewing the /proc/mounts file is that it provides more detail:
# mount | tail -1
192.168.1.22:/share on /access type nfs (rw,addr=192.168.1.22)
# tail -1 /proc/mounts
192.168.1.22:/share /access nfs rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.22,mountvers=3,mountport=772,mountproto=udp,local_lock=none,addr=192.168.1.22 0 0
This mount would not be reestablished if the NFS client were rebooted. To make the mount persist across reboots, add an entry like the following to the /etc/fstab file:
# tail -1 /etc/fstab
192.168.1.22:/share  /access  nfs  defaults  0 0
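The defaults keyword can be replaced with NFS-specific mount options. As a hypothetical variation on the entry above, the following uses a soft mount so that an unreachable server eventually returns an error instead of hanging the client, and _netdev so the mount is deferred until the network is up:

```
192.168.1.22:/share  /access  nfs  soft,timeo=100,retrans=3,_netdev  0 0
```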
After adding this entry to the /etc/fstab file, unmount the NFS share (if necessary) and test the new entry by providing only the mount point when executing the mount command:
# umount /access
# mount /access
# mount | tail -1
192.168.1.22:/share on /access type nfs (rw,addr=192.168.1.22)