How to Back Up and Restore a ZFS root pool in Solaris 10

by admin

The following procedure can be used to back up and restore a ZFS root pool (rpool) using the tools provided in Solaris 10 and above. It is advisable for the system administrator to become comfortable with this procedure, and to attempt a restore, before relying on it in production environments.

In this procedure, it is assumed the root pool is called ‘rpool’, which is the standard name given during installation. The following sample filesystems are used as part of the rpool:

rpool
rpool/ROOT
rpool/ROOT/s10u7
rpool/export
rpool/export/home

This may need to be adjusted depending upon the filesystems that were created as part of the Solaris installation. Furthermore, it does not take into account any Live Upgrade boot environments or cloned filesystems. Each filesystem is sent individually where possible, so that a single filesystem can be restored from its own stream file if required.

Backing up a Solaris ZFS root pool

1. Take a copy of the properties that are set on the rpool and on all filesystems and volumes associated with it:

# zpool get all rpool
# zfs get -rHpe all rpool
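
A simple way to keep this output for later is to redirect it to files on the backup location (here /backup is assumed to be the same NFS-mounted filesystem used for the stream files later in this procedure):

# zpool get all rpool > /backup/rpool.zpool.props
# zfs get -rHpe all rpool > /backup/rpool.zfs.props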

2. Keep this data for reference in case it is required later on. Now snapshot the rpool with a suitable snapshot name:

# zfs snapshot -r rpool@backup

This will create a recursive snapshot of all descendants, including rpool/export, rpool/export/home as well as rpool/dump and rpool/swap (volumes).
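
As an optional sanity check, list the snapshots to confirm that every dataset was covered:

# zfs list -t snapshot -r rpool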

3. swap and dump do not need to be included in the backup, so destroy their snapshots:

# zfs destroy rpool/swap@backup
# zfs destroy rpool/dump@backup

4. Then, for each dataset, send the data to a backup file/location. Make sure that there is sufficient capacity in the backup location, as “zfs send” will fail if the destination becomes full (e.g. a multi-volume tape). In this example, /backup is a filesystem NFS-mounted from a suitably capacious server:

# zfs send -v rpool@backup > /backup/rpool.dump
# zfs send -v rpool/ROOT@backup > /backup/rpool.ROOT.dump
# zfs send -vR rpool/ROOT/s10u7@backup > /backup/rpool.ROOT.s10u7.dump
# zfs send -v rpool/export@backup > /backup/rpool.export.dump
# zfs send -v rpool/export/home@backup > /backup/rpool.export.home.dump
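
If capacity on the backup target is a concern, a stream can optionally be compressed in transit; a minimal sketch for one filesystem, assuming gzip is available:

# zfs send -v rpool/export/home@backup | gzip > /backup/rpool.export.home.dump.gz

A stream saved this way would later be restored with gzcat piped into “zfs receive” instead of a plain redirect.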

These dump files can then be archived onto non-volatile storage for safekeeping, e.g. magnetic tape.

Restoring a Solaris ZFS root pool

If it is necessary to rebuild/restore a root pool, locate the known good copies of the zfs streams that were created in the Backing Up section of this post and make sure these are readily available. In order to restore the root pool, first boot from a Solaris 10 DVD or network (jumpstart) into single user mode. Depending upon whether the booted root filesystem is writable, it may be necessary to tell ZFS to use a temporary location for the mountpoint.

Prerequisites

At the time of writing, a root pool must:

  • Live on a disk with an SMI disk label
  • Be composed of a single slice, not an entire disk (use the cXtYdZs0 syntax, NOT cXtYdZ, which would use the entire disk with an EFI label)
  • Be created with the same pool version as the original rpool. The original version appears in the saved “zpool get all” output, and the versions supported by the booted environment in the output of “zpool upgrade -v”; if they differ, add “-o version=[value]” to the zpool create command, as shown in the example after this list.
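
For example, the version of the original pool can be compared with the versions supported by the booted environment as follows (this assumes the pool properties were saved to /backup/rpool.zpool.props during the backup, as suggested above):

# zpool upgrade -v
# grep version /backup/rpool.zpool.props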

1. In this example, disk c3t1d0 has an SMI label, with slice 0 (c3t1d0s0) using the entire capacity of the disk. Change the controller and target numbers accordingly.

# zpool create -fo altroot=/var/tmp/rpool -o cachefile=/etc/zfs/zpool.cache -m legacy rpool c3t1d0s0
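
The slice layout and label type can be verified beforehand with prtvtoc; if the disk carries an EFI label, relabel it with “format -e” first:

# prtvtoc /dev/rdsk/c3t1d0s0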

2. Ensure the dump files are available for reading. If they exist on tape, a possible location would be /dev/rmt/0n; in this example, however, the dump files are made available by mounting the backup filesystem from an NFS server.
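
For example, while booted from the installation media the backup filesystem could be mounted as follows (the server name “backupserver” is a placeholder; adjust the mount point if / is not writable in the miniroot):

# mkdir /backup
# mount -F nfs backupserver:/backup /backup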

3. Once the dump files are available, restore the filesystems that make up the root pool. It is important to restore these in the correct hierarchical order:

# zfs receive -Fd rpool < /backup/rpool.dump
# zfs receive -Fd rpool < /backup/rpool.ROOT.dump
# zfs receive -Fd rpool < /backup/rpool.ROOT.s10u7.dump
# zfs receive -Fd rpool < /backup/rpool.export.dump
# zfs receive -Fd rpool < /backup/rpool.export.home.dump

This will restore the filesystems, but remember that an rpool also has dump and swap devices backed by ZFS volumes (zvols), which were deliberately excluded from the backup. These need to be created manually; adjust the sizes of the dump and swap volumes according to the configuration of the system being restored.
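
A minimal sketch of recreating the volumes (the sizes here are placeholders; the -b blocksize on the swap volume is normally the system page size, 8 KB on SPARC and 4 KB on x86):

# zfs create -V 2G -b 8k rpool/swap
# zfs create -V 1G rpool/dump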

4. To make the disk ZFS-bootable, a boot block must be installed on SPARC, or the GRUB stages on x86, so pick one of the following according to the platform being restored:

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c3t1d0s0    (SPARC)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t1d0s0                   (x86)

5. Once the boot block has been installed, it is necessary to set the bootable dataset/filesystem within this rpool. To do this, run the following zpool command, which makes rpool/ROOT/s10u7 the bootable dataset:

# zpool set bootfs=rpool/ROOT/s10u7 rpool

6. Set the failmode property of the rpool to continue. This differs from “data” zpools, which use wait by default, so it is important to set this correctly:

# zpool set failmode=continue rpool

7. Check whether canmount is set to noauto; if it is not, set it:

# zfs get canmount rpool/ROOT/s10u7
# zfs set canmount=noauto rpool/ROOT/s10u7

8. Temporarily set canmount to noauto on the following filesystems, to prevent them from being mounted when the mountpoint property is changed in the next step:

# zfs set canmount=noauto rpool
# zfs set canmount=noauto rpool/export

9. Set the mountpoint properties for the various filesystems:

# zfs set mountpoint=/ rpool/ROOT/s10u7
# zfs set mountpoint=/rpool rpool
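
At this point it is worth confirming the layout before the first boot (an optional sanity check):

# zfs list -r -o name,mountpoint,canmount rpool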

10. Set the dump device correctly:

# dumpadm -d /dev/zvol/dsk/rpool/dump
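
Once the restored system boots, it is also worth confirming that /etc/vfstab still contains an entry for the recreated swap zvol; a typical line looks like the following:

/dev/zvol/dsk/rpool/swap   -   -   swap   -   no   -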
