I've had enough real-life experience replacing drives in the ZFS pool in my home NAS that I feel comfortable sharing this information with the community.

A warning though:

You really shouldn't trust anything I'm about to write. At least, not on its own. If you're using ZFS it's probably because you care about your data. You therefore should not trust a stranger on the Intertoobs to tell you how to repair your storage pool, let alone a storage pool that's degraded and at heightened risk of failure!

Your authoritative sources for information on replacing drives should be the Solaris ZFS Administration Guide and the zpool(1m) manpage.
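
If you're on a Solaris or illumos box, the manpage is also available locally:

# man zpool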

Background

My ZFS pool is a set of striped mirrors that looks like this:

    NAME                       STATE     READ WRITE CKSUM
    data                       ONLINE       0     0     0
      mirror-0                 ONLINE       0     0     0
        c1t5000C5004D27DEB6d0  ONLINE       0     0     0
        c1t5000C5004B29E616d0  ONLINE       0     0     0
      mirror-1                 ONLINE       0     0     0
        c2t2d0                 ONLINE       0     0     0
        c2t3d0                 ONLINE       0     0     0
      mirror-2                 ONLINE       0     0     0
        c1t5000C5000D50F28Ed0  ONLINE       0     0     0
        c1t5000C5003EF0002Fd0  ONLINE       0     0     0
      mirror-3                 ONLINE       0     0     0
        c1t5000C50027A36A3Ad0  ONLINE       0     0     0
        c1t5000C5004E5B4C35d0  ONLINE       0     0     0
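
That listing is the config section of zpool status output; the pool itself is named data, so the full report comes from:

# zpool status data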

The instructions below may only be relevant for repairing mirror vdevs. I have not tested them against raidzX vdevs or researched whether they apply there.

My NAS also has unoccupied drive bays so I can insert replacement drives into the system without removing a drive first.

Scenario #1 - Drive Failing But Still Usable

In this scenario there's a drive that's failing but is still part of the pool, and ZFS can still read data from it. Maybe the drive is throwing SMART errors, or ZFS is counting errors against it; for whatever reason, it needs to be replaced.
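
If you want to see what ZFS itself is counting before pulling the drive, the standard status subcommands will show it (data is the pool name from the Background section):

# zpool status -x
# zpool status -v data

The first prints only pools that currently have problems; the second adds a list of any files affected by data errors to the usual per-device read/write/checksum counters.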

  1. Insert new HDD into an unoccupied drive bay (call this drive replacement)
  2. Attach replacement to the degraded vdev
  3. A resilver will start automatically. Wait for this to complete.
  4. Detach the failing drive from the pool (call this drive failing)
  5. Remove failing from the chassis

The attach command is:

# zpool attach <pool> <failing> <replacement>

And to detach:

# zpool detach <pool> <failing>
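
As a concrete sketch, suppose the failing drive is c2t2d0 from mirror-1 in the Background section and the new drive shows up as c2t4d0 (that device name is made up for this example):

# zpool attach data c2t2d0 c2t4d0    # c2t4d0 is a hypothetical device name

Watch zpool status data until the resilver reports complete, then:

# zpool detach data c2t2d0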

There is a slightly shorter way to do this. I typically use the discrete steps above because breaking out the "detach" action into its own step lets me verify the status of the pool before detaching the device. The shortcut attaches replacement, initiates a resilver, and detaches failing all on its own.

# zpool replace <pool> <failing> <replacement>
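
With the same made-up device names, the one-liner equivalent would be:

# zpool replace data c2t2d0 c2t4d0    # c2t4d0 is a hypothetical device name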

Scenario #2 - Drive has Totally Failed and Spare has Kicked In

In this scenario the drive has failed totally and is no longer accessible at the SCSI layer. ZFS detects this and kicks in the hot spare to backfill. The pool looks like this:

  mirror-0                   DEGRADED     0     0     0
    spare-0                  DEGRADED     0     0     0
      failed                 REMOVED      0     0     0
      hot_spare              ONLINE       0     0     0
    c1t5000C5004D27DEB6d0    ONLINE       0     0     0

  1. Insert new HDD into an unoccupied drive bay (call this drive replacement)
  2. Initiate a replacement of failed with replacement
  3. Wait for the automatic resilver to complete. Once complete, ZFS will automatically detach failed from the pool.
  4. Remove failed from the chassis

The replace command is:

# zpool replace <pool> <failed> <replacement>
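
As a concrete sketch, if the failed drive was c1t5000C5004B29E616d0 (the other half of mirror-0 in the Background section) and the new drive shows up as c2t5d0 (a made-up name), that would be:

# zpool replace data c1t5000C5004B29E616d0 c2t5d0    # c2t5d0 is a hypothetical device name

Once the resilver finishes, zpool status data should show the new drive in mirror-0 and the hot spare back in the spares list as AVAIL.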

This is the way I've always done it because it's very logical and straightforward. When all is done, the hot spare goes back to being a spare and the only change in the pool is that the vdev now has a new member drive in place of the failed drive.

Like Scenario #1, there is a bit of a shortcut here too. When the hot spare is activated to backfill for the failed drive, all data is resilvered onto the spare, thereby repairing that vdev. The zpool command reports it as "DEGRADED", but there are two functioning drives in the vdev, so the vdev is in fact redundant. In that case, instead of attaching a brand new drive to the vdev and going through another resilver, just leave the old hot spare as a permanent part of the mirror and add the new drive as the new hot spare.

  1. Insert new HDD into an unoccupied drive bay
  2. Detach failed from the pool (this will take the hot spare out of "spare" status and make it a permanent part of the mirror vdev)
  3. Add replacement to the pool as the new hot spare

The full sequence, with pool status along the way, looks like this:

# zpool status <pool>
      mirror-0                   DEGRADED     0     0     0
        spare-0                  DEGRADED     0     0     0
          failed                 REMOVED      0     0     0
          hot_spare              ONLINE       0     0     0
        c1t5000C5004D27DEB6d0    ONLINE       0     0     0

# zpool detach <pool> <failed>

# zpool status <pool>
      mirror-0                   ONLINE     0     0     0
        former_hot_spare         ONLINE     0     0     0
        c1t5000C5004D27DEB6d0    ONLINE     0     0     0

# zpool add <pool> spare <replacement>
# zpool status <pool>
      mirror-0                   ONLINE     0     0     0
        former_hot_spare         ONLINE     0     0     0
        c1t5000C5004D27DEB6d0    ONLINE     0     0     0
     spares
       replacement               AVAIL