ZFS pools do not support what you may know from hardware RAID as online RAID-level expansion.
There are two ways to increase the capacity of a ZFS pool: add more disks to the pool (e.g. another vdev of 3 disks in RAIDZ1), or replace all the existing disks with larger ones... the latter is the method discussed here today. These instructions assume that you have followed my installation guide for ZFS. If you have varied from that guide at all, you may need to adjust the commands below. I am not responsible for any data loss caused by following these instructions!
NOTE: you can only increase the size of a mirror or raidz1/2/3 pool using this method.
I am replacing the 4 x 3TB disks in my storage array with 4 x 4TB disks. This is a time-consuming process and is risky with a mirror or RAIDZ1 pool, as you have to degrade the array! If you do not have full backups of the contents, you proceed at your own risk. (If you're using RAIDZ2 then you're just at reduced resilience, and a little safer.)
First, we want to make sure that the autoexpand option is enabled. This can be set at any time with the following command:
zpool set autoexpand=on zroot
Next, check the status of your ZFS pool to make sure it is healthy... here's the command and the output from my array:
# zpool status
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 6h38m with 0 errors on Thu Nov  8 16:06:21 2012
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    ada0p2  ONLINE       0     0     0
	    ada1p2  ONLINE       0     0     0
	    ada2p2  ONLINE       0     0     0
	    ada3p2  ONLINE       0     0     0

errors: No known data errors
As you can see, my array consists of 4 members (ada0p2 through ada3p2) and is currently healthy. We're good to proceed!
First we shut down the machine and replace one of the disks... I prefer to start with the last disk and work backwards, so I'm going to replace ada3. Once the disk is replaced, start the machine up again.
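If you want to be certain you pull the right physical disk, it can help to note its serial number first. A small sketch of this step using standard FreeBSD commands (camcontrol identify works for ATA/SATA disks; adjust if yours differ):

```shell
# Note the serial number of ada3 so you can match it to the physical disk.
camcontrol identify ada3 | grep -i serial

# Power the machine off cleanly before swapping the disk.
shutdown -p now
```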
Now we can confirm that the disk is missing (and confirm which one) as follows... (command and output listed):
# zpool status
  pool: zroot
 state: DEGRADED
status: One or more devices has been removed by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Online the device using 'zpool online' or replace the device with
	'zpool replace'.
  scan: scrub repaired 0 in 6h38m with 0 errors on Thu Nov  8 16:06:21 2012
config:

	NAME                     STATE     READ WRITE CKSUM
	zroot                    DEGRADED     0     0     0
	  raidz1-0               DEGRADED     0     0     0
	    ada0p2               ONLINE       0     0     0
	    ada1p2               ONLINE       0     0     0
	    ada2p2               ONLINE       0     0     0
	    5075744959138230672  REMOVED      0     0     0  was /dev/ada3p2

errors: No known data errors
You can see that my ada3p2 device is now missing and the array is degraded. It will run slower while degraded, but there is no data loss unless another disk fails during this long process.
Now we need to partition the newly installed ada3 disk so that it is bootable and contains a large ZFS partition for us to use... commands as follows:
gpart create -s gpt ada3
gpart add -s 128 -t freebsd-boot ada3
gpart add -t freebsd-zfs -l disk3 ada3
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada3
The above creates a GPT partition table, adds a small boot-loader partition, and gives the remainder of the disk to ZFS. It then installs the boot loader into the small partition.
We are now ready to re-add the disk into the ZFS pool. This will trigger an auto-resilver of the disks (a rebuild of the disk)...
zpool replace zroot ada3p2 /dev/ada3p2
This command takes a little while to process, so be patient. The resilver stage can take a long time (it depends on how much data is on the pool, how many disks are in it, and how fast you can read from them!)
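If you want to keep an eye on progress without retyping the command, a simple polling loop does the job (just a convenience sketch; the 5-minute interval is arbitrary):

```shell
# Print the resilver progress lines every 5 minutes; Ctrl-C to stop.
while true; do
    zpool status zroot | grep -E 'scan:|resilvered'
    sleep 300
done
```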
You can check on the status of the rebuild with the following command:
zpool status zroot
Here's an example output so you know what to look for:
  pool: zroot
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Wed Nov 14 18:34:57 2012
	16.8G scanned out of 6.59T at 116M/s, 16h30m to go
	4.19G resilvered, 0.25% done
config:

	NAME                       STATE     READ WRITE CKSUM
	zroot                      DEGRADED     0     0     0
	  raidz1-0                 DEGRADED     0     0     0
	    ada0p2                 ONLINE       0     0     0
	    ada1p2                 ONLINE       0     0     0
	    ada2p2                 ONLINE       0     0     0
	    replacing-3            REMOVED      0     0     0
	      5075744959138230672  REMOVED      0     0     0  was /dev/ada3p2
	      ada3p2               ONLINE       0     0     0  (resilvering)

errors: No known data errors
Once the disk has been fully reconstructed, the array will be healthy again (like at the start), and you can move on to the next disk. Repeat until all disks have been replaced and resilvered.
You will only see the new space once all the disks have finished resilvering.
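To confirm the pool actually grew, check the SIZE and EXPANDSZ columns of zpool list. If autoexpand was not enabled when the last resilver finished, you can claim the new space manually with zpool online -e (both are standard zpool subcommands; the device names below match my array, adjust for yours):

```shell
# Show pool size; EXPANDSZ lists space not yet claimed by the pool.
zpool list zroot

# Force expansion onto the new space if it did not happen automatically.
zpool online -e zroot ada0p2 ada1p2 ada2p2 ada3p2
```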
I will note again that your array is vulnerable while in a mirror or RAIDZ1 configuration during this process. If a second disk fails during the resilver of any of the disks, you will LOSE your data.