XFS online enlargement

Nov 26, 2020

Although the storage attached to a VM instance looks fixed, it doesn't have to stay that way. If a database is getting close to its storage limit, we can enlarge the disk on the fly thanks to the flexibility of XFS.

We will go through two scenarios, both using Couchbase (because it's easy to scale) as an emulated database workload on a mounted XFS disk on Google Cloud Platform. The first scenario is a single-node Couchbase cluster; the second is a multi-node Couchbase cluster where the data is rebalanced between the nodes.

Single-node cluster enlargement

I won’t go into details about the creation of the cloud instances. This is the relevant part of the REST response describing the attached disk:

"autoDelete": true,
"boot": false,
"deviceName": "couchbase-node1-storage",
"diskSizeGb": "200",
"index": 1,
"interface": "SCSI",
"kind": "compute#attachedDisk",
"mode": "READ_WRITE",
"source": "projects/.../couchbase-node1-storage",
"type": "PERSISTENT"

Once we have the instance we can SSH and start the setup of the attached disk.

$ fdisk -l
Disk /dev/sda: 30 GiB, 32212254720 bytes, 62914560 sectors
Disk /dev/sdb: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
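As a quick sanity check, the numbers fdisk reports are consistent with each other: the sector count times the 512-byte logical sector size gives the byte count (plain bash arithmetic):

```shell
# 419430400 sectors x 512 bytes/sector = 214748364800 bytes = 200 GiB
echo $(( 419430400 * 512 ))                       # bytes
echo $(( 419430400 * 512 / 1024 / 1024 / 1024 ))  # GiB
```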

We can see that /dev/sdb is a blank disk, so we have to create a partition on it. Start fdisk on the device, press n for a new partition, accept the defaults (press Enter about four times), then press w to write the changes to disk.

$ fdisk /dev/sdb
Command (m for help): n # means new
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p):
Partition number (1-4, default 1):
First sector (2048-419430399, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-419430399, default 419430399):
Created a new partition 1 of type 'Linux' and of size 200 GiB.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Let’s run fdisk -l again to see what changed.

$ fdisk -l
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 419430399 419428352 200G 83 Linux

Now we have a partition, but it doesn’t have a filesystem yet. You can verify this with df -h: /dev/sdb1 shouldn’t show up there.

$ mkfs.xfs /dev/sdb1

This will print out the new filesystem’s metadata. Since I’m not an expert in reading filesystem metadata, I’ll take that as a good sign and move forward. Note that the output of fdisk -l and df -h didn’t change. That’s because the new filesystem isn’t attached anywhere yet: we have to mount it. Mounting attaches the newly created filesystem to a specific location in our Unix file tree.

$ mkdir /opt/couchbase
$ mount /dev/sdb1 /opt/couchbase
$ df -h
/dev/sdb1 200G 1.5G 199G 1% /opt/couchbase
$ cat /proc/mounts
/dev/sdb1 /opt/couchbase xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
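One caveat worth mentioning: a mount done by hand like this won’t survive a reboot. A sketch of persisting it via /etc/fstab (the UUID below is a placeholder; read the real one from blkid):

```shell
# Find the partition's UUID (more stable than /dev/sdb1, which can change)
blkid /dev/sdb1
# Append an entry to /etc/fstab (replace the placeholder UUID with the real one)
echo 'UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /opt/couchbase xfs defaults,nofail 0 2' >> /etc/fstab
# Verify the new entry parses and mounts cleanly
mount -a
```

The nofail option keeps the instance bootable even if the data disk is detached later.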

So we have an XFS filesystem mounted at Couchbase’s base directory. Let’s quickly install Couchbase and set it up. (I’m using Couchbase 6.6; there is an official guideline for setting up a new cluster.)

$ wget https://packages.couchbase.com/releases/6.6.0/couchbase-server-community-6.6.0-centos8.x86_64.rpm
$ yum install -y couchbase-server-community-6.6.0-centos8.x86_64.rpm
$ systemctl start couchbase-server
$ systemctl status couchbase-server
couchbase-server.service - Couchbase Server
Loaded: loaded (/usr/lib/systemd/system/couchbase-server.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2020-11-26 10:29:06 UTC; 16s ago

Cool, we have a couchbase node pointing to the mounted filesystem.

Disk space on Couchbase node

I started loading data into it. We can watch the disk usage grow.

$ watch df -h /dev/sdb1
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 200G 2.3G 198G 2% /opt/couchbase

It would take a long time to fill up 200 G of disk space, so let’s simply declare the storage “full” and do an online disk enlargement. Go to Disks on the GCP dashboard, pick couchbase-node1-storage, and increase the size of the disk. If you are still watching df -h, you won’t see any change yet. That’s because we still have to repartition the disk and then use the xfs_growfs tool.

To watch everything at once, I installed tmux.

Tmux session to watch the changes

As you can see on the right side, I’m watching df -h of the filesystem mounted at /opt/couchbase and the systemctl status of couchbase-server.

$ fdisk /dev/sdb
d # delete the existing partition on the disk
n # new partition
# accept the defaults everywhere
Partition #1 contains a xfs signature. # WHO CARES! (answer No, we want to keep the data)
w # write the changes
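As an aside, the delete-and-recreate dance above can usually be replaced by growpart from the cloud-utils-growpart package, which extends a partition in place. A sketch, assuming the package is available on your distro:

```shell
# Extend partition 1 of /dev/sdb to fill the newly enlarged disk
yum install -y cloud-utils-growpart
growpart /dev/sdb 1
# The filesystem still has to be grown separately afterwards:
xfs_growfs /dev/sdb1
```

Either way, the partition change alone doesn’t touch the filesystem; xfs_growfs is always the second step.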

After creating the new partition I waited 40 seconds just to see whether it would crash anything or not. Surprisingly, NOTHING:

  • The Go binary kept loading data into Couchbase.
  • The Couchbase Server stayed alive.
  • The disk usage kept growing.
  • The number of items in the bucket kept increasing.

And all of this while I repartitioned the underlying disk. At this point df -h still shows me 200G of disk size, so let’s grow the filesystem:

$ xfs_growfs /dev/sdb1

Nothing crashed this time either, and df -h now shows the right disk size, 250G. The Couchbase Disk Usage tab reflected the increase as well.

Disk space on Couchbase node after enlargement
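If you want a second opinion besides df -h, the XFS tools report the filesystem geometry directly. A quick check, run against the mounted filesystem:

```shell
# Block count x block size should now match the 250G partition
xfs_info /opt/couchbase
df -h /opt/couchbase
```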

Multi-node cluster enlargement

The second scenario has two nodes. The first node already exists and is still accepting data from the service; it will keep doing so throughout. The second node will join the cluster created by the first node; then I will repartition the disk under the second node and click rebalance. If anything can break a cluster, this process will.

I won’t go through the setup process but jump to the point where I’m doing the repartition.

Second node connected to the cluster

Once the node has joined the cluster we can do the repartition. I’m watching the same tmux layout on the second node.

$ fdisk /dev/sdb
Partition #1 contains a xfs signature. # WHO CARES (again, answer No)

Once fdisk had finished, I clicked rebalance in the top right corner of the Couchbase Server dashboard. When the rebalance finished, I ran the xfs_growfs command. To measure the time spent on the rebalance, I ran the date command before and after it. Based on the output, the rebalance took about two minutes.

$ date
Thu Nov 26 11:37:02 UTC 2020
$ date
Thu Nov 26 11:39:05 UTC 2020
$ xfs_growfs /dev/sdb1

Nothing crashed this time either.
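For the record, the dashboard clicks can also be scripted with the couchbase-cli tool that ships with the server. A sketch of triggering and watching the rebalance from the shell (host and credentials below are placeholders):

```shell
# Kick off the rebalance from the CLI instead of the dashboard
/opt/couchbase/bin/couchbase-cli rebalance \
  --cluster 127.0.0.1:8091 --username Administrator --password password
# Poll its progress
/opt/couchbase/bin/couchbase-cli rebalance-status \
  --cluster 127.0.0.1:8091 --username Administrator --password password
```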

Process of the rebalance


XFS, like most modern filesystems, is flexible enough to grow onto an enlarged underlying partition even while a heavy write workload is running. With this in place we can increase the disks under our databases online, and under pressure.