KVM | Resize Virtual Machine using LVM

Hello there.

Today I want to take a look at how to resize a virtual machine that uses LVM for its virtual disks. As I mentioned in an earlier post, I primarily use LVM volumes as the backing storage for the VMs on my servers. This makes resizing the disks a bit more of a process, though.

The server I am using as an example is running CentOS 7, but this should not really matter; the steps should be nearly identical on other distributions.

Let’s take a look.

List VMs and LVs

First, let’s take a look at what we are working with. As an example, I will be using my router/server, which runs Nextcloud as a VM.

List KVM pools and volumes

The first command shows the pools.

kvm-router :: ~ » sudo virsh pool-list
 Name                 State      Autostart 
-------------------------------------------
 default              active     yes       
 ISO                  active     yes       
 Virtual_Machines     active     yes       

Now that we have the pool names, we can list the volumes that are attached to the VMs. All of my VMs are in the “Virtual_Machines” pool.

kvm-router :: ~ » sudo virsh vol-list Virtual_Machines
 Name                 Path                                    
------------------------------------------------------------------------------
 nextcloud            /dev/VM/nextcloud                       
 opnsense             /dev/VM/opnsense                        
 paperless            /dev/VM/paperless                       
 pihole               /dev/VM/pihole

Good. We now know which disk we need to extend: /dev/VM/nextcloud in my case.
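
If you are ever unsure which volume belongs to which VM, virsh can also list the disks attached to a single domain. A quick optional check (output omitted):

kvm-router :: ~ » sudo virsh domblklist nextcloud    # target device and source path for each disk of the domain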

List LVM VGs and LVs

I also want to know whether there is enough free space left in the volume group to actually resize the VM.

kvm-router :: ~ » sudo vgs
  VG              #PV #LV #SN Attr   VSize    VFree  
  EXTERNAL-BACKUP   1   1   0 wz--n- <465.76g      0 
  VM                1   4   0 wz--n- <180.00g <94.00g
  centos            1   2   0 wz--n-  <33.88g   4.00m

As we can see, the “VM” volume group has just under 94GB of free space left. I only need 3GB for the VM right now, so that is more than enough.
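
If you only care about a single volume group, you can also limit and trim the output. A small sketch:

kvm-router :: ~ » sudo vgs VM -o vg_name,vg_size,vg_free    # size and free space of the "VM" volume group only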

We don’t strictly need the following information, but let’s take a look at the logical volumes anyway.

kvm-router :: ~ » sudo lvs
  LV         VG              Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  EXT-BACKUP EXTERNAL-BACKUP -wi-a----- <465.76g                                                    
  nextcloud  VM              -wi-ao----   40.00g                                                    
  opnsense   VM              -wi-ao----    9.00g                                                    
  paperless  VM              -wi-ao----   20.00g                                                    
  pihole     VM              -wi-ao----   17.00g                                                    
  root       centos          -wi-ao----   30.00g                                                    
  swap       centos          -wi-a-----   <3.88g                                                    

Nextcloud currently uses 40GB. We will extend it to 43GB.

Resizing the LV on the Host

Now we can begin the resizing process. First, we grow the LV on the host; after this, we need to inform the VM that the disk size actually changed. Once that’s done, we can SSH into the VM, grow the partition and the LVM volumes inside the guest, and finally resize the filesystem.

Growing the LV

I will only give 3 additional GB to the VM. I tend to keep the increments small, since growing a filesystem is easy, while shrinking one is a hassle, if it is possible at all.

We begin with the logical volume. For the path, we use the information we gathered from the “virsh vol-list” command.

kvm-router :: ~ » sudo lvextend -L +3G /dev/VM/nextcloud
  Size of logical volume VM/nextcloud changed from 40.00 GiB (10240 extents) to 43.00 GiB (11008 extents).
  Logical volume VM/nextcloud successfully resized.
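
As a side note, “lvextend” also accepts an absolute size instead of a relative one. In this case the following would have been equivalent:

kvm-router :: ~ » sudo lvextend -L 43G /dev/VM/nextcloud    # grow the LV to exactly 43 GiB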

Inform the VM

Now we need to inform the VM; otherwise, it won’t know about the new disk size. A reboot of the VM should also do the trick, though I have never tried it.

The “domain” is the name of the VM; we could also use the ID that we get with the “sudo virsh list” command. Note that “virsh blockresize” expects the new total size of the disk, not the amount to add.
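
If you do not remember the exact domain name or ID, listing the domains first helps:

kvm-router :: ~ » sudo virsh list --all    # names, IDs and states of all defined domains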

kvm-router :: ~ » sudo virsh blockresize --domain nextcloud --path /dev/VM/nextcloud --size 43G
Block device '/dev/VM/nextcloud' is resized
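
To double-check from the host that the domain now sees the larger disk, virsh can also report the block device capacity (output omitted):

kvm-router :: ~ » sudo virsh domblkinfo nextcloud /dev/VM/nextcloud    # capacity, allocation and physical size of the disk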

Let us take another look at the logical volume size.

kvm-router :: ~ » sudo lvs
  LV         VG              Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  EXT-BACKUP EXTERNAL-BACKUP -wi-a----- <465.76g                                                    
  nextcloud  VM              -wi-ao----   43.00g                                                    
  opnsense   VM              -wi-ao----    9.00g    
....                                                

43GB. Great. Now we can log in to the VM and resize the filesystem.

Resizing the LV and filesystem within the VM

SSH into the VM and take a look at the disk size.

fedora-kde :: ~ » ssh nextcloud
nextcloud :: ~ » lsblk
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                    11:0    1 1024M  0 rom  
vda                   252:0    0   43G  0 disk 
├─vda1                252:1    0    1G  0 part /boot
└─vda2                252:2    0   39G  0 part 
  ├─cl_nextcloud-root 253:0    0   37G  0 lvm  /
  └─cl_nextcloud-swap 253:1    0    2G  0 lvm  

We can see that the disk now shows 43GB. Since I am using LVM inside the VM as well, I have to grow the second partition (vda2) first, then the physical volume, and finally the logical volume.

Let’s start with that.

Recreating the partition and informing LVM

The next few steps will “delete” and recreate the partition. Only the partition table entry is rewritten, not the data on the partition itself, so this sounds way more dangerous than it actually is. But still, you should always have backups.
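
Speaking of backups: before touching the partition table, it does not hurt to dump it to a file so it could be restored with “sfdisk” if something goes wrong. A small sketch, the file name is up to you:

nextcloud :: ~ » sudo sfdisk -d /dev/vda > vda_partition_table.bak    # dump the partition table; restore with "sfdisk /dev/vda < vda_partition_table.bak"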

This will be a bit hard to read; the inputs are the values typed after each fdisk prompt, such as “Command (m for help):”.

nextcloud :: ~ » sudo fdisk /dev/vda
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/vda: 43 GiB, 46170898432 bytes, 90177536 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x33f2a5cd

Device     Boot   Start      End  Sectors Size Id Type
/dev/vda1  *       2048  2099199  2097152   1G 83 Linux
/dev/vda2       2099200 83886079 81786880  39G 83 Linux

Delete the “root” partition. This is “vda2” in my case. Make a note of its “Start” sector (2099200 here); we will need it for the recreation, although fdisk normally suggests the correct value automatically.

Command (m for help): d
Partition number (1,2, default 2): 2

Partition 2 has been deleted.

Next, recreate the partition.

As you can see below, the default “first sector” is correctly set to 2099200, and the default “last sector” is larger than before, which is exactly what we want.

Do not remove the LVM2 signature.

Command (m for help): n
Partition type
   p   primary (1 primary, 0 extended, 3 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (2-4, default 2): 2
First sector (2099200-90177535, default 2099200): 
Last sector, +sectors or +size{K,M,G,T,P} (2099200-90177535, default 90177535): 

Created a new partition 2 of type 'Linux' and of size 42 GiB.
Partition #2 contains a LVM2_member signature.

Do you want to remove the signature? [Y]es/[N]o: N

Ok. Let’s take another look at the partitions. If everything looks good (which it does in this case), we can write the new partition table to disk with “w”.

Keep in mind that until you write the changes to disk, nothing has actually happened, so you can still back out by typing “q” to quit “fdisk” without saving.

Command (m for help): p

Disk /dev/vda: 43 GiB, 46170898432 bytes, 90177536 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x33f2a5cd

Device     Boot   Start      End  Sectors Size Id Type
/dev/vda1  *       2048  2099199  2097152   1G 83 Linux
/dev/vda2       2099200 90177535 88078336  42G 83 Linux

Command (m for help): w
The partition table has been altered.
Syncing disks.
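
In my case the kernel re-read the partition table on its own (“Syncing disks.”). If fdisk instead warns that the device is busy, you can ask the kernel to pick up the new partition size manually, for example:

nextcloud :: ~ » sudo partx -u /dev/vda    # update the kernel's view of the partitions on /dev/vda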

Another look at the disks.

nextcloud :: ~ » lsblk
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                    11:0    1 1024M  0 rom  
vda                   252:0    0   43G  0 disk 
├─vda1                252:1    0    1G  0 part /boot
└─vda2                252:2    0   42G  0 part 
  ├─cl_nextcloud-root 253:0    0   37G  0 lvm  /
  └─cl_nextcloud-swap 253:1    0    2G  0 lvm  
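
As an alternative for next time: if the “cloud-utils-growpart” package is available in the VM, “growpart” can do the whole delete-and-recreate dance in a single, non-interactive step. A sketch of what that would look like:

nextcloud :: ~ » sudo growpart /dev/vda 2    # grow partition 2 of /dev/vda to the end of the disk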

Ok, now we can inform LVM about the new size.

nextcloud :: ~ » sudo pvresize /dev/vda2
  Physical volume "/dev/vda2" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized

nextcloud :: ~ » sudo pvs
  PV         VG           Fmt  Attr PSize   PFree
  /dev/vda2  cl_nextcloud lvm2 a--  <42.00g 3.00g

Just a few steps left. Resize the logical volume and the filesystem.

Resizing the LV

We should check the logical volumes first.

nextcloud :: ~ » sudo lvs
  LV   VG           Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root cl_nextcloud -wi-ao---- <37.00g                                                    
  swap cl_nextcloud -wi-a-----   2.00g                                                    

Extend the logical volume “root”. The parameter “-l +100%FREE” grows the logical volume by all the remaining free space in the volume group.

nextcloud :: ~ » sudo lvextend -l+100%FREE cl_nextcloud/root
  Size of logical volume cl_nextcloud/root changed from <37.00 GiB (9471 extents) to <40.00 GiB (10239 extents).
  Logical volume cl_nextcloud/root successfully resized.
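
As an aside, “lvextend” can also grow the filesystem in the same step via its “-r/--resizefs” option, which hands the job to “fsadm”. I prefer doing it separately, but a one-liner sketch would be:

nextcloud :: ~ » sudo lvextend -r -l +100%FREE cl_nextcloud/root    # grow the LV and resize the filesystem on it in one go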

Checking the logical volumes.

nextcloud :: ~ » sudo lvs
  LV   VG           Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root cl_nextcloud -wi-ao---- <40.00g                                                    
  swap cl_nextcloud -wi-a-----   2.00g                                                    

Resizing the filesystem

Ok, the last step. We need to resize the filesystem to actually use the new space. First, let’s check which filesystem we are using.

nextcloud :: ~ » lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
NAME                   SIZE TYPE FSTYPE      MOUNTPOINT
sr0                   1024M rom              
vda                     43G disk             
├─vda1                   1G part ext4        /boot
└─vda2                  42G part LVM2_member 
  ├─cl_nextcloud-root   40G lvm  xfs         /
  └─cl_nextcloud-swap    2G lvm  swap        

I am using xfs for the root filesystem, so “xfs_growfs” is the tool to use. Let’s resize it.
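
If the root filesystem were ext4 instead of xfs, “resize2fs” would be the tool of choice; a sketch, assuming the same logical volume:

nextcloud :: ~ » sudo resize2fs /dev/mapper/cl_nextcloud-root    # grow an ext2/3/4 filesystem to fill the LV (hypothetical here, this LV is xfs)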

nextcloud :: ~ » sudo xfs_growfs /
meta-data=/dev/mapper/cl_nextcloud-root isize=512    agcount=9, agsize=1113856 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=9698304, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 9698304 to 10484736
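
A quick look at the mounted filesystem confirms that the new space is actually usable:

nextcloud :: ~ » df -h /    # human-readable size, usage and free space of the root filesystem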

That’s it. We grew the virtual machine’s disk without any downtime.

Here is a list of the commands without the fluff.

Short list

// KVM Host
kvm-router :: ~ » sudo lvextend -L +3G /dev/VM/nextcloud
kvm-router :: ~ » sudo virsh blockresize --domain nextcloud --path /dev/VM/nextcloud --size 43G

// Virtual Machine
nextcloud :: ~ » sudo fdisk /dev/vda
// Delete and recreate the partition with fdisk (keep the LVM2 signature)
nextcloud :: ~ » sudo pvresize /dev/vda2
nextcloud :: ~ » sudo lvextend -l+100%FREE cl_nextcloud/root
nextcloud :: ~ » sudo xfs_growfs /
