# Partition Expansion on a Live Proxmox VM - From 3GB to 128GB Without Data Loss
## The Problem
Tantive-III - my AI/ML compute VM on Node-A (Millennium Falcon) - was provisioned with a 128GB virtual disk via ProxMenux, but the guest OS only saw 3GB. The GPT partition table was created for the original smaller image and never expanded to fill the virtual disk.
This is one of those problems that doesn't announce itself until you try to install NVIDIA drivers and apt tells you there's no space left on device.
## The Environment

- **Host:** Node-A (FCM2250 / Millennium Falcon)
- **Hypervisor:** Proxmox VE
- **VM ID:** 201 (Tantive-III)
- **Guest OS:** Debian 13.3 (Trixie)
- **Storage:** fast-lvm (LVM-thin on NVMe)
- **Disk:** 128GB allocated, 3GB usable
The VM config showed the full allocation:
```
scsi0: fast-lvm:vm-201-disk-1,discard=on,size=128G,ssd=1
```
But inside the guest:
```
root@tantive-iii:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       2.8G  2.4G  240M  91% /
```
## The Fix - From the Proxmox Host
Resizing the root partition from inside the running guest is risky at best, and the stale GPT backup header gets in the way of in-guest tools. The clean fix happens at the hypervisor level, working directly against the LVM-thin volume.
### Step 1: Shut Down the VM

```
qm stop 201
```
### Step 2: Map the LVM Volume Partitions

`kpartx` creates device mapper entries for each partition inside the LVM logical volume:

```
kpartx -av /dev/vg-fast/vm-201-disk-1
```
Output:

```
GPT:Primary header thinks Alt. header is not at the end of the disk.
GPT:Alternate GPT header not at the end of the disk.
GPT: Use GNU Parted to correct GPT errors.
add map vg--fast-vm--201--disk--1p1 (252:11): 0 6027264 linear 252:10 262144
add map vg--fast-vm--201--disk--1p14 (252:12): 0 6144 linear 252:10 2048
add map vg--fast-vm--201--disk--1p15 (252:13): 0 253952 linear 252:10 8192
```
The GPT warnings are expected. GPT keeps a backup copy of its header in the last sector of the disk; the table was written for a smaller disk, so that backup is no longer where the end of the disk actually is.
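A side note on those double-dashed device names: device-mapper escapes each hyphen in the VG and LV names by doubling it, then joins the two with a single hyphen. The mapping can be reproduced in plain shell:

```shell
# Device-mapper doubles every "-" inside the VG and LV names, then joins
# them with a single "-", producing the names kpartx printed above.
vg="vg-fast"
lv="vm-201-disk-1"
echo "/dev/mapper/$(echo "$vg" | sed 's/-/--/g')-$(echo "$lv" | sed 's/-/--/g')"
```

This is why `/dev/vg-fast/vm-201-disk-1` shows up under `/dev/mapper/` as `vg--fast-vm--201--disk--1`.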
### Step 3: Fix the GPT Table and Expand the Partition

```
parted /dev/mapper/vg--fast-vm--201--disk--1
```
Parted immediately detects the mismatch:

```
Warning: Not all of the space available to /dev/mapper/vg--fast-vm--201--disk--1
appears to be used, you can fix the GPT to use all of the space
(an extra 262144000 blocks) or continue with the current setting?
Fix/Ignore?
```

Type `Fix`.
Then resize partition 1 to fill the disk:
```
(parted) resizepart 1 100%
```

The `100%` tells parted to extend partition 1 to the end of the available space.
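As a sanity check, the "extra 262144000 blocks" in parted's warning is exactly the 128 GiB disk minus the space the original image used - parted's "blocks" here are 512-byte sectors, and this assumes the original image was exactly 3 GiB:

```shell
# Reproduce parted's "extra 262144000 blocks" figure (512-byte sectors).
disk_sectors=$((128 * 1024 * 1024 * 1024 / 512))  # 128 GiB virtual disk
old_sectors=$((3 * 1024 * 1024 * 1024 / 512))     # 3 GiB original image
echo $((disk_sectors - old_sectors))
```

The numbers lining up is a nice confirmation that nothing else is claiming space on the disk.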
### Step 4: Check and Resize the Filesystem
```
# Filesystem check (will find and fix block count mismatches)
e2fsck -f /dev/mapper/vg--fast-vm--201--disk--1p1
```
Output:

```
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong (274355, counted=103984).
Fix<y>?
```
Type `y` - this repairs the superblock's free-block accounting. The fresh `e2fsck -f` pass also matters in its own right: `resize2fs` refuses to grow an unmounted filesystem that hasn't been checked since its last mount.
Then resize the ext4 filesystem:

```
resize2fs /dev/mapper/vg--fast-vm--201--disk--1p1
```
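With no explicit size argument, `resize2fs` grows the filesystem to fill the partition. Assuming the default 4 KiB ext4 block size, the expected new block count works out to:

```shell
# Partition 1 runs from sector 262144 (per the kpartx output) to the end of
# the 128 GiB disk; divide by 8 to convert 512-byte sectors to 4 KiB blocks.
p1_sectors=$((128 * 1024 * 1024 * 1024 / 512 - 262144))
echo $((p1_sectors / 8))   # expected ext4 block count after the resize
```

If the block count `resize2fs` reports is wildly different, the partition didn't actually expand and it's worth re-running `parted`'s print before booting the VM.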
### Step 5: Clean Up and Boot

```
# Remove the device mapper entries
kpartx -dv /dev/vg-fast/vm-201-disk-1

# Start the VM
qm start 201
```
### Step 6: Verify Inside the Guest

```
ssh root@192.168.1.201
df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       126G  2.4G  118G   2% /
```
The full 128GB allocation is now usable (126G after filesystem overhead). Zero data loss. The VM didn't even know anything happened - it just woke up with more space.
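Where the "missing" space goes: partition 1 starts at sector 262144 (per the earlier `kpartx` output), so the partition itself is just under 128 GiB, and ext4 metadata plus reserved blocks account for the rest of the gap down to `df`'s 126G. A quick back-of-the-envelope check on the partition size:

```shell
# Partition size in whole GiB: the 128 GiB disk minus the 262144-sector
# start offset, converted back from 512-byte sectors.
disk_sectors=$((128 * 1024 * 1024 * 1024 / 512))
p1_sectors=$((disk_sectors - 262144))
echo "$((p1_sectors * 512 / 1024 / 1024 / 1024)) GiB"
```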
## Why This Happens
ProxMenux (and many Proxmox VM creation tools) allocate the virtual disk at the requested size but install the OS using a minimal partition layout. The guest's partition table reflects the installer's defaults, not the full virtual disk. If the installer was built for a 4GB image, you get a 4GB partition on a 128GB disk.
This is the same problem you'd hit with cloud images on AWS or Azure - the AMI ships with a small root partition and you're expected to expand it post-launch.
## Key Commands Reference

| Step | Command | Purpose |
|---|---|---|
| Map partitions | `kpartx -av /dev/vg-fast/vm-201-disk-1` | Expose guest partitions to the host |
| Fix GPT | `parted` → `Fix` | Correct the GPT header for the full disk |
| Expand partition | `resizepart 1 100%` | Fill available space |
| Check filesystem | `e2fsck -f /dev/mapper/...p1` | Verify and fix block counts |
| Resize filesystem | `resize2fs /dev/mapper/...p1` | Grow ext4 to match partition |
| Clean up | `kpartx -dv /dev/vg-fast/vm-201-disk-1` | Remove device mapper entries |
## What I'd Do Differently
In hindsight, the smarter move is to check `df -h` immediately after first boot on any new VM and expand before installing anything. The NVIDIA driver install failing due to disk space is what surfaced this - if I'd caught it earlier, I wouldn't have needed to shut down and fix it from the host.
For future VM provisioning, I now run this as part of my post-install checklist:

```
# Inside the guest - check if partition matches allocated disk
lsblk
df -h /

# If they don't match, expand from the host before proceeding
```
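That eyeball check can also be scripted. Here's a rough sketch of flagging the mismatch automatically, fed sample byte counts in the shape `lsblk -bno NAME,SIZE` would print them - the device names, sizes, and the 90% threshold are all illustrative:

```shell
# Flag a root partition that is much smaller than its backing disk.
# Sample data from a VM in this state: 128 GiB disk, ~2.9 GiB partition.
sample="sda  137438953472
sda1 3085893632"

echo "$sample" | awk '
  NR == 1 { disk = $2 }                 # first line: whole-disk size in bytes
  NR == 2 && $2 < disk * 0.9 {          # second line: partition size
    print "partition smaller than disk - expand it"
  }'
```

In a real checklist script, the `sample` variable would be replaced by a live `lsblk -bno NAME,SIZE /dev/sda` call.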
Related: Post 018 - GPU Passthrough on Proxmox covers the full Tantive-III setup including the NVIDIA driver install that prompted this expansion.