converting a qcow2 volume from thin to sparse, live #12741
Environment: ACS 4.22 with KVM on Ubuntu 24.04 and NFS primary/secondary storage pools.

Is there a live way to convert a VM's disks from thin to sparse? I've tried changing the volume's disk offering and migrating the VM to a new host and a new storage pool, but the disk is still referencing the original template in the domain definition. There's likely an unmanage+surgery+manage solution, but I'm hoping for something more native.
Replies: 2 comments
Nope, not possible IMHO. I guess it would work if you stop the instance, convert the qcow2 manually, and start the instance again (not tested, but I don't see any reason why this shouldn't work).
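A minimal offline sketch of that stop/convert/start idea, printed as a dry run rather than executed. The paths are placeholders, and `preallocation=metadata` is an assumption: it appears to be what the KVM plugin uses for CloudStack's "sparse" provisioning type, and a plain `qemu-img convert` drops the template backing file along the way, so verify both against your version before relying on this.

```shell
#!/bin/sh
# Dry-run sketch: prints the commands instead of running them.
# Paths are hypothetical; substitute your NFS primary mount and the
# volume's UUID-named qcow2 file. Stop the instance first.
SRC=/mnt/primary/thin-volume.qcow2        # thin volume, backed by the template
DST=/mnt/primary/thin-volume-flat.qcow2   # standalone sparse replacement

run() { printf '%s\n' "$*"; }             # change to: run() { "$@"; } to execute

# Flatten the backing chain into a standalone image;
# preallocation=metadata is assumed to match CloudStack's "sparse" type.
run qemu-img convert -f qcow2 -O qcow2 -o preallocation=metadata "$SRC" "$DST"
run mv "$DST" "$SRC"
```

Note this only changes the file on disk; the `provisioning_type` recorded in the CloudStack database is untouched.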
In case anyone stumbles on this and finds it useful, I ended up going with an unmanage+blockcopy+importunmanaged approach. A different iteration had me wanting to live migrate NFS volumes to RBD, so I made Codex write a script that, for a given VM UUID, unmanages the instance in CloudStack, live-copies each disk with blockcopy, and re-imports it as an unmanaged instance.
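A dry-run sketch of that unmanage, blockcopy, import sequence. Every identifier here is a placeholder, and the `cmk` (CloudMonkey) invocations are assumptions: they lean on the `unmanageVirtualMachine` and `importUnmanagedInstance` APIs, whose required parameters in your CloudStack version may differ.

```shell
#!/bin/sh
# Dry-run sketch: prints the commands instead of running them.
# VM UUID, domain name, cluster UUID, and destination path are all
# hypothetical; an RBD destination would need an xml payload for
# blockcopy rather than a plain path.
VM_UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
DOM=i-2-123-VM
CLUSTER_UUID=11111111-2222-3333-4444-555555555555
DST=/mnt/new-primary/$VM_UUID.qcow2

run() { printf '%s\n' "$*"; }   # change to: run() { "$@"; } to execute

# 1. Drop the instance from CloudStack's control; the libvirt domain
#    keeps running untouched.
run cmk unmanageVirtualMachine id="$VM_UUID"

# 2. Live-copy the disk, then pivot the running domain onto the copy.
run virsh blockcopy "$DOM" vda "$DST" --format qcow2 --wait --verbose --pivot

# 3. Re-import the instance so CloudStack tracks the new volume.
run cmk importUnmanagedInstance name="$DOM" clusterid="$CLUSTER_UUID"
```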
That should work similarly enough for the thin-to-sparse migration too; I just haven't gotten around to it yet. The main disadvantages are that Ubuntu 24.04's version of libvirt may or may not properly track zero blocks when using blockcopy (vs an offline convert), and I'm guessing usage-meter data tied to the VM's DB id might be inaccurate following the import.

I think adding this as a feature is in the realm of possible. Today, if you try to live migrate a KVM instance's volume while the VM is running, CloudStack throws an error that you must migrate the instance to another host simultaneously to refresh the domain XML (unless the destination is StorPool). Doing a simultaneous VM+storage migration works for NFS to NFS (changing the offering has no effect on thin vs sparse), but fails for NFS to RBD.

Running a blockcopy job with the appropriate XML payload for the disk and polling its completion feels like an appropriate solution. Doing it live in-place on the same storage pool might require a new volume UUID, perhaps tracked as an independent CloudStack volume with "reuse existing" for the job until the pivot succeeds. The job completion-percentage polling would be useful user feedback too.

In vSphere this would be similar to a Storage vMotion changing the datastore and/or the disk format (thin / thick+lazy / thick+eager). I don't have a frame of reference for how Xen would apply this capability.
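For the RBD direction, blockcopy can't take a plain destination path; the job needs a libvirt disk definition passed via the xml option. A sketch under assumed names, where the pool, image, monitor host, and auth secret are all placeholders:

```xml
<!-- Hypothetical destination disk, passed to virsh blockcopy via its
     xml option. Pool, image, monitor, and secret values are placeholders. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='cloudstack/new-volume-uuid'>
    <host name='ceph-mon.example' port='6789'/>
  </source>
  <auth username='cloudstack'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
</disk>
```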
A manual process looks like this for an unmanaged VM: `virsh blockcopy your_domain vda`

- An offline migration for NFS->RBD is possible, but offline for thin->sparse didn't seem to work in my experiment, with or without a pool migration (and the volume's original `provisioning_type` stored in the DB seems authoritative).
- Both should be doable live using blockcopy; I only went with an unmanage+manage to keep CloudStack sane.
- Unrelated to all of that, the Ceph storage driver doesn't seem to respect the thin/sparse/thick definitions, and in my experiments a sparse disk offering still results in parent/child RBD images.

It's very possible I'm going about it all wrong, but it works on my machine :)
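To check whether a copy actually dropped the template reference, the backing chain and the live domain's disk list can be inspected afterwards. A dry-run sketch with hypothetical names:

```shell
#!/bin/sh
# Dry-run sketch: prints the inspection commands instead of running them.
DOM=your_domain
VOL=/mnt/primary/new-volume.qcow2   # hypothetical post-copy image

run() { printf '%s\n' "$*"; }       # change to: run() { "$@"; } to execute

# A standalone sparse image should show no "backing file" entry here.
run qemu-img info --backing-chain "$VOL"

# And the running domain definition should point at the new path.
run virsh domblklist "$DOM"
```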