Discussion:
[libvirt-users] Reintroduce "allocate entire disk" checkbox on virt-manager
Gionatan Danti
2018-06-18 20:05:09 UTC
Hi list,
on older virt-manager versions (ie: what shipped with RHEL 6), a
checkbox called "allocate entire disk" was selectable when configuring a
new virtual machine. When checked, the RAW disk image file was fully
allocated, generally via a fallocate() call. When unchecked, the disk
image was a sparse file, with on-demand space allocation.
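For reference, the two allocation strategies can be reproduced from the
shell (a sketch using coreutils/util-linux tools; file names and sizes
are illustrative):

```shell
# Fully allocated, as the old checkbox did (fallocate reserves blocks
# without writing zeroes, where the filesystem supports it; fall back
# to writing zeroes otherwise):
fallocate -l 100M full.raw 2>/dev/null || \
    dd if=/dev/zero of=full.raw bs=1M count=100 status=none

# Sparse: only the apparent size is set; blocks are allocated on demand:
truncate -s 100M sparse.raw

# Same apparent size, very different actual disk usage:
du -h --apparent-size full.raw sparse.raw
du -h full.raw sparse.raw
```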

On new virt-manager versions (ie: what ships with RHEL 7), the checkbox
is gone. This means that to create a sparse file from within the "new
vm" wizard, one is forced to use a Qcow2 file (selectable in the global
preferences). No sparse RAW images can be created within that wizard.

As a heavy consumer of RAW disk files, I would really like to have the
checkbox back, especially in RHEL/CentOS 7.x.
Do you plan to reintroduce it? For RHEL/CentOS, should I open a Bugzilla
ticket?
Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: ***@assyoma.it - ***@assyoma.it
GPG public key ID: FF5F32A8
Cole Robinson
2018-06-19 18:14:32 UTC
Post by Gionatan Danti
Hi list,
on older virt-manager versions (ie: what shipped with RHEL 6), a
checkbox called "allocate entire disk" was selectable when configuring a
new virtual machine. When checked, the RAW disk image file was fully
allocated, generally via a fallocate() call. When unchecked, the disk
image was a sparse file, with on-demand space allocation.
On new virt-manager versions (ie: what ships with RHEL 7), the checkbox
is gone. This means that to create a sparse file from within the "new
vm" wizard, one is forced to use a Qcow2 file (selectable in the global
preferences). No sparse RAW images can be created within that wizard.
As a heavy consumer of RAW disk files, I would really like to have the
checkbox back, especially in RHEL/CentOS 7.x
Do you plan to reintroduce it? For RHEL/CentOS, should I open a Bugzilla
ticket?
If you change the disk image format from qcow2 to raw in
Edit->Preferences, then new disk images are set to fully allocated raw.
Check the image details with 'qemu-img info $filename' to confirm. So I
think by default we are doing what you want?
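A quick way to see what 'qemu-img info' reports is to inspect a
throwaway image (a sketch; the path is illustrative, and the qemu-img
line assumes qemu-img is installed):

```shell
# Create a sparse raw file to inspect (stand-in for a wizard-created image):
truncate -s 1G /tmp/demo.raw

# qemu-img reports both "virtual size" and "disk size":
qemu-img info /tmp/demo.raw 2>/dev/null || true

# The same information via coreutils:
du -h --apparent-size /tmp/demo.raw   # virtual (apparent) size: 1.0G
du -h /tmp/demo.raw                   # blocks actually allocated: ~0
```

A fully allocated image shows disk size close to virtual size; a sparse
one shows a much smaller disk size.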

- Cole
Gionatan Danti
2018-06-19 19:37:19 UTC
Post by Cole Robinson
If you change the disk image format from qcow2 to raw in
Edit->Preferences, then new disk images are set to fully allocated raw.
Check the image details with 'qemu-img info $filename' to confirm. So I
think by default we are doing what you want?
- Cole
Er, the point is that I would really like to have a *sparse* RAW image
file. On older virt-manager, unchecking "allocate entire disk" was what
I normally used. Auto-allocating all disk space, without a means to
avoid it, has two main drawbacks:
- you can't have sparse/thin volumes;
- allocating on a fallocate-less filesystem is excruciatingly slow and
causes unneeded wear on SSDs.

Why use a sparse RAW image rather than a Qcow2 image? Basically:
- RAW disks are easier to handle/inspect in case something goes wrong;
- it avoids double CoW on CoW-enabled filesystems (eg: ZFS, btrfs);
- better performance (no Qcow2 L2 table cache limits, etc).

It is worth noting that oVirt (and RHEV) use (sparse or allocated,
based on user selection) base RAW files with optional Qcow2 overlays for
snapshots.

Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: ***@assyoma.it - ***@assyoma.it
GPG public key ID: FF5F32A8
Cole Robinson
2018-06-19 20:16:43 UTC
Post by Gionatan Danti
Post by Cole Robinson
If you change the disk image format from qcow2 to raw in
Edit->Preferences, then new disk images are set to fully allocated raw.
Check the image details with 'qemu-img info $filename' to confirm. So I
think by default we are doing what you want?
- Cole
Er, the point is that I would really like to have a *sparse* RAW image
file. On older virt-manager, unchecking "allocate entire disk" was what
I normally used. Auto-allocating all disk space without a means to
avoid that has two main drawbacks:
- you can't have sparse/thin volumes;
- allocating on a fallocate-less filesystem is excruciatingly slow and
causes unneeded wear on SSDs.
- RAW disks are easier to handle/inspect in case something goes wrong;
- it avoids double CoW on CoW-enabled filesystems (eg: ZFS, btrfs);
- better performance (no Qcow2 L2 table cache limits, etc).
It is worth noting that oVirt (and RHEV) use (sparse or allocated,
based on user selection) base RAW files with optional Qcow2 overlays for
snapshots.
Sorry, I misunderstood. You can still achieve what you want but it's
more clicks: new vm, manage storage, add volume, and select raw volume
with whatever capacity you want but with 0 allocation.
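For scripted setups, the same "full capacity, zero allocation" volume
can be created with virsh (a sketch, assuming a running libvirtd with a
storage pool named 'default'; the volume name and size are
illustrative, and this won't run without a libvirt connection):

```shell
# Create a raw volume with full capacity but zero initial allocation,
# i.e. a sparse raw image, in the pool named 'default':
virsh vol-create-as default vm1-disk.raw 20G \
    --format raw --allocation 0

# Confirm the volume is thin: "Allocation" should be near zero.
virsh vol-info --pool default vm1-disk.raw
```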

qcow2 is the default for virt-manager because it enables features like
snapshots out of the box. The main motivation I have heard for
wanting raw over qcow2 is performance, but using sparse raw
actually makes raw less performant, so it's kind of a weird middle
ground. For that reason I don't think it warrants adding back the
checkbox to the new VM UI, since I think it's a fairly obscure use case
and it can be achieved through the 'manage storage' wizard, albeit with
more clicks.

- Cole
Gionatan Danti
2018-06-19 23:06:32 UTC
Post by Cole Robinson
Sorry, I misunderstood. You can still achieve what you want but it's
more clicks: new vm, manage storage, add volume, and select raw volume
with whatever capacity you want but with 0 allocation.
Sure, but the automatic disk creation is very handy and much less error
prone.
As it is now, if using a fallocate-less filesystem (eg: ZFS) and *not*
selecting to create a custom disk, you risk waiting minutes or hours for
libvirt to fully allocate the image by writing 0s to the disk file. This
can wreak havoc on SSDs and other endurance-limited media.
Post by Cole Robinson
qcow2 is the default for virt-manager because it enables features like
snapshots out of the box. The main motivation I have largely heard for
wanting raw over qcow2 is performance, but then using sparse raw
actually makes raw less performant, so it's kind of a weird middle
ground. For that reason I don't think it warrants adding back the
checkbox to the new VM UI since I think it's a fairly obscure use case,
and it can be achieved through the 'manage storage' wizard albeit with
more clicks
- Cole
On CoW filesystems, sparse RAW files are faster than Qcow2 ones.
Moreover, avoiding double CoW is important for SSDs (which have limited
lifespan). Even on XFS, sparse RAW files should be faster in the long
run than Qcow2 files, due to no weird limitation on L2 table cache size.

I found the checkbox quite self-explanatory and very handy. Any chances
to reconsider your decision?
Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: ***@assyoma.it - ***@assyoma.it
GPG public key ID: FF5F32A8
Gionatan Danti
2018-06-22 17:47:00 UTC
Post by Gionatan Danti
Post by Cole Robinson
Sorry, I misunderstood. You can still achieve what you want but it's
more clicks: new vm, manage storage, add volume, and select raw volume
with whatever capacity you want but with 0 allocation.
Sure, but the automatic disk creation is very handy and much less error
prone.
As it is now, if using a fallocate-less filesystem (eg: ZFS) and *not*
selecting to create a custom disk, you risk waiting minutes or hours
for libvirt to fully allocate the image by writing 0s to the disk
file. This can wreak havoc on SSDs and other endurance-limited
media.
Post by Cole Robinson
qcow2 is the default for virt-manager because it enables features like
snapshots out of the box. The main motivation I have largely heard for
wanting raw over qcow2 is performance, but then using sparse raw
actually makes raw less performant, so it's kind of a weird middle
ground. For that reason I don't think it warrants adding back the
checkbox to the new VM UI since I think it's a fairly obscure use case,
and it can be achieved through the 'manage storage' wizard albeit with
more clicks
- Cole
On CoW filesystems, sparse RAW files are faster than Qcow2 ones.
Moreover, avoiding double CoW is important for SSDs (which have
limited lifespan). Even on XFS, sparse RAW files should be faster in
the long run than Qcow2 files, due to no weird limitation on L2 chunk
cache size.
I found the checkbox quite self-explanatory and very handy. Any
chances to reconsider your decision?
Thanks.
Hi, sorry for the bump... any feedback about that?
Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: ***@assyoma.it - ***@assyoma.it
GPG public key ID: FF5F32A8
Cole Robinson
2018-06-26 21:49:09 UTC
Post by Gionatan Danti
Post by Cole Robinson
Sorry, I misunderstood. You can still achieve what you want but it's
more clicks: new vm, manage storage, add volume, and select raw volume
with whatever capacity you want but with 0 allocation.
Sure, but the automatic disk creation is very handy and much less error
prone.
As it is now, if using a fallocate-less filesystem (eg: ZFS) and *not*
selecting to create a custom disk, you risk waiting minutes or hours for
libvirt to fully allocate the image by writing 0s to the disk file. This
can wreak havoc on SSDs and other endurance-limited media.
Post by Cole Robinson
qcow2 is the default for virt-manager because it enables features like
snapshots out of the box. The main motivation I have largely heard for
wanting raw over qcow2 is performance, but then using sparse raw
actually makes raw less performant, so it's kind of a weird middle
ground. For that reason I don't think it warrants adding back the
checkbox to the new VM UI since I think it's a fairly obscure use case,
and it can be achieved through the 'manage storage' wizard albeit with
more clicks
- Cole
On CoW filesystems, sparse RAW files are faster than Qcow2 ones.
Moreover, avoiding double CoW is important for SSDs (which have limited
lifespan). Even on XFS, sparse RAW files should be faster in the long
run than Qcow2 files, due to no weird limitation on L2 chunk cache size.
I found the checkbox quite self-explanatory and very handy. Any chances
to reconsider your decision?
I see it as another test case and larger UI surface in the common path
for something that will save clicks for a corner case. I still don't see
it as worth exposing in the UI.

- Cole
Gionatan Danti
2018-06-28 10:35:56 UTC
Post by Cole Robinson
I see it as another test case and larger UI surface in the common path
for something that will save clicks for a corner case. I still don't see
it as worth exposing in the UI.
- Cole
I cannot force this decision, obviously. However, let me recap why I
find it important to have the "allocate entire disk" checkbox:

- RAW files, even sparse ones, are faster than Qcow2 files in the long
run (ie: when block allocation is >8 GB);
- Qcow2 snapshots have significant gotchas (ie: the guest is suspended
during the snapshot), while using RAW files at least prevents using
virt-manager's snapshot feature without thinking;
- on CoW filesystems, using Qcow2 files means *double* CoW, with a)
reduced performance and b) more wear on SSDs;
- on filesystems not supporting fallocate, libvirtd reverts to "write
zeroes to the entire file", which is both a) very slow and b) detrimental
to SSD life;
- most other virtualization platforms (old virt-manager and current oVirt
included) split the choice of file format from the allocation policy.

I 100% agree that, using the custom disk creation dialog, what I ask is
entirely possible with virt-manager today. However, it would be *very*
handy to have the checkbox back in the VM wizard itself.

Would opening a BZ ticket at least reopen the possibility to reconsider
that decision?
Thanks anyway.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: ***@assyoma.it - ***@assyoma.it
GPG public key ID: FF5F32A8
Daniel P. Berrangé
2018-06-28 10:44:24 UTC
Post by Cole Robinson
I see it as another test case and larger UI surface in the common path
for something that will save clicks for a corner case. I still don't see
it asworth exposing in the UI.
- Cole
I can not force this decision, obviously. However, let me recap why I found
- RAW files, even sparse ones, are faster than Qcow2 files in the long run
(ie: when block allocation is >8 GB);
There is always a performance distinction between raw and qcow2, but it is
much less these days with qcow2v3 than it was with the original qcow2
design.
- Qcow2 snapshots have significant gotchas (ie: the guest is suspended
during the snapshot), while using RAW files will at least prevent using
virt-manager snapshot feature without thinking;
This is really tangential. virt-manager chose to use internal snapshots
because they were easy to support, but it could equally use external
snapshots. This shouldn't have a bearing on other choices - if the
internal snapshotting is unacceptable due to the guest pause, this
needs addressing regardless of allocation.
- on CoW filesystems, using Qcow2 files means *double* CoW, with a)
reduced performance and b) more wear on SSDs;
Using qcow2 doesn't require you to use cow at the disk image layer - it
simply gives you the ability, should you want to. So you don't get double
cow by default.
- on filesystems not supporting fallocate, libvirtd reverts to "write
zeroes to the entire file", which is both a) very slow and b) detrimental
to SSD life;
Which widely used modern filesystems still don't support fallocate? It is
essentially a standard feature of any modern production-quality filesystem
these days.


Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
Gionatan Danti
2018-06-28 12:24:44 UTC
Post by Daniel P. Berrangé
There is always a performance distinction between raw and qcow2, but it is
much less these days with qcow2v3 than it was with the original qcow2
design.
Sure, but especially with random reads/writes over a large LBA range the
difference is noticeable [1]. Moreover, if something goes wrong, a RAW
file can be inspected with standard block device tools. As a reference
point, both oVirt and RHEV use RAW files for base disk images.

It's not only performance related; it also concerns thin provisioning.
Why should the wizard automatically select fat provisioning based on the
image format? What if I want thin provisioning using the filesystem's
sparse file support via RAW files?
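The thin-provisioning behaviour being asked for is exactly what a
sparse RAW file provides: allocation grows only as the guest writes. A
small coreutils sketch (file name and sizes are illustrative):

```shell
# A 1 GiB thin raw image: full apparent size, no blocks allocated yet.
truncate -s 1G thin.raw
stat -c 'allocated: %b blocks' thin.raw

# Simulate the guest writing 10 MiB at the start of the disk;
# conv=notrunc preserves the 1 GiB apparent size.
dd if=/dev/urandom of=thin.raw bs=1M count=10 conv=notrunc status=none
sync
stat -c 'allocated: %b blocks' thin.raw   # grew by roughly 10 MiB worth
```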
Post by Daniel P. Berrangé
This is really tangential. virt-manager chose to use internal snapshots
because they were easy to support, but it could equally use external
snapshots. This shouldn't have a bearing on other choices - if the
internal snapshotting is unacceptable due to the guest pause, this
needs addressing regardless of allocation.
I agree, but currently the wizard forces you to choose between:
a) a sparse Qcow2 file, with (sometimes dangerous?) internal snapshot
support;
b) fully allocated RAW files, with *no* external snapshot support.
As you can see, it is virt-manager itself that entangles the choices
regarding file format/allocation/snapshot support.
And external snapshot support in virt-manager would be *super* cool ;)
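For reference, the oVirt-style layout mentioned earlier keeps the base
image raw and only introduces Qcow2 as an external snapshot overlay. A
sketch assuming qemu-img is installed (file names are illustrative):

```shell
# Sparse raw base image: no qcow2 in the I/O path until a snapshot exists.
truncate -s 1G base.raw

# External snapshot: new writes go to the overlay; base.raw becomes
# read-only backing storage.
qemu-img create -f qcow2 -b base.raw -F raw snap1.qcow2

# With libvirt, the equivalent overlay can be created on a live domain:
#   virsh snapshot-create-as <domain> snap1 --disk-only --atomic
```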
Post by Daniel P. Berrangé
Using qcow2 doesn't require you to use cow at the disk image layer - it
simply gives you the ability, should you want to. So you don't get double
cow by default
I expressed the idea badly, sorry. Writing to a *snapshotted* Qcow2 file
causes double CoW; on the other hand, writing to an un-snapshotted Qcow2
file only causes double block allocation.
Post by Daniel P. Berrangé
Which widely used modern filesystems still don't support fallocate. It is
essentially a standard feature on any modern production quality filesystem
these days.
True, with one exception: ZFS. And it is a *big* exception. Moreover, why
allocate all data by default when using RAW files? What about thin
images?

What really strikes me is that the checkbox *was* there in previous
virt-manager releases. Did it cause confusion or some other problem?

Thanks.

[1] https://www.linux-kvm.org/images/9/92/Qcow2-why-not.pdf
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: ***@assyoma.it - ***@assyoma.it
GPG public key ID: FF5F32A8