Discussion:
[linux-lvm] Thin metadata volume bigger than 16GB ?
Gionatan Danti
2018-06-22 18:10:34 UTC
Hi list,
I wonder if a method exists to have a >16 GB thin metadata volume.

When using a 64 KB chunksize, a maximum of ~16 TB can be addressed in a
single thin pool. The obvious solution is to increase the chunk size, as
128 KB chunks are good for over 30 TB, and so on. However, increasing
the chunk size is detrimental to efficiency/performance when snapshots
are used heavily.
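
For anyone who wants to check these numbers, thin_metadata_size from
thin-provisioning-tools estimates the metadata footprint for a given
chunk size and pool size (a rough sketch; exact flags may differ
between versions):

  # metadata needed for a 16 TB pool, 64 KB chunks, up to 1000 thin
  # LVs/snapshots, reported in GB
  $ thin_metadata_size --block-size=64k --pool-size=16t --max-thins=1000 --unit=g
  # same pool size with 128 KB chunks, for comparison
  $ thin_metadata_size --block-size=128k --pool-size=16t --max-thins=1000 --unit=g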

Another simple and very effective solution is to use multiple thin
pools, e.g. two 16 GB metadata volumes with a 64 KB chunk size are
again good for over 30 TB of thin pool space.
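
A minimal sketch of that layout, assuming a VG named vg0 (all names and
sizes here are only illustrative):

  # two independent thin pools, each with its own 16 GB metadata LV
  $ lvcreate --type thin-pool -L 16T --poolmetadatasize 16G \
        --chunksize 64k -n pool0 vg0
  $ lvcreate --type thin-pool -L 16T --poolmetadatasize 16G \
        --chunksize 64k -n pool1 vg0

The caveat is that every thin LV lives entirely in one pool, so the two
pools do not form a single allocation domain.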

That said, the naive but straightforward approach would be to increase
the maximum thin pool size. So I have some questions:

- is the 16 GB limit a hard one?
- are there practical considerations behind it (e.g. slow thin_check
for very big metadata volumes)?
- if so, why can I not find any similar limit (in the docs) for cache
metadata volumes?
- what is the right thing to do when a 16 GB metadata volume fills up,
if it cannot be expanded?

Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: ***@assyoma.it - ***@assyoma.it
GPG public key ID: FF5F32A8
Zdenek Kabelac
2018-06-22 20:13:34 UTC
Post by Gionatan Danti
Hi list,
I wonder if a method exists to have a >16 GB thin metadata volume.
When using a 64 KB chunksize, a maximum of ~16 TB can be addressed in a single
thin pool. The obvious solution is to increase the chunk size, as 128 KB
chunks are good for over 30 TB, and so on. However, increasing the chunk size
is detrimental to efficiency/performance when snapshots are used heavily.
Two 16 GB metadata volumes with a 64 KB chunk size are again good for over
30 TB of thin pool space.
That said, the naive but straightforward approach would be to increase the
maximum thin pool size. So I have some questions:
- is the 16 GB limit a hard one?
Addressing is internally limited to a lower number of bits.
Post by Gionatan Danti
- are there practical considerations behind it (e.g. slow thin_check for
very big metadata volumes)?
Usage of memory resources, efficiency.
Post by Gionatan Danti
- if so, why can I not find any similar limit (in the docs) for cache
metadata volumes?
ATM we do not recommend using a cache with more than 1,000,000 chunks,
for efficiency reasons, although on bigger machines a larger number of
chunks is still quite usable, especially now with cache metadata format 2.
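
As an illustration of staying under that chunk count (hypothetical
names and devices, and --cachemetadataformat needs a reasonably recent
lvm2):

  # 100 GB cache with 128 KB chunks -> ~800,000 chunks, below the
  # suggested 1,000,000
  $ lvcreate --type cache-pool -L 100G --chunksize 128k -n cpool vg0 /dev/fast_pv
  # attach it to the origin LV using metadata format 2
  $ lvconvert --type cache --cachepool vg0/cpool \
        --cachemetadataformat 2 vg0/origin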
Post by Gionatan Danti
- what is the right thing to do when a 16 GB metadata volume fills up, if it
cannot be expanded?
ATM, drop data you don't need (fstrim the filesystem).
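
In practice that looks something like this (mount point and names are
hypothetical); metadata_percent in lvs shows whether it helped, and
below the 16 GB cap the metadata LV itself can still be grown with
lvextend --poolmetadatasize:

  # discard unused filesystem blocks so the pool can drop their mappings
  $ fstrim -v /mnt/thinfs
  # watch data and metadata usage on the pool
  $ lvs -o lv_name,data_percent,metadata_percent vg0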

So far there have not been many requests to support a bigger size, although
there are plans to improve the thin-pool metadata format for the next version.

Regards

Zdenek
Gionatan Danti
2018-06-23 10:14:48 UTC
Post by Zdenek Kabelac
Addressing is internally limited to a lower number of bits.
Usage of memory resources, efficiency.
ATM we do not recommend using a cache with more than 1,000,000 chunks,
for efficiency reasons, although on bigger machines a larger number of
chunks is still quite usable, especially now with cache metadata
format 2.
Does it mean that with a 64 KB cache chunk size I can efficiently cache
only up to 64 KB * 1,000,000 = ~60 GB of volume?
So for, say, a 64 TB volume, do I need to use 64 MB cache chunks?
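
Working through the arithmetic with binary units, just to sanity-check
the question (plain shell integer math):

  # 1,000,000 chunks of 64 KiB -> cacheable size in GiB
  $ echo $(( 1000000 * 64 / 1024 / 1024 ))
  61
  # chunk size in MiB needed to cover 64 TiB with 1,000,000 chunks
  $ echo $(( 64 * 1024 * 1024 / 1000000 ))
  67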
Post by Zdenek Kabelac
ATM, drop data you don't need (fstrim the filesystem).
So far there have not been many requests to support a bigger size,
although there are plans to improve the thin-pool metadata format for
the next version.
Regards
Zdenek
Extremely informative answer, thanks a lot.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: ***@assyoma.it - ***@assyoma.it
GPG public key ID: FF5F32A8