Discussion:
[linux-lvm] LVM and I/O block size (max_sectors_kb)
Gionatan Danti
2014-07-02 15:00:50 UTC
Hi all,
it seems that, when using LVM, the I/O block transfer size has a hard
limit of about 512 KB per I/O. Large I/O transfers can be crucial for
performance, so I am trying to understand whether I can change that.

Some info: uname -a (CentOS 6.5 x86_64)
Linux blackhole.assyoma.it 2.6.32-431.20.3.el6.x86_64 #1 SMP Thu Jun 19
21:14:45 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
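For reference, the per-device request-size limit lives in sysfs. A minimal
sketch of how it can be inspected and raised (the device names sda and dm-0
are examples, not taken from this system):

```shell
# Inspect the current per-request limit for each device of interest.
# Device names below are examples; adjust to your system.
for dev in sda dm-0; do
    f=/sys/block/$dev/queue/max_sectors_kb
    [ -r "$f" ] && echo "$dev: $(cat "$f") KB max per request"
done

# Raising the limit (as root), up to the hardware maximum in
# /sys/block/<dev>/queue/max_hw_sectors_kb:
# echo 2048 > /sys/block/sda/queue/max_sectors_kb

# A 512 KB per-I/O cap corresponds to 1024 sectors of 512 bytes each:
echo $((512 * 1024 / 512))
```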
Gionatan Danti
2014-07-31 07:30:18 UTC
Post by Gionatan Danti
[...]
Gionatan Danti
2014-09-11 13:49:02 UTC
Post by Gionatan Danti
[...]
Marian Csontos
2014-09-12 11:54:51 UTC
Post by Gionatan Danti
[...]
Gionatan Danti
2014-09-12 13:46:34 UTC
Hi Marian,
1. As there are more disks in the VG, can they all work with a larger
max_sectors_kb?
All four disks, and the MD device on top of them, have max_sectors_kb
set to the required (large) value of 2048.
They are all recent (circa 2010) SATA disks, albeit from different
vendors.
2. Does it work for md device without stacking LVM on top of that?
I did not try using the MD device directly. Let me test this and I will
report here.
3. What's your physical extent size?
vgs -o vg_extent_size vg_kvm # or simply vgs -v
PE size is at the default (4 MiB).
4. If all above is fine then it may be related to LVM.
I think so, because I did some tests with a _single_ disk (an SSD,
actually) with and without LVM on top. Without LVM, I see the normal,
expected behavior: the I/O transfer size increases with max_sectors_kb.
With LVM on top, I see exactly the behavior I am reporting: the maximum
I/O transfer size appears capped at 512 KB, regardless of the
max_sectors_kb setting.

It really seems to be something in the LVM layer.
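The single-disk comparison could be reproduced roughly as below. The LV path
is hypothetical (vg_kvm is the VG named in this thread; lv_test is made up),
and on EL6's sysstat the relevant iostat column is avgrq-sz, reported in
512-byte sectors:

```shell
LV=/dev/vg_kvm/lv_test   # hypothetical LV path; adjust to your system
if [ -b "$LV" ]; then
    # Submit large direct reads, then watch the average request size
    # actually hitting the device.
    dd if="$LV" of=/dev/null bs=2M iflag=direct count=256 &
    iostat -x 1 3        # avgrq-sz column, in 512-byte sectors
    wait
fi
# With a 512 KB per-I/O cap, avgrq-sz tops out around 1024 sectors
# even though dd submits 2 MB reads:
echo $((512 * 1024 / 512))
```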
-- Marian
Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: ***@assyoma.it - ***@assyoma.it
GPG public key ID: FF5F32A8
matthew patton
2014-09-12 15:48:36 UTC
My understanding was that this is an OLD problem (as designed?).
Conceivably, LVM could be written to be smart enough to query all of the
underlying PVs and arrive at the lowest common value <= the physical
extent size. But it was probably more expedient to just hard-code it to
the native BIO value.
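The "lowest common value" idea could be sketched as below. The per-PV limits
are made-up example values; a real implementation would read
/sys/block/&lt;pv&gt;/queue/max_sectors_kb for every PV in the VG:

```shell
# Hypothetical per-PV limits in KB (example values only).
pe_size_kb=4096          # 4 MiB physical extent, as in this thread
limits="2048 1024 512"

# Start from the extent size, then take the minimum over all PVs.
min=$pe_size_kb
for v in $limits; do
    [ "$v" -lt "$min" ] && min=$v
done
echo "$min"              # the stacked device would get 512 here
```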