Discussion:
[linux-lvm] Why is my LVM Mirror performance so bad?
Errol Neal
2014-09-27 17:42:34 UTC
Hi there. Hoping to get some clarity on some performance woes in a little testing environment that I've set up.

I am experimenting with LVM mirrors on a CentOS 6.5 cluster.

I have two VMs running on ESX 5.5 that are sharing two RDM LUNs from an SRP target server running SCST.

Performance, however, seems to take a huge hit compared to a vanilla (non-mirrored) volume:


[***@scst1 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
hadata 2 5 0 wz--nc 999.99g 149.99g
vg_scst1 1 2 0 wz--n- 39.51g 0

[***@scst1 ~]# lvs -a -o +devices
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert Devices
mirror1 hadata mwi-a-m--- 100.00g 100.00 mirror1_mimage_0(0),mirror1_mimage_1(0)
[mirror1_mimage_0] hadata iwi-aom--- 100.00g /dev/sdb1(0)
[mirror1_mimage_1] hadata iwi-aom--- 100.00g /dev/sdc1(0)
mirror2 hadata mwi-a-m--- 100.00g 100.00 mirror2_mimage_0(0),mirror2_mimage_1(0)
[mirror2_mimage_0] hadata iwi-aom--- 100.00g /dev/sdb1(25600)
[mirror2_mimage_1] hadata iwi-aom--- 100.00g /dev/sdc1(25600)
mirror3 hadata mwi-a-m--- 100.00g 100.00 mirror3_mimage_0(0),mirror3_mimage_1(0)
[mirror3_mimage_0] hadata iwi-aom--- 100.00g /dev/sdb1(51200)
[mirror3_mimage_1] hadata iwi-aom--- 100.00g /dev/sdc1(51200)
mirror4 hadata mwi-a-m--- 100.00g 100.00 mirror4_mimage_0(0),mirror4_mimage_1(0)
[mirror4_mimage_0] hadata iwi-aom--- 100.00g /dev/sdb1(76800)
[mirror4_mimage_1] hadata iwi-aom--- 100.00g /dev/sdc1(76800)
test hadata -wi-a----- 50.00g /dev/sdb1(102400)
lv_root vg_scst1 -wi-ao---- 31.65g /dev/sda2(0)
lv_swap vg_scst1 -wi-ao---- 7.86g /dev/sda2(8102)

[***@scst1 ~]# dd if=/dev/sdb1 of=/dev/null bs=1M count=20000
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 41.0031 s, 511 MB/s

[***@scst1 ~]# dd if=/dev/hadata/test of=/dev/null bs=1M count=20000
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 44.344 s, 473 MB/s

[***@scst1 ~]# dd if=/dev/hadata/mirror1 of=/dev/null bs=1M count=20000
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 365.685 s, 57.3 MB/s
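
For comparison, I could also repeat the reads with a cold cache and direct I/O to rule out page-cache effects - something like this (a sketch, using the same devices as above):

echo 3 > /proc/sys/vm/drop_caches                                    # drop cached data so the reads hit the devices
dd if=/dev/hadata/test of=/dev/null bs=1M count=20000 iflag=direct
dd if=/dev/hadata/mirror1 of=/dev/null bs=1M count=20000 iflag=direct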


Is this just the life of an LVM mirror, and is the performance here about as good as it gets?
Micky
2014-09-27 20:35:24 UTC
Because it is LVM! :)
Post by Errol Neal
[...]
Is this just the life of an LVM mirror, and is the performance here about as good as it gets?
Zdenek Kabelac
2014-09-29 08:59:11 UTC
Post by Micky
Because it is LVM! :)
And that's what I call a 'very helpful answer'.
Post by Micky
Hi there. Hoping to get some clarity on some performance woes in a little testing environment that I've set up.
I am experimenting with LVM mirrors on a CentOS 6.5 cluster.
Are you using 'exclusive' single-node activation, or is your mirror active on multiple nodes with cmirrord?
Post by Micky
I have two VMs running on ESX 5.5 that are sharing two RDM LUNs from an SRP target server running SCST.
Performance, however, seems to take a huge hit compared to a vanilla (non-mirrored) volume:
VG #PV #LV #SN Attr VSize VFree
hadata 2 5 0 wz--nc 999.99g 149.99g
vg_scst1 1 2 0 wz--n- 39.51g 0
lvs -a -o+lv_active


If you don't need cluster-wide mirror activation, use exclusive local activation:

lvchange -aey vg/mirror
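
For example, switching an already-active mirror over could look like this (a sketch, using the VG/LV names from the original post):

lvchange -an hadata/mirror1     # deactivate the mirror
lvchange -aey hadata/mirror1    # re-activate it exclusively on this node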


Note - the newer mdraid-based mirror (the raid1 segment type) is supported only in 'exclusive' activation mode.

Zdenek
Linda A. Walsh
2014-09-29 18:15:25 UTC
Post by Errol Neal
Is this just the life of an LVM mirror, and is the performance here about as good as it gets?
----
It's a bit confusing from what you have above, but *usually*, if you want performance, you pair one device (disk) with a duplicate of the same type. That way, when the controller writes to one, it can write to the other at the same location and expect *similar* speeds (speeds will vary more on desktop drives than on enterprise drives, so some overall RAID performance is lost).
Brassow Jonathan
2014-09-30 03:27:56 UTC
Post by Errol Neal
Hi there. Hoping to get some clarity on some performance woes in a little testing environment that I've set up.
[...]
Is this just the life of an LVM mirror, and is the performance here about as good as it gets?
Is the performance also degraded in this way when using a single machine (i.e., not a cluster)?
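
One way to check could be to temporarily drop the clustered flag and repeat the read test (a sketch - it assumes the VG is not in use on the other node while you test):

vgchange -an hadata     # deactivate the VG
vgchange -cn hadata     # clear the clustered attribute for the test
vgchange -ay hadata     # activate it locally
dd if=/dev/hadata/mirror1 of=/dev/null bs=1M count=20000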

brassow
