Discussion:
[linux-lvm] Does LVM RAID1 have TRIM support?
Jarkko Oranen
2014-08-24 16:08:17 UTC
Permalink
Hello

Yesterday I experimented a bit with my RAID configuration on a pair of
SSDs, and it seems that LVM's native RAID does not have TRIM support...
At least, when I try to run fstrim manually, it complains even though
issue_discards is enabled. Plain LVs on top of an MD RAID PV do work, of
course.

Am I perhaps missing some configuration, or do RAID1 logical volumes
simply not have support for TRIM yet? I'm running a fairly recent kernel
(3.15.8) and lvm version says this:

LVM version: 2.02.106(2) (2014-04-10)
Library version: 1.02.85 (2014-04-10)
Driver version: 4.27.0

As an aside, can anyone point me to documentation or other resources
about the pros and cons of LVM native RAID1 setup (which I understand
uses MD RAID internally?) vs. MD RAID PV + LVM. It seems I might be able
to save some SSD space and only mirror the LVs I actually need to keep
safe from crashes.

--
Jarkko Oranen
Brassow Jonathan
2014-09-02 21:49:55 UTC
Permalink
Post by Jarkko Oranen
Hello
Yesterday I experimented a bit with my RAID configuration on a pair of
SSDs, and it seems that LVM's native RAID does not have TRIM support...
At least, when I try to run fstrim manually, it complains even though
issue_discards is enabled. Plain LVs on top of an MD RAID PV do work, of
course.
Am I perhaps missing some configuration, or do RAID1 logical volumes
simply not have support for TRIM yet? I'm running a fairly recent kernel
LVM version: 2.02.106(2) (2014-04-10)
Library version: 1.02.85 (2014-04-10)
Driver version: 4.27.0
TRIM is not yet supported in LVM RAID. However, if MD has a solid TRIM implementation, it should be simple to enable it for LVM. (This is because the MD kernel modules are used to perform RAID for LVM; there is only a thin wrapper layer (linux/drivers/md/dm-raid.c) in device-mapper used to set up the device.)
Post by Jarkko Oranen
As an aside, can anyone point me to documentation or other resources
about the pros and cons of LVM native RAID1 setup (which I understand
uses MD RAID internally?) vs. MD RAID PV + LVM. It seems I might be able
to save some SSD space and only mirror the LVs I actually need to keep
safe from crashes.
I don't know of a specific list to point to out there, but I can give you a couple of pros and cons.
PROS:
- use one volume manager instead of two
- LVM is better suited to creating devices of varying sizes, leaving spare capacity for snapshots, etc.

CONS:
- no TRIM support with RAID through LVM (although I'm not sure of the state in MD)
- no reshaping (changing from one RAID type to another) capability in LVM RAID

brassow
Jarkko Oranen
2014-09-04 11:06:40 UTC
Permalink
Brassow Jonathan
2014-09-08 17:47:57 UTC
Permalink
Post by Jarkko Oranen
Post by Brassow Jonathan
TRIM is not yet supported in LVM RAID. However, if MD has a solid TRIM implementation, it should be simple to enable it for LVM. (This is because the MD kernel modules are used to perform RAID for LVM. There is only a thin wrapper layer (linux/drivers/md/dm-raid.c) in device-mapper used to set-up the device.)
Thanks for the reply! Unfortunately, "simple" is relative.
By "simple", I mean that from our side all we should need to do is set 'num_discard_bios = 1' in dm-raid.c. That is, if things are working fine in MD. We could even make that conditional depending on the type of RAID (e.g. discards for RAID1, but not RAID5). It could end up being that simple, but I need to know if MD has proper support for discards and all the kinks have been worked out.

brassow
Jarkko Oranen
2014-09-08 18:47:28 UTC
Permalink
Post by Brassow Jonathan
Post by Jarkko Oranen
Post by Brassow Jonathan
TRIM is not yet supported in LVM RAID. However, if MD has a solid TRIM implementation, it should be simple to enable it for LVM. (This is because the MD kernel modules are used to perform RAID for LVM. There is only a thin wrapper layer (linux/drivers/md/dm-raid.c) in device-mapper used to set-up the device.)
Thanks for the reply! Unfortunately, "simple" is relative.
By "simple", I mean that from our side all we should need to do is set 'num_discard_bios = 1' in dm-raid.c. That is, if things are working fine in MD. We could even make that conditional depending on the type of RAID (e.g. discards for RAID1, but not RAID5). It could end up being that simple, but I need to know if MD has proper support for discards and all the kinks have been worked out.
dm-raid.c already has

ti->num_discard_bios = 1;

in raid_ctr in the kernel source I'm looking at (3.15.6; not the most
recent, but it's what I happen to have locally available). As I said, to
me it looks like it should already work, but it doesn't, so I spent a
bit of time trying to understand why.

Digging at it again: e.g., lsblk -D reports discard information
identical to that of other discard-supporting block devices, suggesting
that discard passthrough should work fine, but the actual ioctl fails
(when testing via either fstrim or blkdiscard) with "not supported" for
whatever reason.

As far as I know, discard support in MD is stable, and has been so for a
long while; my current setup consists of linear logical volumes on top
of a software RAID PV created with mdadm, and I haven't had a single
issue with discards. It seems RAID1 discard support was merged in
January 2011 (git commit 5fc2ffeabb9ee0fc0e71ff16b49f34f0ed3d05b4), so
it's definitely not new.

--
Jarkko
Brassow Jonathan
2014-09-11 23:31:06 UTC
Permalink
Post by Jarkko Oranen
Post by Brassow Jonathan
Post by Jarkko Oranen
Post by Brassow Jonathan
TRIM is not yet supported in LVM RAID. However, if MD has a solid TRIM implementation, it should be simple to enable it for LVM. (This is because the MD kernel modules are used to perform RAID for LVM. There is only a thin wrapper layer (linux/drivers/md/dm-raid.c) in device-mapper used to set-up the device.)
Thanks for the reply! Unfortunately, "simple" is relative.
By "simple", I mean that from our side all we should need to do is set 'num_discard_bios = 1' in dm-raid.c. That is, if things are working fine in MD. We could even make that conditional depending on the type of RAID (e.g. discards for RAID1, but not RAID5). It could end up being that simple, but I need to know if MD has proper support for discards and all the kinks have been worked out.
dm-raid.c already has
ti->num_discard_bios = 1;
in raid_ctr in the kernel source I'm looking at (3.15.6; not the most
recent, but it's what I happen to have locally available). As I said, to
me it looks like it should already work, but it doesn't, so I spent a
bit of time trying to understand why.
Digging at it again, eg. lsblk -D seems to report discard information
identical to other discard-supporting block devices, suggesting that
discard passthrough should work fine, but the actual ioctl fails (when
testing via either fstrim or blkdiscard) with "not supported" for
whatever reason.
As far as I know, discard support in MD is stable, and has been so for a
long while; my current setup consists of linear logical volumes on top
of a software RAID PV created with mdadm, and I haven't had a single
issue with discards. It seems RAID1 discard support was merged in
January 2011 (git commit 5fc2ffeabb9ee0fc0e71ff16b49f34f0ed3d05b4), so
it's definitely not new.
dm-raid.c does not have that - dm-raid1.c does. dm-raid1.c is a mirroring implementation specific to device-mapper. It is used by LVM for pvmove functionality (and is otherwise being slowly phased out). dm-raid.c is the device-mapper wrapper around the MD RAID personalities. It does not have discard support.

I will ask Neil what the current state of discard support is in MD. I see that "support" was added in Oct of 2012 (starting at commit 2ff8cc2), but I remember them having problems with discards + any resync operations. If Neil says it is ok, I will turn it on.

brassow
Jarkko Oranen
2014-09-12 07:24:41 UTC
Permalink
Post by Brassow Jonathan
dm-raid.c does not have that - dm-raid1.c does. dm-raid1.c is a mirroring implementation specific to device-mapper. It is used by LVM for pvmove functionality (and is otherwise being slowly phased out). dm-raid.c is the device-mapper wrapper around the MD RAID personalities. It does not have discard support.
I will ask Neil what the current state of discard support is in MD. I see that "support" was added in Oct of 2012 (starting at commit 2ff8cc2), but I remember them having problems with discards + any resync operations. If Neil says it is ok, I will turn it on.
In the code that I'm looking at, dm-raid.c definitely sets
num_discard_bios to 1 on something, as does dm-raid1.c.

Thanks for looking into it!
Post by Brassow Jonathan
brassow
_______________________________________________
linux-lvm mailing list
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
Brassow Jonathan
2014-09-24 04:17:42 UTC
Permalink
Post by Jarkko Oranen
Post by Brassow Jonathan
dm-raid.c does not have that - dm-raid1.c does. dm-raid1.c is a mirroring implementation specific to device-mapper. It is used by LVM for pvmove functionality (and is otherwise being slowly phased out). dm-raid.c is the device-mapper wrapper around the MD RAID personalities. It does not have discard support.
I will ask Neil what the current state of discard support is in MD. I see that "support" was added in Oct of 2012 (starting at commit 2ff8cc2), but I remember them having problems with discards + any resync operations. If Neil says it is ok, I will turn it on.
In the code that I'm looking at, dm-raid.c definitely sets
num_discard_bios to 1 on something, as does dm-raid1.c.
Thanks for looking into it!
Heinz Mauelshagen is looking to get this included upstream now. Thanks for the poke! :)

Initial status query:
http://marc.info/?l=linux-raid&m=141047869821032&w=2

Patch posted for consideration:
https://www.redhat.com/archives/dm-devel/2014-September/msg00126.html

brassow
