Discussion:
[linux-lvm] Unable to un-cache logical volume when chunk size is over 1MiB
Ryan Launchbury
2018-06-20 09:18:56 UTC
Permalink
Hello,

I'm having a problem uncaching logical volumes when the cache data
chunk size is over 1MiB.
The process I'm using to uncache is: lvconvert --uncache vg/lv


The issue occurs across multiple systems with different hardware and
different versions of LVM.

Steps to reproduce:

1. Create origin VG & LV
2. Add cache device over 1TB to the origin VG
3. Create the cache data lv:
lvcreate -n cachedata -L 1770GB cached_vg /dev/nvme0n1
4. Create the cache metadata lv:
lvcreate -n cachemeta -L 1770MB cached_vg /dev/nvme0n1
5. Convert to a cache pool:
lvconvert --type cache-pool --cachemode writethrough --poolmetadata
cached_vg/cachemeta cached_vg/cachedata
6. Enable caching on the origin LV:
lvconvert --type cache --cachepool cached_vg/cachedata
cached_vg/filestore01
7. Write some data to the main LV so that the cache device is used:
dd if=/dev/zero of=/mnt/filestore01/test.dat bs=1M count=10000
8. Check the cache stats:
lvs -a -o +cache_total_blocks,cache_used_blocks,cache_dirty_blocks
9. Repeating step 8 over time will show that the dirty blocks are not
being written back at all
10. Try to uncache the device:
lvconvert --uncache cached_vg/filestore01
11. You will get a repeating message. It loops indefinitely; the count never
decreases and the operation never completes:
Flushing x blocks for cache cached_vg/filestore01.

After testing multiple times, the issue seems to be tied to the chunk
size selected in step 5. The LVM man page says the chunk size must be a
multiple of 32KiB, but the next chunk size automatically assigned above
1MiB is usually 1.03MiB. With a chunk size of 1.03MiB or higher, the
cache cannot flush; with a chunk size of 1MiB or less, it flushes
normally.
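
For reference, the workaround now looks something like the sketch below. The
LV names are the ones from the steps above; I'm assuming this LVM version
accepts --chunksize on lvconvert and reports a chunk_size field in lvs:

# check which chunk size a pool ended up with
lvs -a -o +chunk_size cached_vg
# create the pool with an explicit 1MiB chunk size rather than the auto value
lvconvert --type cache-pool --cachemode writethrough --chunksize 1M \
  --poolmetadata cached_vg/cachemeta cached_vg/cachedata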

Now that I know how to avoid the issue, I just need a way to safely
un-cache the systems which do have a cache that will not flush.

Details:

Version info from lvm version:

LVM version: 2.02.171(2)-RHEL7 (2017-05-03)
Library version: 1.02.140-RHEL7 (2017-05-03)
Driver version: 4.35.0
Configuration: ./configure --build=x86_64-redhat-linux-gnu
--host=x86_64-redhat-linux-gnu --program-prefix=
--disable-dependency-tracking --prefix=/usr --exec-prefix=/usr
--bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc
--datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64
--libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib
--mandir=/usr/share/man --infodir=/usr/share/info
--with-default-dm-run-dir=/run --with-default-run-dir=/run/lvm
--with-default-pid-dir=/run --with-default-locking-dir=/run/lock/lvm
--with-usrlibdir=/usr/lib64 --enable-lvm1_fallback --enable-fsadm
--with-pool=internal --enable-write_install --with-user= --with-group=
--with-device-uid=0 --with-device-gid=6 --with-device-mode=0660
--enable-pkgconfig --enable-applib --enable-cmdlib --enable-dmeventd
--enable-blkid_wiping --enable-python2-bindings --with-cluster=internal
--with-clvmd=corosync --enable-cmirrord
--with-udevdir=/usr/lib/udev/rules.d --enable-udev_sync
--with-thin=internal --enable-lvmetad --with-cache=internal
--enable-lvmpolld --enable-lvmlockd-dlm --enable-lvmlockd-sanlock
--enable-dmfilemapd

System info:
System 1,2,3:
- Dell R730XD server
- 12x disks in RAID 6 to onboard PERC/MegaRAID controller

System 4:
- Dell R630 server
- 60x disks (6 LUNs) in RAID 6 to PCI MegaRAID controller

The systems are currently in production, so it's quite hard for me to
change the configuration to enable logging.

Any assistance would be much appreciated! If any more info is needed
please let me know.
Best regards,
Ryan
Zdenek Kabelac
2018-06-20 10:15:55 UTC
Permalink
Post by Ryan Launchbury
Hello,
I'm having a problem uncaching logical volumes when the cache data chunk size
is over 1MiB.
The process I'm using to uncache is: lvconvert --uncache vg/lv
The issue occurs across multiple systems with different hardware and different
versions of LVM.
1. Create origin VG & LV
2. Add cache device over 1TB to the origin VG
lvcreate -n cachedata -L 1770GB cached_vg /dev/nvme0n1
lvcreate -n cachemeta -L 1770MB cached_vg /dev/nvme0n1
lvconvert --type cache-pool --cachemode writethrough --poolmetadata
cached_vg/cachemeta cached_vg/cachedata
lvconvert --type cache --cachepool cached_vg/cachedata cached_vg/filestore01
dd if=/dev/zero of=/mnt/filestore01/test.dat bs=1M count=10000
lvs -a -o +cache_total_blocks,cache_used_blocks,cache_dirty_blocks
9. Repeating step 8 over time will show that the dirty blocks are not being
written back at all
lvconvert --uncache cached_vg/filestore01
11. You will get a repeating message. This will loop indefinitely and not
decrease or complete:
Flushing x blocks for cache cached_vg/filestore01.
After testing multiple times, the issue seems to be tied to the chunk size
selected in step 5. The LVM man page mentions that the chunk must be a
multiple of 32KiB, however the next chunk size automatically assigned over
1MiB is usually 1.03MiB. With a chunk size of 1.03MiB or higher, the cache is
not able to flush. Creating a cache device with a chunk size of 1MiB or less,
the cache is flushable.
Now knowing how to avoid the issue, I just need to be able to safely un-cache
systems which do have a cache that will not flush.
LVM version:     2.02.171(2)-RHEL7 (2017-05-03)
  Library version: 1.02.140-RHEL7 (2017-05-03)
  Driver version:  4.35.0
What is the kernel version and Linux distro in use?
Post by Ryan Launchbury
- Dell R730XD server
- 12x disk in RAID 6 to onboard PERC/Megaraid controller
-Dell R630 server
-60x Disk (6 luns) in RAID 6 to PCI megaraid controller
The systems are currently in production, so it's quite hard for me to change
the configuration to enable logging.
Any assistance would be much appreciated! If any more info is needed please
let me know.
Hi

Aren't there any kernel write errors in your 'dmesg'?
The LV becomes fragile if the devices associated with the cache are having HW
issues (disk read/write errors).

Zdenek
Ryan Launchbury
2018-06-20 11:10:02 UTC
Permalink
Hi Zdenek,

Kernel is: Linux 3.10.0-693.21.1.el7.x86_64
Distro is: CentOS 7 - Linux release 7.4.1708
Post by Zdenek Kabelac
Post by Ryan Launchbury
Hello,
I'm having a problem uncaching logical volumes when the cache data
chunk size is over 1MiB.
The process I'm using to uncache is: lvconvert --uncache vg/lv
The issue occurs across multiple systems with different hardware and
different versions of LVM.
1. Create origin VG & LV
2. Add cache device over 1TB to the origin VG
lvcreate -n cachedata -L 1770GB cached_vg /dev/nvme0n1
lvcreate -n cachemeta -L 1770MB cached_vg /dev/nvme0n1
lvconvert --type cache-pool --cachemode writethrough --poolmetadata
cached_vg/cachemeta cached_vg/cachedata
lvconvert --type cache --cachepool cached_vg/cachedata
cached_vg/filestore01
dd if=/dev/zero of=/mnt/filestore01/test.dat bs=1M count=10000
lvs -a -o +cache_total_blocks,cache_used_blocks,cache_dirty_blocks
9. Repeating step 8 over time will show that the dirty blocks are not being
written back at all
lvconvert --uncache cached_vg/filestore01
11. You will get a repeating message. This will loop indefinitely and not
decrease or complete:
Flushing x blocks for cache cached_vg/filestore01.
After testing multiple times, the issue seems to be tied to the chunk
size selected in step 5. The LVM man page mentions that the chunk
must be a multiple of 32KiB, however the next chunk size
automatically assigned over 1MiB is usually 1.03MiB. With a chunk
size of 1.03MiB or higher, the cache is not able to flush. Creating a
cache device with a chunk size of 1MiB or less, the cache is flushable.
Now knowing how to avoid the issue, I just need to be able to safely
un-cache systems which do have a cache that will not flush.
LVM version: 2.02.171(2)-RHEL7 (2017-05-03)
Library version: 1.02.140-RHEL7 (2017-05-03)
Driver version: 4.35.0
What is the kernel version and Linux distro in use?
Post by Ryan Launchbury
- Dell R730XD server
- 12x disk in RAID 6 to onboard PERC/Megaraid controller
-Dell R630 server
-60x Disk (6 luns) in RAID 6 to PCI megaraid controller
The systems are currently in production, so it's quite hard for me to
change the configuration to enable logging.
Any assistance would be much appreciated! If any more info is needed
please let me know.
Hi
Aren't there any kernel write errors in your 'dmesg'?
LV becomes fragile if the devices associated with the cache are having HW
issues (disk read/write errors)
Zdenek
Nope, no write errors in /var/log/dmesg. The last log entry was at
10.871493 and the system has been on for 61 days.

Best regards,
Ryan
Gionatan Danti
2018-06-22 18:13:24 UTC
Permalink
Post by Zdenek Kabelac
Hi
Aren't there any kernel write errors in your 'dmesg'?
LV becomes fragile if the devices associated with the cache are having HW
issues (disk read/write errors)
Zdenek
Is that true even when using a writethrough cache mode?
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: ***@assyoma.it - ***@assyoma.it
GPG public key ID: FF5F32A8
Zdenek Kabelac
2018-06-22 20:07:11 UTC
Permalink
Post by Gionatan Danti
Post by Zdenek Kabelac
Hi
Aren't there any kernel write errors in your 'dmesg'?
LV becomes fragile if the devices associated with the cache are having HW
issues (disk read/write errors)
Zdenek
Is that true even when using a writethrough cache mode?
With writethrough, all writes are first committed to the 'origin' disk
before they are acknowledged back to the writing application, so the cache
can be thrown away at any time.

When the cache experiences a write error, it becomes invalidated and needs
to be dropped, but this is not automated at the moment, so admin work is
needed to handle this task.
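
For a writethrough cache the admin work is essentially just detaching the
cache. A rough sketch (illustrative vg/lv names, not checked against your
exact version):

# detach the cache but keep the cache-pool LV around
lvconvert --splitcache vg/lv
# or detach the cache and delete the cache-pool LV in one step
lvconvert --uncache vg/lv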

Regards

Zdenek
Gionatan Danti
2018-06-23 10:09:21 UTC
Permalink
Post by Zdenek Kabelac
When the cache experiences a write error, it becomes invalidated and needs
to be dropped, but this is not automated at the moment, so admin work is
needed to handle this task.
So, if a writethrough cache experiences write errors but the
administrator is not able to immediately intervene to drop the cache,
what problems can arise? Stale reads? Slow performance?

What about a cache *read* error? Is the read simply redirected to the
underlying slow/main volume?

Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: ***@assyoma.it - ***@assyoma.it
GPG public key ID: FF5F32A8
Ryan Launchbury
2018-06-24 19:18:49 UTC
Permalink
Hi Zdenek and Gionatan,

Thanks for your replies.
Something else of note: on the systems which are unable to flush the cache,
data is still being written to the origin LV somehow, because there is
200TB of data in the LV but the cache is only 1.8TB, so somehow it is
working. However, when running any command to flush the cache or to uncache
it, the operation is unable to complete.

What sort of admin work needs to be done/can be done to force the flush and
remove the cache?
I've tried the cleaner policy; however, it doesn't seem to flush anything.
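
For reference, the cleaner-policy attempt was along these lines (a sketch
using the names from my earlier steps):

# switch the cache to the cleaner policy so it should write everything back
lvchange --cachepolicy cleaner cached_vg/filestore01
# watch the dirty block count, which should drop towards zero
lvs -a -o +cache_total_blocks,cache_used_blocks,cache_dirty_blocks cached_vg
dmsetup status cached_vg-filestore01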

In testing, forcibly removing the cache via editing the LVM config file has
caused extensive XFS filesystem corruption, even when backing up the
metadata first and restoring it after the cache device is missing. Any
advice on how to safely uncache the volume would be massively appreciated.
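
For clarity, by "backing up the metadata first and restoring" I mean
something along these lines (illustrative file name; given the corruption it
caused here, this is not a recommendation):

# save the current VG metadata before touching anything
vgcfgbackup -f /root/cached_vg.pre-uncache cached_vg
# ... forcibly remove the cache from the metadata / the VG ...
# restore the saved metadata afterwards (may need --force on some versions)
vgcfgrestore -f /root/cached_vg.pre-uncache cached_vg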

Please let me know if you need any more logs or data.
Best regards,
Ryan
Post by Gionatan Danti
Post by Zdenek Kabelac
When the cache experiences a write error, it becomes invalidated and needs
to be dropped, but this is not automated at the moment, so admin work is
needed to handle this task.
So, if a writethrough cache experiences write errors but the
administrator is not able to immediately intervene to drop the cache,
what problems can arise? Stale reads? Slow performance?
What about a cache *read* error? Is the read simply redirected to the
underlying slow/main volume?
Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
GPG public key ID: FF5F32A8
Gionatan Danti
2018-06-25 17:19:01 UTC
Permalink
Post by Ryan Launchbury
In testing, forcibly removing the cache, via editing the LVM config
file has caused extensive XFS filesystem corruption, even when backing
up the metadata first and restoring after the cache device is missing.
Any advice on how to safely uncache the volume would be massively
appreciated.
It is my understanding that a writethrough cache should *never* hold any
data that is not already on the backing volume.
In other words, forcibly removing a writethrough cache (i.e. disconnecting
the physical cache device) should not cause any harm to the
filesystem/data.
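
Before doing anything destructive it would be worth confirming that there
are really no dirty blocks, e.g. something like this (reusing the report
fields from earlier in the thread):

lvs -a -o +cache_total_blocks,cache_used_blocks,cache_dirty_blocks cached_vg
# dmsetup status also reports the dirty count and the writethrough/writeback feature
dmsetup status cached_vg-filestore01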

Can you show the output of "dmsetup table"?
Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: ***@assyoma.it - ***@assyoma.it
GPG public key ID: FF5F32A8
Gionatan Danti
2018-06-25 17:40:38 UTC
Permalink
Post by Ryan Launchbury
Hi Gionatan,
The system with the issue is with writeback cache mode enabled.
Best regards,
Ryan
Ah, I was under the impression that it was a writethrough cache.
Sorry for the noise.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: ***@assyoma.it - ***@assyoma.it
GPG public key ID: FF5F32A8
Ryan Launchbury
2018-07-18 14:25:10 UTC
Permalink
Hi all,

Does anyone have any other ideas or potential workarounds for this issue?

Please let me know if you require more info.

Best regards,
Ryan
Post by Gionatan Danti
Post by Ryan Launchbury
Hi Gionatan,
The system with the issue is with  writeback cache mode enabled.
Best regards,
Ryan
Ah, I was under the impression that it was a writethrough cache.
Sorry for the noise.
Douglas Paul
2018-07-18 14:58:10 UTC
Permalink
Post by Ryan Launchbury
Does anyone have any other ideas or potential workarounds for this issue?
Please let me know if you require more info.
I didn't see this in the previous messages, but have you tried temporarily
remounting the filesystems read-only if you can't unmount them?
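
If they cannot be unmounted, the remount would be something like this (mount
point taken from the dd example earlier in the thread):

mount -o remount,ro /mnt/filestore01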

Also, earlier you mentioned looking at /var/log/dmesg for write errors. On
my systems, that file only contains a snapshot taken at boot; I have to run
dmesg to see the latest messages. It seems suspicious to have nothing in
dmesg during 61 days beyond the first 10 seconds after system boot ...
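
To look at the live kernel ring buffer rather than the boot snapshot,
something like:

dmesg -T | grep -iE 'error|fail|cache'
# or, on systemd-based systems such as CentOS 7:
journalctl -k | grep -i error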
--
Douglas Paul
Ryan Launchbury
2018-06-22 19:22:10 UTC
Permalink
Hi Gionatan,

My development system is out on rental at the moment. I'll check that for
you as soon as I can.

Best regards,
Ryan
Post by Gionatan Danti
Post by Zdenek Kabelac
Hi
Aren't there any kernel write errors in your 'dmesg'?
LV becomes fragile if the devices associated with the cache are having HW
issues (disk read/write errors)
Zdenek
Is that true even when using a writethrough cache mode?
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
GPG public key ID: FF5F32A8
--
*Ryan Launchbury*

System Engineer
magenta broadcast



magenta.tv

mob: 07939 276 897
support: 020 8050 1920

office: 020 8050 1080