[linux-lvm] auto umount at thin meta 80%
Xen
2016-05-22 17:07:26 UTC
I am not sure whether, after my recent posts ;-), I would still be
allowed to write here. But perhaps it is important regardless.

I have LVM nested inside LVM. The outer layer is a cached LVM, that is
to say, two volumes are cached from a different PV. One of the cached
volumes contains LUKS.

The LUKS, when opened, contains another PV, VG, and a thin pool.

The thin pool contains about 4-5 thin volumes and is overprovisioned.
Only one volume is in use right now.

This thin volume called "Store" is almost the full size of the thin
pool, but not quite:

store vault Vwi-aotz-- 400.00g thin 40.37

It currently stores a backup, I was writing some backup scripts.

The volume apparently got unmounted when the metadata of the thin pool
reached 80%:

10:47:21 lvm[4657]: WARNING: Thin pool vault-thin-tpool metadata is now
80.14% full.
10:47:21 lvm[4657]: Request to lookup VG vault in lvmetad gave response
Connection reset by peer.
10:47:21 lvm[4657]: Volume group "vault" not found
10:47:21 lvm[4657]: Failed to extend thin pool vault-thin-tpool.
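
As far as I understand, dmeventd only attempts that extension because
pool monitoring is on, and whether it may actually grow the pool is
governed by the activation section of lvm.conf - something like the
following (the 70/20 values are only an illustration; the default
threshold of 100 disables autoextension entirely):

activation {
    # Try to extend the thin pool once usage crosses 70%...
    thin_pool_autoextend_threshold = 70
    # ...growing it by 20% of its current size each time.
    thin_pool_autoextend_percent = 20
}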

Not sure why discard is unsupported, though I guess I opened the LUKS
container without allowing discards:

[ 1033.399934] device-mapper: thin: Data device (dm-13) discard
unsupported: Disabling discard passdown.

[ 1171.656307] device-mapper: thin: 252:14 [ 14 == tpool ]: reached low
water mark for data device: sending event.

Suddenly my mounted filesystem was gone:

# mount | grep /mnt

....

And VIM reports:

E211: File "update.sh" no longer available

I still have open handles to it, so the thing must have been lazily
unmounted.
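
That would be consistent with a lazy unmount, which removes the mount
point from the namespace immediately but only releases the filesystem
once the last open handle goes away - the manual equivalent being
something like this (the mount point is just mine, for illustration):

# Lazy unmount: detach the name now, release once handles close.
umount -l /mnt/store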

The metadata LV was created (automatically) at only 112M:

thin vault twi-aotz-- 437.05g
[thin_tdata] vault Twi-ao---- 437.05g
[thin_tmeta] vault ewi-ao---- 112.00m

112M seems low for metadata if a proportion of 1:1000 (metadata to
data) is usually required - wouldn't a 437G pool then want about 437M?
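
For what it's worth, the metadata fill level can be watched directly;
something like this should report the tmeta percentage next to the data
percentage (VG name is mine, of course):

# metadata_percent is the figure that hit 80.14% in the log above.
lvs -a -o +metadata_percent,data_percent vault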

There are hardly any files on the volume(s), just a few archives
currently totaling 160G.

This is LVM version 2.02.133(2) (2015-10-30) / 1.02.110 (2015-10-30)

The instant umount at 80% seems rather premature.

I will recreate the volume and try to give it a larger metadata volume.

I'm not sure whether 0.1% of the data size will be enough, given the
above, but we'll see.
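
Roughly what I intend to try - names and sizes are just my plan here,
with 448M being about 4x the old 112M and close to 0.1% of 437G:

# Recreate the pool with an explicitly sized metadata LV:
lvcreate --type thin-pool -L 437g --poolmetadatasize 448m -n thin vault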

Regards.
Zdenek Kabelac
2016-05-23 08:40:09 UTC
Post by Xen
The volume apparently got unmounted when the metadata of the thin pool
reached 80%:
10:47:21 lvm[4657]: WARNING: Thin pool vault-thin-tpool metadata is now 80.14%
full.
10:47:21 lvm[4657]: Request to lookup VG vault in lvmetad gave response
Connection reset by peer.
10:47:21 lvm[4657]: Volume group "vault" not found
10:47:21 lvm[4657]: Failed to extend thin pool vault-thin-tpool.
At this moment, any failure of the 'lvresize' execution leads to an
immediate umount - the tool assumes things have gone badly wrong and
takes 'drastic' action to prevent further damage.

See here again:

https://www.redhat.com/archives/linux-lvm/2016-May/msg00064.html

ATM there is some new code which unconditionally connects to lvmetad
even when it's not necessary at all - it's a potential source of many
other troubles, so fixes here are in progress. Set 'use_lvmetad = 0' if
it's still a problem in your case.
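
That is, in /etc/lvm/lvm.conf (and then stop the daemon itself - on
systemd systems the units are typically lvm2-lvmetad.socket/.service):

global {
    # Bypass the lvmetad caching daemon; tools scan devices directly.
    use_lvmetad = 0
}

# e.g. on systemd:
# systemctl disable --now lvm2-lvmetad.socket lvm2-lvmetad.service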

Regards

Zdenek
Xen
2016-05-24 16:16:27 UTC
Post by Zdenek Kabelac
At this moment, any failure of the 'lvresize' execution leads to an
immediate umount - the tool assumes things have gone badly wrong and
takes 'drastic' action to prevent further damage.
I was more worried about why a default-sized (auto-created) metadata
volume would fill up almost completely after 3 hours of use.
Post by Zdenek Kabelac
https://www.redhat.com/archives/linux-lvm/2016-May/msg00064.html
That message is empty; there is nothing in it.

My computer automatically deletes messages such as these when they come
by.

It's just a failsafe against time wasting and all that.

Maybe LVM could learn from it, but then maybe it would delete volumes
too :p.

Regards lol.
Post by Zdenek Kabelac
ATM there is some new code which unconditionally connects to lvmetad
even when it's not necessary at all - it's a potential source of many
other troubles, so fixes here are in progress. Set 'use_lvmetad = 0' if
it's still a problem in your case.
Right. Well, I haven't been able to use my cache again - the SSD seems
faulty - and I haven't utilized the pool to the same extent yet. But
metadata usage is now at about 17% of the original value (I mean of
what was 100% back then); the new metadata volume is just 4x the size.
I can't have stuff like this happening, you know. I created it at 4x
the default size; maybe it will work for a while.
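
If it runs low again, I understand the metadata LV can also be grown in
place instead of recreating everything - something along these lines
(the size is arbitrary):

# Grow only the pool's metadata LV, while the pool stays active:
lvextend --poolmetadatasize +448m vault/thin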

Regards.
