Damon Wang
2018-09-25 10:18:53 UTC
Hi,
AFAIK once sanlock cannot access the lease storage, it sends
"kill_vg" to lvmlockd, and the standard procedure is then to deactivate
the logical volumes in that VG and drop the VG lockspace.
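If I understand that correctly, the expected response would be roughly
the following (the VG name here is just a placeholder, so please correct
me if the steps are wrong):

  # deactivate all LVs in the affected shared VG
  lvchange -an <vg_name>
  # then tell lvmlockd to drop the sanlock lockspace for that VG
  lvmlockctl --drop <vg_name>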
But sometimes the storage recovers after kill_vg (and before we
deactivate the volumes or drop the lockspace), and then lvm commands
print "storage failed for sanlock leases", like this:
[***@dev1-2 ~]# vgck 71b1110c97bd48aaa25366e2dc11f65f
WARNING: Not using lvmetad because config setting use_lvmetad=0.
WARNING: To avoid corruption, rescan devices to make changes visible
(pvscan --cache).
VG 71b1110c97bd48aaa25366e2dc11f65f lock skipped: storage failed for
sanlock leases
Reading VG 71b1110c97bd48aaa25366e2dc11f65f without a lock.
So what should I do to recover from this, preferably without affecting
volumes that are in use?
I found a way, but it seems very tricky: save the "lvmlockctl -i"
output, run "lvmlockctl -r <vg_name>", and then reactivate the volumes
according to the saved output.
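Roughly, the steps I use look like this (VG/LV names are just examples,
and I am not sure every step is correct):

  # save which LVs were active before dropping the lockspace
  lvmlockctl -i > /tmp/lvmlockctl.info
  # force lvmlockd to drop the failed sanlock lockspace for the VG
  lvmlockctl -r <vg_name>
  # once the storage is back, restart the lockspace
  vgchange --lock-start <vg_name>
  # then reactivate each LV listed in the saved output, e.g.
  lvchange -ay <vg_name>/<lv_name>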
Do we have an "official" way to handle this? It is pretty common that
by the time I notice lvmlockd has failed, the storage has already
recovered.
Thanks,
Damon Wang