Eric Ren
2017-12-28 10:42:07 UTC
Hi David,
I see there is a limitation on resizing an LV that is active on multiple nodes.
"""
limitations of lockd VGs
...
* resizing an LV that is active in the shared mode on multiple hosts
"""
This seems like a big limitation when using lvmlockd in a cluster:
"""
c1-n1:~ # lvresize -L-1G vg1/lv1
WARNING: Reducing active logical volume to 1.00 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce vg1/lv1? [y/n]: y
LV is already locked with incompatible mode: vg1/lv1
"""
Node "c1-n1" is the last node having vg1/lv1 active on it.
Can we change the lock mode from "shared" to "exclusive" so that
lvresize can proceed without having to deactivate the LV on the last node?
Having to deactivate the LV on all nodes in order to resize reduces
availability. Is there a plan to eliminate this limitation in the
near future?
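For reference, the workaround I am trying to avoid looks roughly like the
following; this is only a sketch, and it assumes the lvmlockd-style
activation flags (-aey for exclusive, -asy for shared) apply to this VG:
"""
# On every node except c1-n1: deactivate the LV to drop its shared lock.
lvchange -an vg1/lv1

# On c1-n1: re-activate exclusively so lvmlockd grants an "ex" lock.
# (Whether this step works without first deactivating on c1-n1 too is
# exactly the question above.)
lvchange -aey vg1/lv1

# Resize while holding the exclusive lock.
lvresize -L-1G vg1/lv1

# Return to shared activation; other nodes can then re-activate with -asy.
lvchange -asy vg1/lv1
"""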
Regards,
Eric