Discussion:
[linux-lvm] Distributed Locking of LVM
Kalyana sundaram
2017-08-28 12:41:13 UTC
Permalink
We have a shared iSCSI disk attached to all our boxes, with one VG. We use
clvm to manage locks for create, extend, activate, delete, and list operations.
CLVM is tedious to manage, especially because a reboot is needed if locking
gets stuck somewhere. Instead we are thinking of modifying file_locking.c to
take the lock through a distributed store like ZooKeeper or Redis: on success
run the command, on failure back off and retry.
The external locking code would sit in _file_lock_resource, under case LCK_VG.
Do you think this is risky? Is there some other way people handle it?
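To make the idea concrete, here is a rough sketch of the lock-with-backoff loop we have in mind. The store below is an in-memory stand-in: with real Redis this would be SET key token NX PX, with ZooKeeper an ephemeral znode. All names (FakeStore, lock_vg, the "lock:vg:" key prefix) are illustrative, not anything from the lvm2 source.

```python
import time
import uuid

class FakeStore:
    """In-memory stand-in for a distributed store with SET-NX-plus-TTL semantics."""
    def __init__(self):
        self._locks = {}  # lock name -> (owner token, expiry time)

    def set_nx_px(self, key, token, ttl_ms):
        now = time.monotonic()
        held = self._locks.get(key)
        if held and held[1] > now:
            return False  # someone else holds a live lock
        self._locks[key] = (token, now + ttl_ms / 1000.0)
        return True

    def delete_if_token(self, key, token):
        held = self._locks.get(key)
        if held and held[0] == token:
            del self._locks[key]  # only the owner may release

def lock_vg(store, vg_name, ttl_ms=10000, retries=5, base_delay=0.05):
    """Try to take the per-VG lock; back off exponentially between attempts."""
    token = uuid.uuid4().hex  # unique token so only the owner can unlock
    delay = base_delay
    for _ in range(retries):
        if store.set_nx_px("lock:vg:" + vg_name, token, ttl_ms):
            return token
        time.sleep(delay)
        delay *= 2
    return None  # gave up; caller should report the command failed

def unlock_vg(store, vg_name, token):
    store.delete_if_token("lock:vg:" + vg_name, token)
```

The TTL is what makes the stuck-lock case recoverable without a reboot: a crashed holder's lock simply expires. The hard part, as noted elsewhere in this thread, is that an expired-but-still-running holder must also be fenced.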
--
Kalyanasundaram
http://blogs.eskratch.com/
https://github.com/kalyanceg/
Digimer
2017-08-29 07:05:17 UTC
Permalink
Post by Kalyana sundaram
We have a shared iSCSI disk attached to all our boxes, with one VG. We use
clvm to manage locks for create, extend, activate, delete, and list operations.
CLVM is tedious to manage, especially because a reboot is needed if locking
gets stuck somewhere. Instead we are thinking of modifying file_locking.c
to take the lock through a distributed store like ZooKeeper or Redis: on
success run the command, on failure back off and retry.
The external locking code would sit in _file_lock_resource, under case LCK_VG.
Do you think this is risky? Is there some other way people handle it?
I think you're trying to work around a problem instead of solving it.
The only time locking blocks with DLM is when a node entered an unknown
state and fencing wasn't set up or failed (in which case hanging is the
least bad option available).

A lot of thought and design went into DLM / clvmd. I would be cautious
trying to reinvent a wheel this complex. Instead, do you have any
indication why blocking occurs? Do you have fencing set up and working?
What cluster stack/version/OS are you using?

You might want to post on the clusterlabs list, too.

Users: http://lists.clusterlabs.org/mailman/listinfo/users
Developers: http://lists.clusterlabs.org/mailman/listinfo/developers
--
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of
Einstein’s brain than in the near certainty that people of equal talent
have lived and died in cotton fields and sweatshops." - Stephen Jay Gould
Zdenek Kabelac
2017-08-29 09:25:19 UTC
Permalink
We have a shared iSCSI disk attached to all our boxes, with one VG. We use
clvm to manage locks for create, extend, activate, delete, and list operations.
CLVM is tedious to manage, especially because a reboot is needed if locking
gets stuck somewhere. Instead we are thinking of modifying file_locking.c to
take the lock through a distributed store like ZooKeeper or Redis: on success
run the command, on failure back off and retry.
The external locking code would sit in _file_lock_resource, under case LCK_VG.
Do you think this is risky? Is there some other way people handle it?
Hi

Have you considered using 'lvmlockd' and its sanlock interface
(where locks are maintained on the shared storage device)?

Regards

Zdenek
David Teigland
2017-08-29 16:23:40 UTC
Permalink
Post by Zdenek Kabelac
Post by Kalyana sundaram
We have a shared iSCSI disk attached to all our boxes, with one VG. We use
clvm to manage locks for create, extend, activate, delete, and list operations.
CLVM is tedious to manage, especially because a reboot is needed if locking
gets stuck somewhere. Instead we are thinking of modifying file_locking.c
to take the lock through a distributed store like ZooKeeper or Redis: on
success run the command, on failure back off and retry.
The external locking code would sit in _file_lock_resource, under case LCK_VG.
Do you think this is risky? Is there some other way people handle it?
Hi
Have you considered using 'lvmlockd' and its sanlock interface
(where locks are maintained on the shared storage device)?
And lvmlockd can use different lock managers. Currently it supports
sanlock (where the locks are kept in the VG's storage and networking is
not used), or the dlm (which uses the network and requires corosync). You
could also try adding another lock manager option to lvmlockd.

See here for more info:
http://man7.org/linux/man-pages/man8/lvmlockd.8.html

I think people will find lvmlockd works much better than clvm.
https://www.redhat.com/archives/linux-lvm/2016-November/msg00022.html
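For reference, a rough outline of a sanlock-backed lvmlockd setup on the shared iSCSI disk might look like the following. This is a sketch, not a verified recipe; device paths and systemd unit names are assumptions, so check lvmlockd(8) for the exact steps on your distro.

```shell
# 1. In /etc/lvm/lvm.conf, enable lvmlockd:
#      use_lvmlockd = 1
# 2. Start the daemons on every host (unit names vary by distro):
systemctl start sanlock lvmlockd
# 3. Create a shared VG on the iSCSI disk (device path is an example);
#    with sanlock, the locks live inside the VG itself:
vgcreate --shared vg0 /dev/sdb
# 4. On each host, start the VG's lockspace before using it:
vgchange --lock-start vg0
# 5. Activate LVs in shared or exclusive mode:
lvchange -asy vg0/lv0   # shared; use -aey for exclusive
```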
Kalyana sundaram
2017-08-31 06:32:11 UTC
Permalink
Thanks, everyone.
I understand reboot/fencing is mandatory.
I hope visibility might be better with an external locking tool like Redis.
With lvmlockd I find no .deb available for Ubuntu, and documentation for
handling clvm issues is difficult to find.
Could somebody point me to good documentation on clvm?
Let me compile lvmlockd and give it a shot.
Post by David Teigland
Post by Zdenek Kabelac
Post by Kalyana sundaram
We have a shared iSCSI disk attached to all our boxes, with one VG. We use
clvm to manage locks for create, extend, activate, delete, and list operations.
CLVM is tedious to manage, especially because a reboot is needed if locking
gets stuck somewhere. Instead we are thinking of modifying file_locking.c
to take the lock through a distributed store like ZooKeeper or Redis: on
success run the command, on failure back off and retry.
The external locking code would sit in _file_lock_resource, under case LCK_VG.
Do you think this is risky? Is there some other way people handle it?
Hi
Have you considered using 'lvmlockd' and its sanlock interface
(where locks are maintained on the shared storage device)?
And lvmlockd can use different lock managers. Currently it supports
sanlock (where the locks are kept in the VG's storage and networking is
not used), or the dlm (which uses the network and requires corosync). You
could also try adding another lock manager option to lvmlockd.
http://man7.org/linux/man-pages/man8/lvmlockd.8.html
I think people will find lvmlockd works much better than clvm.
https://www.redhat.com/archives/linux-lvm/2016-November/msg00022.html
--
Kalyanasundaram
http://blogs.eskratch.com/
https://github.com/kalyanceg/
Zdenek Kabelac
2017-08-31 08:08:47 UTC
Permalink
Post by Kalyana sundaram
Thanks, everyone.
I understand reboot/fencing is mandatory.
I hope visibility might be better with an external locking tool like Redis.
With lvmlockd I find no .deb available for Ubuntu, and documentation for
handling clvm issues is difficult to find.
Unfortunately, Debian-based distros are not 'the best fit' for anything
lvm2-related, and it's not lvm2's fault ;) and it's pretty hard to fix...

Maybe you can try some other distro?
Or perhaps build things from source?
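If building from source, getting lvmlockd support into an lvm2 build might look roughly like this. The repository URL and configure flag are taken from the lvm2 tree, but verify them with ./configure --help before relying on this sketch:

```shell
# Sketch of building lvm2 with sanlock-backed lvmlockd from source.
git clone https://sourceware.org/git/lvm2.git
cd lvm2
./configure --enable-lvmlockd-sanlock   # needs sanlock development headers
make
sudo make install
```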


Regards


Zdenek
John Stoffel
2017-08-31 19:29:16 UTC
Permalink
Post by Kalyana sundaram
Thanks, everyone.
I understand reboot/fencing is mandatory.
I hope visibility might be better with an external locking tool like Redis.
With lvmlockd I find no .deb available for Ubuntu, and documentation for
handling clvm issues is difficult to find.
Zdenek> Unfortunately, Debian-based distros are not 'the best fit' for
Zdenek> anything lvm2-related, and it's not lvm2's fault ;) and it's
Zdenek> pretty hard to fix...

Can you please elaborate on this? I use Debian distros by
preference. Is it because they've changed stuff to fit Debian
ideals too much?

John
Peter Rajnoha
2017-09-01 07:37:49 UTC
Permalink
Post by John Stoffel
Post by Kalyana sundaram
Thanks, everyone.
I understand reboot/fencing is mandatory.
I hope visibility might be better with an external locking tool like Redis.
With lvmlockd I find no .deb available for Ubuntu, and documentation for
handling clvm issues is difficult to find.
Zdenek> Unfortunately, Debian-based distros are not 'the best fit' for
Zdenek> anything lvm2-related, and it's not lvm2's fault ;) and it's
Zdenek> pretty hard to fix...
Can you please elaborate on this? I use Debian distros by
preference. Is it because they've changed stuff to fit Debian
ideals too much?
It's more about the attitude toward lvm2 package maintenance in Debian.

There were times when certain things didn't work quite well because
changes were made on top of the upstream release that were not correct,
causing various problems (e.g. Debian shipped modified udev rules for
some time which had issues, and even though upstream warned about these
issues right away, it took a while to fix them by dropping the extra
changes). Also, certain decisions were made for Debian's lvm2 package
in the past, like removing clvmd from the package altogether, without
taking into account that there might still be active users and without
properly providing an alternative (clvmd is now back).

So it's these issues and decisions that make lvm2 in Debian less
trustworthy, at least from upstream's perspective.

Things may have changed and improved these days, but upstream is now
very cautious about recommending Debian for lvm2 usage (which also
implies Ubuntu, as the packages there are somewhat inherited from
Debian).
--
Peter