Dec 2 07:09:46 vm1 LVM(vg)[1278]: ERROR: connect() failed on local socket: No such file or directory Internal cluster locking initialisation failed. WARNING: Falling back to local file-based locking. Volume Groups with the clustered attribute will be inaccessible. Reading all physical volumes. This may take a while... Found volume group "vm1-vg" using metadata type lvm2 Skipping clustered volume group vg
Dec 2 07:09:46 vm1 LVM(vg)[1278]: ERROR: connect() failed on local socket: No such file or directory Internal cluster locking initialisation failed. WARNING: Falling back to local file-based locking. Volume Groups with the clustered attribute will be inaccessible. Skipping clustered volume group vg
Dec 2 07:09:46 vm1 crmd[1075]: notice: process_lrm_event: LRM operation vg_start_0 (call=22, rc=1, cib-update=26, confirmed=true) unknown error
Dec 2 07:09:46 vm1 crmd[1075]: warning: status_from_rc: Action 23 (vg_start_0) on vm1 failed (target: 0 vs. rc: 1): Error
Dec 2 07:09:46 vm1 crmd[1075]: warning: update_failcount: Updating failcount for vg on vm1 after failed start: rc=1 (update=INFINITY, time=1480658986)
Dec 2 07:09:46 vm1 attrd[1073]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-vg (INFINITY)
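The "Skipping clustered volume group vg" messages above mean the VG carries the
clustered flag while clvmd cannot be reached. A quick way to check both sides
(just a sketch; the VG name "vg" is taken from the log):

  vgs -o vg_name,vg_attr vg
  # the 6th character of vg_attr is 'c' for a clustered VG
  grep -E '^[[:space:]]*locking_type' /etc/lvm/lvm.conf
  # locking_type = 3 means LVM expects cluster locking via clvmd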
If I do a cleanup of the resource, it is started.
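For reference, the cleanup that clears the INFINITY failcount and lets
Pacemaker retry the start is roughly the following (resource name "vg" as in
the logs; pcs only if that shell is installed):

  pcs resource cleanup vg
  # or, with Pacemaker's own tool:
  crm_resource --cleanup --resource vg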
Syslog attached.
Any help is greatly appreciated.
Thank you.
Sent: Mon 28 November 2016 23:14
Subject: Re: [linux-lvm] auto_activation_volume_list in lvm.conf not honored
Post by Zdenek Kabelac
Post by Stefan Bauer
Hi Peter,
as I said, we have a master/slave setup _without_ concurrent writes/reads,
so I do not see a reason why I should take care of locking, as only one
node activates the volume group at any given time.
That should be fine - right?
Nope, it's not.
Every operation, e.g. an activation, DOES validation of all resources and
takes ACTION when something is wrong.
Sorry, but there is NO way to do this properly without a locking manager.
Many lvm2 users try to be 'innovative' and use lvm2 in a lock-less
way - this seems to work most of the time, until the moment some
disaster happens - and then lvm2 gets blamed for the data loss...
Interestingly, they never stop to think about why we invested so much time
into a locking manager when, in their eyes, there is such an 'easy fix'...
IMHO lvmlockd is a relatively low-resource/overhead solution worth
exploring if you don't like clvmd...
Stefan, as Zdenek points out, even reading VGs on shared storage is not
entirely safe, because lvm may attempt to fix/repair things on disk while
it is reading (this becomes more likely if one machine reads while another
is making changes). Using some kind of locking or clustering (lvmlockd or
clvm) is a solution.
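Very roughly, and glossing over distribution-specific service handling, the
two options come down to these lvm.conf settings (a sketch, not a full setup
guide):

  # clvmd (classic cluster locking):
  #   global { locking_type = 3 }   plus clvmd running on every cluster node
  # lvmlockd (newer shared-VG locking):
  #   global { use_lvmlockd = 1 }   plus lvmlockd and a lock manager
  #   (sanlock or dlm) running, and the VG created as a shared VG, e.g.:
  vgcreate --shared vg /dev/sdX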
Another fairly new option is to use "system ID", which assigns one host as
the owner of the VG. This avoids the problems mentioned above with
reading->fixing. But system ID on its own cannot be used dynamically.
If you want to fail the VG over between hosts, the system ID needs to be
changed, and this needs to be done carefully (e.g. by a resource manager
or something that takes fencing into account,
https://bugzilla.redhat.com/show_bug.cgi?id=1336346#c2)
Also https://www.redhat.com/archives/linux-lvm/2016-November/msg00022.html
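As a minimal sketch of the system ID approach (assuming system_id_source is
set to "uname" in lvm.conf; "vg" is the VG name from this thread):

  # in /etc/lvm/lvm.conf on each host:
  #   global { system_id_source = "uname" }
  lvm systemid
  # prints this host's system ID
  vgchange --systemid <this host's system ID> vg
  # assigns ownership to this host; the VG then shows up as foreign on the
  # other hosts, and moving it must be coordinated (fencing!) as noted above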
Dave