Now I think I've figured out what's going on. It seems to be
Debian/Ubuntu-specific, but I'm posting here in case Debian/Ubuntu
devs are around and see this.
Bug report:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1396213
What actually makes the VG activation take so long is that I have a
snapshot. Activating the snapshot takes very long, and bringing up the
entire VG takes about 5 minutes. This wouldn't be such a big problem,
as I could just patiently wait for the activation (with rootdelay).
But it seems something (maybe some kind of watchdog) kills vgchange
before it can finish bringing up all VGs. I happened to boot a
development Vivid build, and I saw some 'watershed' messages stating
that 'vgchange' was killed because it was taking "too long". If
'vgchange' were allowed to finish properly, the 2nd VG, which contains
my root FS, would be activated correctly.
It's a server. If it has a long boot time, so be it; it doesn't get
rebooted often under normal circumstances anyway. But it is required
to boot up without user interaction, e.g., when I issue a reboot
remotely. The main problem is that currently, user interaction is
necessary to get past the initrd (as the root VG needs to be manually
activated), which means I can only reboot the server when I'm
physically present.
Post by MegaBrutal
No, I don't have such a kernel option.
But previously it was working without that. Aren't all volume groups
supposed to auto-activate, unless I set otherwise in lvm.conf?
I'll try this kernel option, however.
I have a "rootdelay" set to make initrd wait longer for the boot
device. With previous kernels, it worked, but now no matter how long I
set this value, the VG never activates. It only activates when I
manually activate it from the initrd prompt.
Post by Daniel Savard
What are your kernel boot options? Do you specify the VGs you wish to
be activated at boot time there?
I have one entry like this one for each VG: rd.lvm.vg=vgname
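For reference, such entries would typically live in the GRUB kernel command line. Here is a hypothetical /etc/default/grub excerpt combining one rd.lvm.vg= entry per VG with a rootdelay (the VG names are taken from this thread; the rootdelay value is just an example). Note that rd.lvm.vg= is a dracut option, and Ubuntu's initramfs-tools may not honour it:

```shell
# Hypothetical /etc/default/grub excerpt: one rd.lvm.vg= entry per VG,
# plus a longer rootdelay so the initrd waits for slow devices.
# rd.lvm.vg= is honoured by dracut-based initrds; initramfs-tools
# (Debian/Ubuntu) may ignore it.
GRUB_CMDLINE_LINUX="rd.lvm.vg=vmhost-vg rd.lvm.vg=vmdata-vg rootdelay=60"
# After editing, regenerate the config, e.g. with update-grub
# on Debian/Ubuntu.
```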
-----------------
Daniel Savard
Post by MegaBrutal
Post by Peter Rajnoha
Post by Peter Rajnoha
What's the exact lvm2 version used (lvm --version)?
LVM version: 2.02.98(2) (2012-10-15)
Library version: 1.02.77 (2012-10-15)
Driver version: 4.27.0
Is lvmetad enabled in your setup? (global/use_lvmetad=1 setting
in lvm.conf and lvmetad daemon running?)
use_lvmetad = 0
No such daemon is running.
This means that LV autoactivation is not enabled in that case either
(as it depends on lvmetad being active), and there must be a direct
call for the activation (vgchange/lvchange -ay/-aay).
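As a side note, here is a self-contained sketch of checking that setting. It parses a sample lvm.conf-style fragment written to a temp file so it runs anywhere; on a real system you would read /etc/lvm/lvm.conf and also check whether the lvmetad daemon is running (e.g. with pidof lvmetad):

```shell
# Sketch: extract the use_lvmetad value from an lvm.conf-style file.
# A sample fragment is used here so the snippet is self-contained;
# on a real system, point it at /etc/lvm/lvm.conf instead.
conf=$(mktemp)
cat > "$conf" <<'EOF'
global {
    use_lvmetad = 0
}
EOF
# Strip everything but the digits from the matching line.
value=$(awk '/use_lvmetad/ { gsub(/[^0-9]/, "", $0); print }' "$conf")
echo "use_lvmetad=$value"
rm -f "$conf"
```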
However, most distributions do not use lvmetad in initrd anyway
(the only one I know of at the moment is Arch Linux). As such, I think
this is a problem with the distribution's initrd: it does not wait
properly for all PVs to show up and calls LV activation prematurely.
I'd report your issue to your distribution's initrd component, as each
distribution uses its own initrd scheme (I could help you with Fedora's
dracut initrd, but I don't have insight into Debian's/Ubuntu's initrd
scheme).
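To illustrate the "waiting properly" point, here is a hypothetical retry loop an initrd script could use instead of a single early activation call. activate_vg is a stand-in for "lvm vgchange -aay --sysinit vmhost-vg" (not Ubuntu's actual script), stubbed to succeed on the third poll so the sketch is self-contained and runnable:

```shell
# Hypothetical initrd-style retry loop: poll for VG activation
# instead of calling vgchange once and giving up. activate_vg is a
# stub for "lvm vgchange -aay --sysinit vmhost-vg"; here it succeeds
# on the third attempt, simulating PVs that appear late.
attempts=0
activate_vg() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]   # pretend the PVs show up on the 3rd poll
}
deadline=10   # seconds; a real initrd might derive this from rootdelay
elapsed=0
until activate_vg; do
    if [ "$elapsed" -ge "$deadline" ]; then
        echo "timed out waiting for VG" >&2
        break
    fi
    sleep 1
    elapsed=$((elapsed + 1))
done
echo "activated after $attempts attempts"
```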
Post by Peter Rajnoha
Does it activate when you run vgchange -aay vmdata-vg vmhost-vg
directly on the busybox cmd line?
The exact command I used to use in the BusyBox prompt is
lvm vgchange -ay vmhost-vg
Or, if I remember correctly, it activates simply by
lvm vgchange -ay
as well.
Then I exit the BusyBox prompt, and the boot process continues correctly.
My root FS is in vmhost-vg, and I have no idea why it doesn't come up
automatically.
Yeah, it all points to premature vgchange call in initrd's script.
Please, report this in your distribution's bug tracking system if
possible.
Thanks for the advice!
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1396213