Discussion:
[linux-lvm] Sorry to ask here ...
Georges Giralt
2017-02-23 19:25:26 UTC
Permalink
Hello !

I'm sorry to ask for help here, but I'm lost in the middle of nowhere ...

The context :

A PC whose UEFI firmware ("BIOS") is set to legacy boot only.

Three disks with two partitions each. The three first partitions form a
software RAID mirror (md0) used for /boot (ext4). The three second
partitions form another software mirror, and the resulting md1 is the
sole PV of a volume group vg0, which carries eight logical volumes. This
machine runs Ubuntu 16.04.2 (after many upgrades...). *

This setup has worked for years and has survived changes of disk sizes,
main board, etc.

Recently (I bet the cause is an upgrade of some packages) the machine
started refusing to boot. It drops to the initramfs shell because the
vg0 volume group is not activated. Launching lvm and running vgchange
-a y does the trick, and the system boots fine afterwards.
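Concretely, the manual workaround at the (initramfs) prompt is just a
couple of commands (shown for reference; they have to be typed on the
affected machine):

```shell
# At the (initramfs) busybox prompt: activate all volume groups by hand,
# then leave the shell so the normal boot sequence can continue.
lvm vgchange -a y
exit
```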

I've searched everywhere I could (maybe the setup was so rock solid that
I became lazy or careless) and tried regenerating the initramfs, with no
luck.

So any leads or advice would be greatly appreciated. Please refrain from
hitting me hard in the face; I'm already mourning...

Many thanks in advance for your help !


* : The three disks are there because, long ago, I used LVM's own
mirroring. I later switched to software RAID and kept the three disks as
I had them in the box.
--
"If a man empties his purse into his head, no man can take it away from him.
An investment in knowledge always pays the best interest"


Benjamin Franklin.
Xen
2017-02-23 20:23:23 UTC
Permalink
Post by Georges Giralt
Lastly, I bet the cause is an upgrade of some packages, the machine
refuses to boot. It drops onto initramfs shell because the vg0 volume
group is not activated. Launching lvm and doing a vgchange -a y does
the trick and the system boots fine afterwards.
I don't know what the cause is, but...


The /boot is on the same partition, so it should be no problem, right?

So in the initrd, the root is opened but it fails?

The quickest thing to do is to go to:

/usr/share/initramfs-tools/scripts/local-top

and edit the lvm2 script.

You must add a call to "/sbin/lvm vgchange -ay" at the bottom. If this
works, it will activate everything while still in the initrd.

Normally only root and swap are activated. I assume the mdX arrays will
be assembled just fine, but I am not sure. However, I think the chances
are high that it will work.

There is a call in there, I think, that activates only $ROOT and
$RESUME. Maybe it will work if you just run vgchange -ay.
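For illustration, the appended workaround would sit at the bottom of the
script roughly like this (a sketch only; the stock script is abbreviated
here, and its exact contents differ between releases):

```shell
#!/bin/sh
# Sketch of /usr/share/initramfs-tools/scripts/local-top/lvm2 with the
# workaround appended (abbreviated; not the verbatim Ubuntu script).

# ... stock script: activates only the LVs backing $ROOT and $RESUME ...

# Appended workaround: activate every volume group while still in the initrd.
/sbin/lvm vgchange -ay

exit 0
```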

If you can enter the system in a chroot and edit this file, and then
regenerate the initramfs, it will be included in the new initramfs.
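If the system will not come up on its own, one way to get that chroot is
from a live/rescue medium; a sketch, where the LV and mount point names
are assumptions, not the poster's actual layout:

```shell
# From a rescue system: assemble/activate, mount, chroot, and rebuild
# the initramfs. LV names below are examples only.
mdadm --assemble --scan            # bring up md0/md1 if not already assembled
vgchange -ay                       # activate vg0
mount /dev/vg0/root /mnt           # root LV (name is an assumption)
mount /dev/md0 /mnt/boot           # the /boot mirror
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt update-initramfs -u    # rebuild the initramfs with the edit
```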

If "lvm2" is upgraded this file will be overwritten.

I assume you can override it in
/etc/initramfs-tools/scripts/local-top/lvm2

You may need to copy lvm2 there and edit it in place; for that:

mkdir -p /etc/initramfs-tools/scripts/local-top

cp /usr/share/initramfs-tools/scripts/local-top/lvm2 \
   /etc/initramfs-tools/scripts/local-top/

But I am not 100% sure of that.
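Either way, the override only takes effect once the initramfs is
rebuilt; something like:

```shell
# Rebuild the initramfs so the edited hook is picked up, then check
# that the override copy actually made it into the image.
update-initramfs -u -k all
lsinitramfs /boot/initrd.img-"$(uname -r)" | grep local-top/lvm2
```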



In any case, that would definitely add a call to "vgchange -ay" to the
initramfs, so that the entire volume group can be activated before
control is passed to systemd.

Good luck.
Georges Giralt
2017-02-24 17:00:13 UTC
Permalink
Thank you Xen,

Your trick is working fine.

As far as I can tell, both md arrays are properly assembled during boot.

Now I have to analyze the boot and update logs to try to find the cause.
But the PC is booting fine. The wife won't complain. ;-) I'm happy ;-)

Thank you.
Post by Xen
Post by Georges Giralt
Lastly, I bet the cause is an upgrade of some packages, the machine
refuses to boot. It drops onto initramfs shell because the vg0 volume
group is not activated. Launching lvm and doing a vgchange -a y does
the trick and the system boots fine afterwards.
I don't know what is the cause, but...
The /boot is on the same partition so it should be no problem right.
In the initrd the root is opened but it fails?
/usr/share/initramfs-tools/scripts/local-top
And edit lvm2
You must add a call to "/sbin/lvm vgchange -ay" at the bottom. If this
works it will try to activate everything while still being in the initrd.
Normally only root and swap are activated. I assume the mdX array will
be assembled just fine, but I am not sure. However I think the chance
is high that it would work.
There is a call in there I think to activate only $ROOT and $RESUME.
Maybe it will work if you just vgchange -ay.
If you can enter the system in a chroot and edit this file, and then
regenerate the initramfs, it will be included in the new initramfs.
If "lvm2" is upgraded this file will be overwritten.
I assume you can override it in
/etc/initramfs-tools/scripts/local-top/lvm2
mkdir -p /etc/initramfs-tools/scripts/local-top
cp /usr/share/initramfs-tools/scripts/local-top/lvm2
/etc/initramfs-tools/scripts/local-top
But I am not 100% sure of that.
In any case that would definitely add a call to "vgchange -ay" to the
initramfs, so that the entire volume group can be activated prior to
passing control to systemd.
Good luck.
_______________________________________________
linux-lvm mailing list
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
Eric Ren
2017-03-02 05:44:13 UTC
Permalink
Hi,
Post by Georges Giralt
Hello !
I'm sorry to ask for help here, but I'm lost in the middle of nowhere ...
A PC hardware with an UEFI "BIOS" set to legacy boot only.
3 disks with 2 partitions each. The 3 first partitions are set in software mirror (md0)
and used for /boot (Ext4) .The second 3 partitions are used as a software mirror and the
resulting md1 is the sole PV of a vg0 group onto which 8 logical volumes are set. This is
used to run Ubuntu 16.04.2 software (after many upgrades...). *
This setup has worked for years and seen changes in disk sizes, main board, etc...
Lastly, I bet the cause is an upgrade of some packages, the machine refuses to boot. It
drops onto initramfs shell because the vg0 volume group is not activated. Launching lvm
and doing a vgchange -a y does the trick and the system boots fine afterwards.
I've searched everything I can (but maybe the setup was so rock solid I became lazy or
unconscious) and tried to make again the initramfs with no luck.
So every leads or advice would be greatly appreciated. Please, refrain to hit me hard on
the face. I'm already mourning....
Many thanks in advance for your help !
* : The 3 disks are here because at one time, long ago, I used mirroring in the LVM
software. ... I switched to software raid and kept the 3 disks as I had them into the box.
Looks very similar to these bugs:

https://bugzilla.opensuse.org/show_bug.cgi?id=964862
https://bugzilla.opensuse.org/show_bug.cgi?id=919284
https://bugzilla.opensuse.org/show_bug.cgi?id=1019886

You might be interested in giving this fix (attached) a try:
"simplify-special-case-for-md-in-69-dm-lvm-metadata.patch"

This patch was originally discussed here: https://www.spinics.net/lists/raid/msg55182.html

But for some reason, it was not accepted upstream. :)
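Before trying a patch, it may be worth confirming on the affected
machine what MD, LVM, and udev actually see once the array is up; these
read-only commands (device names taken from this thread) are enough:

```shell
# Read-only checks on the affected machine (md1 = the PV in this thread).
cat /proc/mdstat                          # are md0/md1 assembled?
pvs && vgs                                # does LVM see md1 as the PV of vg0?
udevadm info --query=property /dev/md1    # properties set by the udev rules
```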

Hope it can help.

Eric