Discussion:
[linux-lvm] Fails to create LVM volume on the top of RAID1 after upgrade lvm2 to v2.02.180
Gang He
2018-10-08 10:23:27 UTC
Permalink
Hello List

The system uses lvm based on raid1.
It seems that the PV of the raid1 device is also found on the individual disks that make up the raid1 array:
[ 147.121725] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sda2 was already found on /dev/md1.
[ 147.123427] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sdb2 was already found on /dev/md1.
[ 147.369863] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device size is correct.
[ 147.370597] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device size is correct.
[ 147.371698] linux-472a dracut-initqueue[391]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.

Is this a regression bug? The user did not encounter this problem with lvm2 v2.02.177.


Thanks
Gang
David Teigland
2018-10-08 15:00:16 UTC
Permalink
Post by Gang He
Hello List
The system uses lvm based on raid1.
[ 147.121725] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sda2 was already found on /dev/md1.
[ 147.123427] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sdb2 was already found on /dev/md1.
[ 147.369863] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device size is correct.
[ 147.370597] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device size is correct.
[ 147.371698] linux-472a dracut-initqueue[391]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
Do these warnings only appear from "dracut-initqueue"? Can you run and
send 'vgs -vvvv' from the command line? If they don't appear from the
command line, then is "dracut-initqueue" using a different lvm.conf?
lvm.conf settings can affect this (filter, md_component_detection,
external_device_info_source).
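For comparison, something like this should show which settings are actually in effect on the running system and inside the initrd (lsinitrd is dracut's tool; adjust the paths if your layout differs):

lvmconfig devices/md_component_detection devices/external_device_info_source devices/filter devices/global_filter
lsinitrd -f etc/lvm/lvm.conf | grep -E 'filter|md_component_detection|external_device_info_source'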
Post by Gang He
Is this a regression bug? The user did not encounter this problem with lvm2 v2.02.177.
It could be, since the new scanning changed how md detection works. The
md superblock version affects how lvm detects this. md superblock 1.0 (at
the end of the device) is not detected as easily as newer md versions
(1.1, 1.2) where the superblock is at the beginning. Do you know which
this is?
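For example, roughly:

mdadm --detail /dev/md1 | grep Version

run against the assembled array (device name taken from the log above) should print the superblock version.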
Gang He
2018-10-15 05:39:20 UTC
Permalink
Hello David,
Post by Gang He
Post by Gang He
Hello List
The system uses lvm based on raid1.
It seems that the PV of the raid1 device is also found on the individual disks that make up the raid1 array:
[ 147.121725] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sda2 was already found on /dev/md1.
[ 147.123427] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sdb2 was already found on /dev/md1.
[ 147.369863] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device size is correct.
[ 147.370597] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device size is correct.
[ 147.371698] linux-472a dracut-initqueue[391]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
Do these warnings only appear from "dracut-initqueue"? Can you run and
send 'vgs -vvvv' from the command line? If they don't appear from the
command line, then is "dracut-initqueue" using a different lvm.conf?
lvm.conf settings can affect this (filter, md_component_detection,
external_device_info_source).
mdadm --detail --scan -vvv
/dev/md/linux:0:
Version : 1.0
Creation Time : Sun Jul 22 22:49:21 2012
Raid Level : raid1
Array Size : 513012 (500.99 MiB 525.32 MB)
Used Dev Size : 513012 (500.99 MiB 525.32 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Mon Jul 16 00:29:19 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Consistency Policy : bitmap

Name : linux:0
UUID : 160998c8:7e21bcff:9cea0bbc:46454716
Events : 469

Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
/dev/md/linux:1:
Version : 1.0
Creation Time : Sun Jul 22 22:49:22 2012
Raid Level : raid1
Array Size : 1953000312 (1862.53 GiB 1999.87 GB)
Used Dev Size : 1953000312 (1862.53 GiB 1999.87 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Fri Oct 12 20:16:25 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Consistency Policy : bitmap

Name : linux:1
UUID : 17426969:03d7bfa7:5be33b0b:8171417a
Events : 326248

Number Major Minor RaidDevice State
0 8 18 0 active sync /dev/sdb2
1 8 34 1 active sync /dev/sdc2

Thanks
Gang
Post by Gang He
Post by Gang He
Is this a regression bug? The user did not encounter this problem with lvm2 v2.02.177.
It could be, since the new scanning changed how md detection works. The
md superblock version affects how lvm detects this. md superblock 1.0 (at
the end of the device) is not detected as easily as newer md versions
(1.1, 1.2) where the superblock is at the beginning. Do you know which
this is?
David Teigland
2018-10-15 15:26:48 UTC
Permalink
Post by Gang He
Post by Gang He
Post by Gang He
[ 147.371698] linux-472a dracut-initqueue[391]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
Do these warnings only appear from "dracut-initqueue"? Can you run and
send 'vgs -vvvv' from the command line? If they don't appear from the
command line, then is "dracut-initqueue" using a different lvm.conf?
lvm.conf settings can affect this (filter, md_component_detection,
external_device_info_source).
mdadm --detail --scan -vvv
Version : 1.0
It has the old superblock version 1.0 located at the end of the device, so
lvm will not always see it. (lvm will look for it when it's writing to
new devices to ensure it doesn't clobber an md component.)
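As an illustration, something like

mdadm --examine /dev/sdb2 | grep -E 'Version|Super Offset'

on a member partition (name taken from the mdadm output above) should show version 1.0 together with a Super Offset near the end of the device.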

(Also keep in mind that this md superblock is no longer recommended:
raid.wiki.kernel.org/index.php/RAID_superblock_formats)

There are various ways to make lvm handle this:

- allow_changes_with_duplicate_pvs=1
- external_device_info_source="udev"
- reject sda2, sdb2 in lvm filter
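In lvm.conf terms the three alternatives would look roughly like this; only one of them should be needed, and the sdX names are just the ones from your logs:

devices {
    allow_changes_with_duplicate_pvs = 1
    # or:
    external_device_info_source = "udev"
    # or reject the md component partitions outright:
    filter = [ "r|^/dev/sda2$|", "r|^/dev/sdb2$|", "a|.*|" ]
}

If the change has to take effect in early boot (dracut-initqueue), the copy of lvm.conf in the initrd presumably needs to be regenerated as well, e.g. with dracut -f.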
Post by Gang He
Post by Gang He
It could be, since the new scanning changed how md detection works. The
md superblock version affects how lvm detects this. md superblock 1.0 (at
the end of the device) is not detected as easily as newer md versions
(1.1, 1.2) where the superblock is at the beginning. Do you know which
this is?
Gang He
2018-10-17 05:16:28 UTC
Permalink
Hello David,
Post by David Teigland
Post by Gang He
Post by Gang He
Post by Gang He
[ 147.371698] linux-472a dracut-initqueue[391]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
Do these warnings only appear from "dracut-initqueue"? Can you run and
send 'vgs -vvvv' from the command line? If they don't appear from the
command line, then is "dracut-initqueue" using a different lvm.conf?
lvm.conf settings can affect this (filter, md_component_detection,
external_device_info_source).
mdadm --detail --scan -vvv
Version : 1.0
It has the old superblock version 1.0 located at the end of the device, so
lvm will not always see it. (lvm will look for it when it's writing to
new devices to ensure it doesn't clobber an md component.)
raid.wiki.kernel.org/index.php/RAID_superblock_formats)
- allow_changes_with_duplicate_pvs=1
- external_device_info_source="udev"
- reject sda2, sdb2 in lvm filter
There is some feedback below from our user's environment (since I cannot reproduce this problem in my own environment).

I tested the options in lvm.conf one by one.

The good news:
- external_device_info_source="udev"
- rejecting sda2, sdb2 in the lvm filter

both work! The system activates the proper lvm raid1 device again.

The first option (allow_changes_with_duplicate_pvs=1) does not work.
systemctl status lvm2-pvscan@9:126 results in:

● lvm2-pvscan@9:126.service - LVM2 PV scan on device 9:126
Loaded: loaded (/usr/lib/systemd/system/lvm2-pvscan@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2018-10-16 22:53:57 CEST; 3min 4s ago
Docs: man:pvscan(8)
Process: 849 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 (code=exited, status=5)
Main PID: 849 (code=exited, status=5)

Oct 16 22:53:57 linux-dnetctw lvm[849]: WARNING: Not using device /dev/md126 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
Oct 16 22:53:57 linux-dnetctw lvm[849]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
Oct 16 22:53:57 linux-dnetctw lvm[849]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
Oct 16 22:53:57 linux-dnetctw lvm[849]: device-mapper: reload ioctl on (254:0) failed: Device or resource busy
Oct 16 22:53:57 linux-dnetctw lvm[849]: device-mapper: reload ioctl on (254:0) failed: Device or resource busy
Oct 16 22:53:57 linux-dnetctw lvm[849]: 0 logical volume(s) in volume group "vghome" now active
Oct 16 22:53:57 linux-dnetctw lvm[849]: vghome: autoactivation failed.
Oct 16 22:53:57 linux-dnetctw systemd[1]: lvm2-pvscan@9:126.service: Main process exited, code=exited, status=5/NOTINSTALLED
Oct 16 22:53:57 linux-dnetctw systemd[1]: lvm2-pvscan@9:126.service: Failed with result 'exit-code'.
Oct 16 22:53:57 linux-dnetctw systemd[1]: Failed to start LVM2 PV scan on device 9:126.

pvs shows:
/dev/sde: open failed: No medium found
WARNING: found device with duplicate /dev/sdc2
WARNING: found device with duplicate /dev/md126
WARNING: Disabling lvmetad cache which does not support duplicate PVs.
WARNING: Scan found duplicate PVs.
WARNING: Not using lvmetad because cache update failed.
/dev/sde: open failed: No medium found
WARNING: Not using device /dev/sdc2 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
WARNING: Not using device /dev/md126 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
PV VG Fmt Attr PSize PFree
/dev/sdb2 vghome lvm2 a-- 1.82t 202.52g

My questions are as follows:
1) Why did option 1 not work? That method looks like the closest fit for this problem.
2) Could we back-port some code from v2.02.177 to keep compatibility, so that users do not have to modify these settings manually? Or do we have to accept this behaviour from v2.02.180 (maybe 178?) as being by design?

Thanks
Gang
Post by David Teigland
Post by Gang He
Post by Gang He
It could be, since the new scanning changed how md detection works. The
md superblock version affects how lvm detects this. md superblock 1.0 (at
the end of the device) is not detected as easily as newer md versions
(1.1, 1.2) where the superblock is at the beginning. Do you know which
this is?
David Teigland
2018-10-17 14:10:25 UTC
Permalink
Post by Gang He
Post by David Teigland
- allow_changes_with_duplicate_pvs=1
- external_device_info_source="udev"
- reject sda2, sdb2 in lvm filter
1) Why did option 1 not work? That method looks like the closest fit for this problem.
Check if the version you are using has this commit:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=09fcc8eaa8eb7fa4fcd7c6611bfbfb83f726ae38

If so, then I'd be interested to see the -vvvv output from that pvs command.
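For example, something along these lines (the output file name is arbitrary):

pvs -vvvv > /tmp/pvs-vvvv.txt 2>&1

and, if you have an lvm2 source checkout at hand, 'git tag --contains 09fcc8eaa8eb' will list the release tags that already contain that commit.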
Post by Gang He
2) Could we back-port some code from v2.02.177 to keep compatibility, so that users do not have to modify these settings manually? Or do we have to accept this behaviour from v2.02.180 (maybe 178?) as being by design?
It's not clear to me exactly which code you're looking at backporting to
where.
David Teigland
2018-10-17 18:42:04 UTC
Permalink
Post by David Teigland
https://sourceware.org/git/?p=lvm2.git;a=commit;h=09fcc8eaa8eb7fa4fcd7c6611bfbfb83f726ae38
I see that this commit is missing from the stable branch:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=3fd75d1bcd714b02fb2b843d1928b2a875402f37

I'll backport that one.

Dave
Gang He
2018-10-18 08:51:05 UTC
Permalink
Hello David,

Thanks for your help.
If I include this patch in lvm2 v2.02.180, will LVM2 be able to activate LVs on top of RAID1 automatically, or do we still have to set "allow_changes_with_duplicate_pvs=1" in lvm.conf?


Thanks
Gang
Post by David Teigland
https://sourceware.org/git/?p=lvm2.git;a=commit;h=09fcc8eaa8eb7fa4fcd7c6611bfbfb83f726ae38
https://sourceware.org/git/?p=lvm2.git;a=commit;h=3fd75d1bcd714b02fb2b843d1928b2a875402f37
I'll backport that one.
Dave
David Teigland
2018-10-18 16:01:59 UTC
Permalink
Post by Gang He
If I include this patch in lvm2 v2.02.180, will LVM2 be able to activate LVs on top of RAID1 automatically, or do we still have to set "allow_changes_with_duplicate_pvs=1" in lvm.conf?
I didn't need any config changes when testing this myself, but there may
be other variables I've not encountered.
David Teigland
2018-10-18 17:59:23 UTC
Permalink
Post by David Teigland
Post by Gang He
If I include this patch in lvm2 v2.02.180, will LVM2 be able to activate LVs on top of RAID1 automatically, or do we still have to set "allow_changes_with_duplicate_pvs=1" in lvm.conf?
I didn't need any config changes when testing this myself, but there may
be other variables I've not encountered.
See these three commits:
d1b652143abc tests: add new test for lvm on md devices
e7bb50880901 scan: enable full md filter when md 1.0 devices are present
de2863739f2e scan: use full md filter when md 1.0 devices are present

at https://sourceware.org/git/?p=lvm2.git;a=shortlog;h=refs/heads/2018-06-01-stable

(I was wrong earlier; allow_changes_with_duplicate_pvs is not correct in
this case.)
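A rough sketch of applying those on top of v2.02.180, assuming the anonymous clone URL and the usual lvm2 tag naming:

git clone https://sourceware.org/git/lvm2.git
cd lvm2
git checkout -b 2.02.180-mdfix v2_02_180
# cherry-pick oldest first
git cherry-pick de2863739f2e e7bb50880901 d1b652143abc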
Gang He
2018-10-19 05:42:16 UTC
Permalink
Hello David,

Thanks for your attention.
I will let the user try these patches.

Thanks
Gang
Post by Gang He
Post by David Teigland
Post by Gang He
If I include this patch in lvm2 v2.02.180, will LVM2 be able to activate LVs on top of RAID1 automatically, or do we still have to set "allow_changes_with_duplicate_pvs=1" in lvm.conf?
Post by David Teigland
I didn't need any config changes when testing this myself, but there may
be other variables I've not encountered.
d1b652143abc tests: add new test for lvm on md devices
e7bb50880901 scan: enable full md filter when md 1.0 devices are present
de2863739f2e scan: use full md filter when md 1.0 devices are present
at https://sourceware.org/git/?p=lvm2.git;a=shortlog;h=refs/heads/2018-06-01-stable
(I was wrong earlier; allow_changes_with_duplicate_pvs is not correct in
this case.)
Gang He
2018-10-23 02:19:57 UTC
Permalink
Hello David,

The user installed the lvm2 (v2.02.180) rpms with the three patches below applied, but it looks like there are still some problems on the user's machine.
The feedback from the user is as follows:

In a first round I installed lvm2-2.02.180-0.x86_64.rpm, liblvm2cmd2_02-2.02.180-0.x86_64.rpm and liblvm2app2_2-2.02.180-0.x86_64.rpm - but no luck; after a reboot the system still ends up in the emergency console.
In the next round I additionally installed libdevmapper-event1_03-1.02.149-0.x86_64.rpm, libdevmapper1_03-1.02.149-0.x86_64.rpm and device-mapper-1.02.149-0.x86_64.rpm - again ending up in the emergency console.
systemctl status lvm2-pvscan@9:126 output:
lvm2-pvscan@9:126.service - LVM2 PV scan on device 9:126
Loaded: loaded (/usr/lib/systemd/system/lvm2-pvscan@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2018-10-22 07:34:56 CEST; 5min ago
Docs: man:pvscan(8)
Process: 815 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 (code=exited, status=5)
Main PID: 815 (code=exited, status=5)

Oct 22 07:34:55 linux-dnetctw lvm[815]: WARNING: Autoactivation reading from disk instead of lvmetad.
Oct 22 07:34:56 linux-dnetctw lvm[815]: /dev/sde: open failed: No medium found
Oct 22 07:34:56 linux-dnetctw lvm[815]: WARNING: Not using device /dev/md126 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
Oct 22 07:34:56 linux-dnetctw lvm[815]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
Oct 22 07:34:56 linux-dnetctw lvm[815]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
Oct 22 07:34:56 linux-dnetctw lvm[815]: 0 logical volume(s) in volume group "vghome" now active
Oct 22 07:34:56 linux-dnetctw lvm[815]: vghome: autoactivation failed.
Oct 22 07:34:56 linux-dnetctw systemd[1]: lvm2-pvscan@9:126.service: Main process exited, code=exited, status=5/NOTINSTALLED
Oct 22 07:34:56 linux-dnetctw systemd[1]: lvm2-pvscan@9:126.service: Failed with result 'exit-code'.
Oct 22 07:34:56 linux-dnetctw systemd[1]: Failed to start LVM2 PV scan on device 9:126.

What should we do next for this case?
Or do we have to accept the situation and modify the related configuration manually as a workaround?

Thanks
Gang
Post by Gang He
Post by David Teigland
Post by Gang He
If I include this patch in lvm2 v2.02.180, will LVM2 be able to activate LVs on top of RAID1 automatically, or do we still have to set "allow_changes_with_duplicate_pvs=1" in lvm.conf?
Post by David Teigland
I didn't need any config changes when testing this myself, but there may
be other variables I've not encountered.
d1b652143abc tests: add new test for lvm on md devices
e7bb50880901 scan: enable full md filter when md 1.0 devices are present
de2863739f2e scan: use full md filter when md 1.0 devices are present
at https://sourceware.org/git/?p=lvm2.git;a=shortlog;h=refs/heads/2018-06-01-stable
(I was wrong earlier; allow_changes_with_duplicate_pvs is not correct in
this case.)
David Teigland
2018-10-23 15:04:36 UTC
Permalink
Post by Gang He
Process: 815 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 (code=exited, status=5)
Oct 22 07:34:56 linux-dnetctw lvm[815]: WARNING: Not using device /dev/md126 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
Oct 22 07:34:56 linux-dnetctw lvm[815]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
Oct 22 07:34:56 linux-dnetctw lvm[815]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
I'd try disabling lvmetad, I've not been testing these with lvmetad on.
We may need to make pvscan read both the start and end of every disk to
handle these md 1.0 components, and I'm not sure how to do that yet
without penalizing every pvscan.
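On a systemd/dracut setup like the one in the logs, disabling lvmetad would look roughly like this (sketch only):

# in /etc/lvm/lvm.conf:
#   global {
#       use_lvmetad = 0
#   }
systemctl disable --now lvm2-lvmetad.socket lvm2-lvmetad.service
dracut -f    # so the initrd picks up the same lvm.conf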

Dave
Gang He
2018-10-24 02:23:06 UTC
Permalink
Hello David,

I am sorry, I do not fully understand your reply.
Post by Gang He
Post by Gang He
Process: 815 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 (code=exited, status=5)
Oct 22 07:34:56 linux-dnetctw lvm[815]: WARNING: Not using device /dev/md126 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
Oct 22 07:34:56 linux-dnetctw lvm[815]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
Oct 22 07:34:56 linux-dnetctw lvm[815]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
I'd try disabling lvmetad, I've not been testing these with lvmetad on.
Do you mean that I should let the user disable lvmetad?
Post by Gang He
We may need to make pvscan read both the start and end of every disk to
handle these md 1.0 components, and I'm not sure how to do that yet
without penalizing every pvscan.
What can we do for now? It looks like more code is needed to implement this logic.

Thanks
Gang
Post by Gang He
Dave
David Teigland
2018-10-24 14:47:36 UTC
Permalink
Post by Gang He
Post by Gang He
Post by Gang He
Process: 815 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 (code=exited, status=5)
Oct 22 07:34:56 linux-dnetctw lvm[815]: WARNING: Not using device /dev/md126 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
Oct 22 07:34:56 linux-dnetctw lvm[815]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
Oct 22 07:34:56 linux-dnetctw lvm[815]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
I'd try disabling lvmetad, I've not been testing these with lvmetad on.
Do you mean that I should let the user disable lvmetad?
yes
Post by Gang He
Post by Gang He
We may need to make pvscan read both the start and end of every disk to
handle these md 1.0 components, and I'm not sure how to do that yet
without penalizing every pvscan.
What can we do for now? It looks like more code is needed to implement this logic.
Excluding component devices in global_filter is always the most direct way
of solving problems like this. (I still hope to find a solution that
doesn't require that.)
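For example, something like this in lvm.conf, using the component partition names from the earlier mdadm output as placeholders:

devices {
    global_filter = [ "r|^/dev/sdb2$|", "r|^/dev/sdc2$|", "a|.*|" ]
}

Unlike filter, global_filter is also honored by lvmetad/pvscan --cache, and the initrd copy of lvm.conf has to be regenerated for it to apply in early boot.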

Sven Eschenberg
2018-10-17 17:11:06 UTC
Permalink
Hi List,

Unfortunately I replied directly to Gang He earlier.

I'm seeing the exact same faulty behavior with 2.02.181:

WARNING: Not using device /dev/md126 for PV 4ZZuWE-VeJT-O3O8-rO3A-IQ6Y-M6hB-C3jJXo.
WARNING: PV 4ZZuWE-VeJT-O3O8-rO3A-IQ6Y-M6hB-C3jJXo prefers device /dev/sda because of previous preference.
WARNING: Device /dev/sda has size of 62533296 sectors which is smaller than corresponding PV size of 125065216 sectors. Was device resized?

So lvm decides to pull up the PV based on the component device metadata,
even though the raid is already up and running. Things worked as usual
with a .16* version.

Additionally I see:
/dev/sdj: open failed: No medium found
/dev/sdk: open failed: No medium found
/dev/sdl: open failed: No medium found
/dev/sdm: open failed: No medium found

In what crazy scenario would a removable medium be part of a VG, and why in God's name would one even consider including removable drives in the scan by default?

For the time being I have added a filter, as this is the only workaround.
Funnily enough, even though the devices are filtered, I am still getting the "no medium" messages - this makes absolutely no sense at all.
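For reference, a rule of roughly this shape would exclude the four card-reader slots above (just a sketch, not necessarily the exact filter in use here):

devices {
    global_filter = [ "r|^/dev/sd[j-m]$|", "a|.*|" ]
}

If only devices/filter rather than global_filter is set, the lvmetad/pvscan --cache path ignores it, which might be one explanation for the messages persisting.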

Regards

-Sven
Post by David Teigland
Post by Gang He
Hello List
The system uses lvm based on raid1.
[ 147.121725] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sda2 was already found on /dev/md1.
[ 147.123427] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV on /dev/sdb2 was already found on /dev/md1.
[ 147.369863] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device size is correct.
[ 147.370597] linux-472a dracut-initqueue[391]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/md1 because device size is correct.
[ 147.371698] linux-472a dracut-initqueue[391]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
Do these warnings only appear from "dracut-initqueue"? Can you run and
send 'vgs -vvvv' from the command line? If they don't appear from the
command line, then is "dracut-initqueue" using a different lvm.conf?
lvm.conf settings can affect this (filter, md_component_detection,
external_device_info_source).
Post by Gang He
Is this a regression bug? The user did not encounter this problem with lvm2 v2.02.177.
It could be, since the new scanning changed how md detection works. The
md superblock version affects how lvm detects this. md superblock 1.0 (at
the end of the device) is not detected as easily as newer md versions
(1.1, 1.2) where the superblock is at the beginning. Do you know which
this is?
_______________________________________________
linux-lvm mailing list
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/