Discussion:
[linux-lvm] Having duplicate PV problems, think there's a bug in LVM2 md component detection
Ron Watkins
20 years ago
I'm sorry if this is a FAQ or if I'm being stupid. I saw some mentions of
this problem on the old mailing list, but they didn't seem to quite cover what
I'm seeing, and I don't see an archive for this list yet. (And what on
earth happened to the old list, anyway?)

My problem is this: I'm setting up a software RAID5 across 5 IDE drives.
I'm running Debian Unstable, using kernel 2.6.8-2-k7. I HAVE set
md_component_detection to 1 in lvm.conf, and I wiped the drives after
changing this setting.
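
For reference, the setting lives in the devices section of lvm.conf; the
relevant stanza looks roughly like this (everything else left at the
defaults):

devices {
    # skip block devices that are components of a software RAID (md) array
    md_component_detection = 1
}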

I originally set it up as a four-drive RAID, via a 3Ware controller, so my
original devices were sdb, sdc, sdd, and sde. (the machine also has a
hardware raid on an ICP Vortex SCSI controller: this is sda.) In this
mode, it set up and built perfectly. LVM worked exactly as I expected it
to. I had a test volume running. All the queries and volume management
operations behaved exactly as expected. All was well.

So then I tried to add one more drive via the motherboard IDE controller, on
/dev/hda. (Note that I stopped the array, wiped the first and last 100 megs
on the drives, and rebuilt.) That's when the problems started. The RAID
itself seems to build and work just fine, although I haven't waited for the
entire 6 or so hours it will take to completely finish. Build speed is
good, everything seems normal. But LVM blows up badly in this
configuration.
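
Concretely, what I mean by wiping is roughly this, per drive (a sketch;
blockdev --getsz reports the device size in 512-byte sectors, on older
util-linux the flag is --getsize):

# wipe the first 100 MB, where the LVM label and metadata live
dd if=/dev/zero of=/dev/hda bs=1M count=100

# wipe the last 100 MB, where the md 0.90 superblock lives
SECTORS=$(blockdev --getsz /dev/hda)
dd if=/dev/zero of=/dev/hda bs=512 seek=$((SECTORS - 204800)) count=204800

# alternatively, mdadm can remove just its own superblock
mdadm --zero-superblock /dev/hda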

When I do a pvcreate on /dev/md0, it succeeds... but if I do a pvdisplay I
get a bunch of complaints:

jeeves:/etc/lvm# pvdisplay
Found duplicate PV y8pYTtAg0W703Sc8Wiy79mcWU3gHmCFc: using /dev/sde not /dev/hda
Found duplicate PV y8pYTtAg0W703Sc8Wiy79mcWU3gHmCFc: using /dev/sde not /dev/hda
  --- NEW Physical volume ---
  PV Name               /dev/hda
  VG Name
  PV Size               931.54 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               y8pYTt-Ag0W-703S-c8Wi-y79m-cWU3-gHmCFc

It seems to think that /dev/hda is where the PV is, rather than /dev/md0.

(Note, again, I *HAVE* turned the md_component_detection to 1 in lvm.conf!!)

I have erased, using dd, the first and last 100 megs or so on every drive,
and I get exactly the same results every time... even with all RAID and LVM
blocks erased, if I use this list of drives:

/dev/hda
/dev/sdb
/dev/sdc
/dev/sdd
/dev/sde

with the linux MD driver, LVM does not seem to work properly. I think the
component detection is at least a little buggy. This is what my
/proc/mdstat looks like:

jeeves:/etc/lvm# cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sde[5] sdd[3] sdc[2] sdb[1] hda[0]
      976793600 blocks level 5, 128k chunk, algorithm 2 [5/4] [UUUU_]
      [=>...................]  recovery =  6.2% (15208576/244198400) finish=283.7min speed=13448K/sec
unused devices: <none>

I realize that using both IDE and SCSI drives in the same array is
unusual... but I'm not really using SCSI drives; they just look like that
because of the 3Ware controller.

Again, this works FINE as long as I just use the (fake) SCSI devices... it
doesn't wonk out until I add in /dev/hda.

Any suggestions? Is this a bug?
go0ogl3
20 years ago
I'm only a beginner at lvm, but...
...
I think you have 2 PVs with the same UUID and that's the problem. You
can even move the drive letters around (hda or sda), as I think it
does not matter for lvm. The only thing that counts is the "UUID" of
the PV.

You should run pvcreate again on /dev/hda so your last added drive
gets a different UUID.
...
Ron Watkins
20 years ago
Yes, I have two PVs with the same UUID. The problem is that these PVs are
COMPONENTS OF an MD device. So when pvcreate writes its superblock to md0,
it gets mirrored onto all the components of md0. Later runs elicit much
complaining because /dev/hda and several /dev/sd devices now have the 'same'
UUID. They're SUPPOSED to, they're components of a RAID.

LVM is supposed to detect this situation and not do that, but it doesn't
seem to be working for me.
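
As I understand it, the component detection works by looking for an md
superblock near the end of each candidate device and skipping the device
if it finds one. A quick way to double-check what mdadm and LVM each see
on a raw disk is something like this (a sketch, using my device names):

# does the raw disk carry an md superblock?
mdadm --examine /dev/hda

# and does LVM still see a PV label directly on it?
pvdisplay /dev/hda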

<<RON>>

----- Original Message -----
From: "go0ogl3" <***@gmail.com>
To: "LVM general discussion and development" <linux-***@redhat.com>
Sent: Monday, February 28, 2005 6:15 AM
Subject: Re: [linux-lvm] Having duplicate PV problems, think there's a bug in
LVM2 md component detection
...
Luca Berra
20 years ago
Post by Ron Watkins
Yes, I have two PVs with the same UUID. The problem is that these PVs are
COMPONENTS OF an MD device. So when pvcreate writes its superblock to md0,
it gets mirrored onto all the components of md0. Later runs elicit much
complaining because /dev/hda and several /dev/sd devices now have the
'same' UUID. They're SUPPOSED to, they're components of a RAID.
LVM is supposed to detect this situation and not do that, but it doesn't
seem to be working for me.
I wonder if it is a problem with large-file access?!?
Which version of lvm2 are you using?
Does Debian patch it?
Can you try stracing it and look for seeks, so we can see if it is
wrapping around somewhere?
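
Something along these lines should capture them (a sketch; the output file
name is arbitrary, and on a 32-bit box the large-file seek usually shows
up as _llseek rather than lseek):

# log every open and seek that pvdisplay makes
strace -f -o /tmp/pvdisplay.trace -e trace=open,read,lseek,_llseek pvdisplay

# then look at the offsets used on the raw devices
grep -E '_llseek|lseek' /tmp/pvdisplay.trace | less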

L.
--
Luca Berra -- ***@comedia.it
Communication Media & Services S.r.l.
/"\
\ / ASCII RIBBON CAMPAIGN
X AGAINST HTML MAIL
/ \
Ron Watkins
20 years ago
Due to another failing hard drive, I was forced to just put the md0 device
directly into use. Once I have the dying drive replaced, I might
be able to clear the data off again if you want to do a specific set of
tests, but that would take many hours. (150+ gigs over Fast Ethernet, not a
quick process). So please be sure you need me to do this, if you do.

I am using LVM version 2.01.04 (2005-02-09). The Library Version is
1.01.00-ioctl (2005-01-17). The Driver Version is 4.1.0. (no date given).
I don't have any easy way to determine if the Debian maintainer is patching
this. Should I be submitting the bug report directly to him/her/them
instead?
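
I suppose on Debian that would mean filing a report against the lvm2
package, presumably with something like reportbug (assuming I have it
installed):

reportbug lvm2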

I'm not sure how to strace... I see the strace command. I assume I'd have to
strace the pvcreate and pvdisplay, meaning I'd need to move my data again
first. If you do want me to do this, I can start copying data off late
tomorrow, and can probably do the actual test on Wednesday. I'll need the
exact command line you want used, however, since I don't know anything about
strace.

<<RON>>

jeeves:~# lvm version
LVM version: 2.01.04 (2005-02-09)
Library version: 1.01.00-ioctl (2005-01-17)
Driver version: 4.1.0


----- Original Message -----
From: "Luca Berra" <***@comedia.it>
To: "LVM general discussion and development" <linux-***@redhat.com>
Sent: Monday, February 28, 2005 4:38 PM
Subject: Re: [linux-lvm] Having duplicate PV problems, think there's a bug in
LVM2 md component detection
...
Luca Berra
20 years ago
Post by Ron Watkins
I am using LVM version 2.01.04 (2005-02-09). The Library Version is
1.01.00-ioctl (2005-01-17). The Driver Version is 4.1.0. (no date given).
I don't have any easy way to determine if the Debian maintainer is patching
http://packages.debian.org/lvm2
Post by Ron Watkins
this. Should I be submitting the bug report directly to him/her/them
instead?
Well, they do explicitly add -D_FILE_OFFSET_BITS=64 to CFLAGS.
I wonder if this changes the lseek behaviour.
I'd try asking the Debian maintainer first....
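
One way to check exactly what the Debian build does (a sketch, assuming
deb-src lines are configured; the unpacked directory name depends on the
version) is to pull the packaging and grep the build files:

# fetch the Debian source package and look for the flag
apt-get source lvm2
grep -r FILE_OFFSET_BITS lvm2-*/debian/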

L.
--
Luca Berra -- ***@comedia.it
Communication Media & Services S.r.l.
/"\
\ / ASCII RIBBON CAMPAIGN
X AGAINST HTML MAIL
/ \
Alasdair G Kergon
20 years ago
Post by Ron Watkins
I'm running Debian Unstable, using kernel 2.6.8-2-k7. I HAVE set
md_component_detection to 1 in lvm.conf, and I wiped the drives after
changing this setting.
Found duplicate PV y8pYTtAg0W703Sc8Wiy79mcWU3gHmCFc: using /dev/sde not /dev/hda
Found duplicate PV y8pYTtAg0W703Sc8Wiy79mcWU3gHmCFc: using /dev/sde not /dev/hda
It seems to think that /dev/hda is where the PV is, rather than /dev/md0.
(Note, again, I *HAVE* turned the md_component_detection to 1 in lvm.conf!!)
Add verbose flags:
vgscan -vvvv
and post a URL to the output.
Also send your configuration: lvm.conf, the output of pvs -av, plus full
details of your md configuration (e.g. the various mdadm options).
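
For example, something like this gathers it all in one go (the output
file names are just an example):

vgscan -vvvv > /tmp/vgscan.out 2>&1
pvs -av > /tmp/pvs.out 2>&1
mdadm --detail /dev/md0 > /tmp/md0-detail.out 2>&1
cp /etc/lvm/lvm.conf /tmp/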

Alasdair
--
***@redhat.com
Matthias Julius
20 years ago
...
I just had the same problem. As a workaround I set up a filter in the
devices{} section of lvm.conf like this:

filter = [ "a|^/dev/md.*|", "r/.*/" ]

And that works fine.
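
For completeness, that line sits inside the devices section, and a rescan
afterwards should show only the md device being picked up; roughly:

devices {
    # accept /dev/md* devices, reject everything else
    filter = [ "a|^/dev/md.*|", "r/.*/" ]
}

# then rescan; only /dev/md0 should turn up as a PV now
vgscan -v
pvscan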

Matthias
