Discussion:
[linux-lvm] Corrupt PV (wrong size)
Richard Petty
2012-03-05 18:46:15 UTC
GOAL: Retrieve a KVM virtual machine from an inaccessible LVM volume.

DESCRIPTION: In November, I was working on a home server. The system
boots to software mirrored drives but I have a hardware-based RAID5
array on it and I decided to create a logical volume and mount it at
/var/lib/libvirt/images so that all my KVM virtual machine image
files would reside on the hardware RAID.

All that worked fine. Later, I decided to expand that
logical volume and that's when I made a mistake which wasn't
discovered until about six weeks later when I accidentally rebooted
the server. (Good problems usually require several mistakes.)

Somehow, I accidentally mis-specified the second LVM physical
volume that I added to the volume group. When trying to activate
the LV filesystem, the device mapper now complains:

LOG ENTRY
table: 253:3: sdc2 too small for target: start=2048, len=1048584192, dev_size=1048577586

As you can see, the length is greater than the device size.
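
For anyone comparing the numbers: the len= there matches the 128001-extent second segment that shows up in the metadata below (128001 * 8192 sectors = 1048584192), while dev_size is what the kernel now reports for sdc2. A rough way to re-check this on the machine (a sketch, assuming /dev/sdc2 really is the device behind that PV):

blockdev --getsz /dev/sdc2                                # partition size in 512-byte sectors, per the kernel
pvs -o pv_name,dev_size,pv_size,pv_pe_count /dev/sdc2     # what LVM currently reports for the PV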

I do not know how this could have happened. I assumed that LVM tool
sanity checking would have prevented this from happening.

PV0 is okay.
PV1 is defective.
PV2 is okay but too small to receive PV1's contents, I think.
PV3 was just added, hoping to migrate PV1 contents to it.

So I added PV3 and tried to do a move but it seems that using some
of the LVM tools is predicated on the kernel being able to activate
everything, which it refuses to do.

Can't migrate the data, can't resize anything. I'm stuck. Of course
I've done a lot of Google research over the months but I have yet to
see a problem such as this solved.

Got ideas?

Again, my goal is to pluck a copy of a 100GB virtual machine off of
the LV. After that, I'll delete the LV.

==========================

LVM REPORT FROM /etc/lvm/archive BEFORE THE CORRUPTION

vg_raid {
id = "JLeyHJ-saON-6NSF-4Hqc-1rTA-vOWE-CU5aDZ"
seqno = 2
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
metadata_copies = 0

physical_volumes {

pv0 {
id = "QaF9P6-Q9ch-bFTa-O3z2-3Idi-SdIw-YMLkQI"
device = "/dev/sdc1" # Hint only

status = ["ALLOCATABLE"]
flags = []
dev_size = 419430400 # 200 Gigabytes
pe_start = 2048
pe_count = 51199 # 199.996 Gigabytes
}
}

logical_volumes {

kvmfs {
id = "Hs636n-PLcl-aivI-VbTe-CAls-Zul8-m2liRY"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
segment_count = 1

segment1 {
start_extent = 0
extent_count = 50944 # 199 Gigabytes

type = "striped"
stripe_count = 1 # linear

stripes = [
"pv0", 0
]
}
}
}
}

==========================

LVM REPORT FROM /etc/lvm/archive AS SEEN TODAY

vg_raid {
id = "JLeyHJ-saON-6NSF-4Hqc-1rTA-vOWE-CU5aDZ"
seqno = 13
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
metadata_copies = 0

physical_volumes {

pv0 {
id = "QaF9P6-Q9ch-bFTa-O3z2-3Idi-SdIw-YMLkQI"
device = "/dev/sdc1" # Hint only

status = ["ALLOCATABLE"]
flags = []
dev_size = 419430400 # 200 Gigabytes
pe_start = 2048
pe_count = 51199 # 199.996 Gigabytes
}

pv1 {
id = "8o0Igh-DKC8-gsof-FuZX-2Irn-qekz-0Y2mM9"
device = "/dev/sdc2" # Hint only

status = ["ALLOCATABLE"]
flags = []
dev_size = 2507662218 # 1.16772 Terabytes
pe_start = 2048
pe_count = 306110 # 1.16772 Terabytes
}

pv2 {
id = "NuW7Bi-598r-cnLV-E1E8-Srjw-4oM4-77RJkU"
device = "/dev/sdb5" # Hint only

status = ["ALLOCATABLE"]
flags = []
dev_size = 859573827 # 409.877 Gigabytes
pe_start = 2048
pe_count = 104928 # 409.875 Gigabytes
}

pv3 {
id = "eL40Za-g3aS-92Uc-E0fT-mHrP-5rO6-HT7pKK"
device = "/dev/sdc3" # Hint only

status = ["ALLOCATABLE"]
flags = []
dev_size = 1459084632 # 695.746 Gigabytes
pe_start = 2048
pe_count = 178110 # 695.742 Gigabytes
}
}

logical_volumes {

kvmfs {
id = "Hs636n-PLcl-aivI-VbTe-CAls-Zul8-m2liRY"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
segment_count = 2

segment1 {
start_extent = 0
extent_count = 51199 # 199.996 Gigabytes

type = "striped"
stripe_count = 1 # linear

stripes = [
"pv0", 0
]
}
segment2 {
start_extent = 51199
extent_count = 128001 # 500.004 Gigabytes

type = "striped"
stripe_count = 1 # linear

stripes = [
"pv1", 0
]
}
}
}
}

==========================

I do have intermediate versions of the /etc/lvm/archive files
produced as I tinkered, in case they might be useful.
Stuart D. Gathman
2012-03-05 22:31:49 UTC
Post by Richard Petty
GOAL: Retrieve a KVM virtual machine from an inaccessible LVM volume.
DESCRIPTION: In November, I was working on a home server. The system
boots to software mirrored drives but I have a hardware-based RAID5
array on it and I decided to create a logical volume and mount it at
/var/lib/libvirt/images so that all my KVM virtual machine image
files would reside on the hardware RAID.
All that worked fine. Later, I decided to expand that
logical volume and that's when I made a mistake which wasn't
discovered until about six weeks later when I accidentally rebooted
the server. (Good problems usually require several mistakes.)
Somehow, I accidentally mis-specified the second LVM physical
volume that I added to the volume group. When trying to activate
LOG ENTRY
table: 253:3: sdc2 too small for target: start=2048, len=1048584192, dev_size=1048577586
As you can see, the length is greater than the device size.
I've run into something like this. The issue was that the device was
reporting the incorrect size. It turned out to be buggy firmware in the
SATA/USB adapter. Using another adapter or connecting the drive directly
to SATA made the problem go away.

You didn't mention the crucial details of which PV was on which kind of
device.
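
A quick way to map PVs to the kind of device they sit on, sketched here with generic commands rather than anything specific to this box:

lsblk -o NAME,SIZE,TYPE,TRAN                # block devices with size and transport (sata, usb, ...)
pvs -o pv_name,vg_name,pv_uuid,dev_size     # which partition each PV lives on

The TRAN column makes it obvious whether a PV sits behind a USB bridge or directly on SATA/SAS.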

You could try pvresize on sdc2, which could succeed if it won't invalidate
any extents. The size difference is small.

You might have changed the partition table on sdc, and the change would
be written to disk (with a warning) but wouldn't be seen until you rebooted.

As long as the origin didn't change, pvresize will fix it, at most losing
one extent at the end of sdc2. (Size difference is 6606 sectors, ~3M.)
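
For the record, the arithmetic behind that estimate (512-byte sectors assumed):

1048584192 - 1048577586 = 6606 sectors
6606 * 512 bytes = 3,382,272 bytes, roughly 3.2 MiB (less than one 4 MiB extent)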

--
Stuart D. Gathman <***@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
Richard Petty
2012-03-06 21:20:09 UTC
Post by Stuart D. Gathman
Post by Richard Petty
GOAL: Retrieve a KVM virtual machine from an inaccessible LVM volume.
DESCRIPTION: In November, I was working on a home server. The system
boots to software mirrored drives but I have a hardware-based RAID5
array on it and I decided to create a logical volume and mount it at
/var/lib/libvirt/images so that all my KVM virtual machine image
files would reside on the hardware RAID.
All that worked fine. Later, I decided to expand that
logical volume and that's when I made a mistake which wasn't
discovered until about six weeks later when I accidentally rebooted
the server. (Good problems usually require several mistakes.)
Somehow, I accidentally mis-specified the second LVM physical
volume that I added to the volume group. When trying to activate
LOG ENTRY
table: 253:3: sdc2 too small for target: start=2048, len=1048584192, dev_size=1048577586
As you can see, the length is greater than the device size.
I've run into something like this. The issue was that the device was
reporting the incorrect size. It turned out to be buggy firmware in the SATA/USB adapter. Using another adapter or connecting the drive directly
to SATA made the problem go away.
You didn't mention the crucial details of which PV was on which kind of
device.
You could try pvresize on sdc2, which could succeed if it won't invalidate
any extents. The size difference is small.
You might have changed the partition table on sdc, and the change would
be written to disk (with a warning) but wouldn't be seen until you rebooted.
As long as the origin didn't change, pvresize will fix it, at most losing
one extent at the end of sdc2. (Size difference is 6606 sectors, ~3M.)
The system is a Dell PowerEdge 1900 and the PV is on a PERC 5 RAID controller. Other PVs on the same disk array appear to function fine.

Here I issued the pvresize command:

[***@zeus] pvresize /dev/sdc2
/dev/sdc2: cannot resize to 127999 extents as 128001 are allocated.
0 physical volume(s) resized / 1 physical volume(s) not resized

No bueno.
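
That message lines up with the metadata: the LV's second segment pins 128001 extents on this PV, while the device as it now stands only has room for 127999. A sketch of how to see that allocation directly (standard LVM reporting, not something from the thread):

pvs -o pv_name,pv_pe_count,pv_pe_alloc_count /dev/sdc2    # extents available vs. allocated
pvdisplay --maps /dev/sdc2                                # which LV segments sit on this PV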

--RP
Lars Ellenberg
2012-03-07 20:31:53 UTC
Post by Richard Petty
GOAL: Retrieve a KVM virtual machine from an inaccessible LVM volume.
DESCRIPTION: In November, I was working on a home server. The system
boots to software mirrored drives but I have a hardware-based RAID5
array on it and I decided to create a logical volume and mount it at
/var/lib/libvirt/images so that all my KVM virtual machine image
files would reside on the hardware RAID.
All that worked fine. Later, I decided to expand that
logical volume and that's when I made a mistake which wasn't
discovered until about six weeks later when I accidentally rebooted
the server. (Good problems usually require several mistakes.)
Somehow, I accidentally mis-specified the second LVM physical
volume that I added to the volume group. When trying to activate
LOG ENTRY
table: 253:3: sdc2 too small for target: start=2048, len=1048584192, dev_size=1048577586
As you can see, the length is greater than the device size.
I do not know how this could have happened. I assumed that LVM tool
sanity checking would have prevented this from happening.
PV0 is okay.
PV1 is defective.
PV2 is okay but too small to receive PV1's contents, I think.
PV3 was just added, hoping to migrate PV1 contents to it.
So I added PV3 and tried to do a move but it seems that using some
of the LVM tools is predicated on the kernel being able to activate
everything, which it refuses to do.
Can't migrate the data, can't resize anything. I'm stuck. Of course
I've done a lot of Google research over the months but I have yet to
see a problem such as this solved.
Got ideas?
Again, my goal is to pluck a copy of a 100GB virtual machine off of
the LV. After that, I'll delete the LV.
==========================
LVM REPORT FROM /etc/lvm/archive BEFORE THE CORRUPTION
vg_raid {
id = "JLeyHJ-saON-6NSF-4Hqc-1rTA-vOWE-CU5aDZ"
seqno = 2
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
metadata_copies = 0
physical_volumes {
pv0 {
id = "QaF9P6-Q9ch-bFTa-O3z2-3Idi-SdIw-YMLkQI"
device = "/dev/sdc1" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 419430400 # 200 Gigabytes
pe_start = 2048
that's the number of sectors into /dev/sdc1 ("Hint only")
Post by Richard Petty
pe_count = 51199 # 199.996 Gigabytes
}
}
logical_volumes {
kvmfs {
id = "Hs636n-PLcl-aivI-VbTe-CAls-Zul8-m2liRY"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
segment_count = 1
segment1 {
start_extent = 0
extent_count = 50944 # 199 Gigabytes
And that tells us your kvmfs lv is
linear, not fragmented, and starting at extent 0.
Which is, as seen above, 2048 sectors into sdc1.

Try this, then look at /dev/mapper/maybe_kvmfs
echo "0 $[50944 * 8192] linear /dev/sdc1 2048" |
dmsetup create maybe_kvmfs

But see below...
Post by Richard Petty
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
]
}
}
}
}
==========================
LVM REPORT FROM /etc/lvm/archive AS SEEN TODAY
vg_raid {
id = "JLeyHJ-saON-6NSF-4Hqc-1rTA-vOWE-CU5aDZ"
seqno = 13
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
metadata_copies = 0
physical_volumes {
pv0 {
id = "QaF9P6-Q9ch-bFTa-O3z2-3Idi-SdIw-YMLkQI"
device = "/dev/sdc1" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 419430400 # 200 Gigabytes
pe_start = 2048
pe_count = 51199 # 199.996 Gigabytes
}
pv1 {
id = "8o0Igh-DKC8-gsof-FuZX-2Irn-qekz-0Y2mM9"
device = "/dev/sdc2" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 2507662218 # 1.16772 Terabytes
pe_start = 2048
pe_count = 306110 # 1.16772 Terabytes
}
pv2 {
id = "NuW7Bi-598r-cnLV-E1E8-Srjw-4oM4-77RJkU"
device = "/dev/sdb5" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 859573827 # 409.877 Gigabytes
pe_start = 2048
pe_count = 104928 # 409.875 Gigabytes
}
pv3 {
id = "eL40Za-g3aS-92Uc-E0fT-mHrP-5rO6-HT7pKK"
device = "/dev/sdc3" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 1459084632 # 695.746 Gigabytes
pe_start = 2048
pe_count = 178110 # 695.742 Gigabytes
}
}
logical_volumes {
kvmfs {
id = "Hs636n-PLcl-aivI-VbTe-CAls-Zul8-m2liRY"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
segment_count = 2
Oops, why does it have two segments now?
That must have been your resize attempt.
Post by Richard Petty
segment1 {
start_extent = 0
extent_count = 51199 # 199.996 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
]
}
segment2 {
start_extent = 51199
extent_count = 128001 # 500.004 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv1", 0
Fortunately simple again: two segments,
both starting at extent 0 of their respective pv.
that gives us:

echo "0 $[51199 * 8192] linear /dev/sdc1 2048
$[51199 * 8192] $[128001 * 8192] linear /dev/sdc2 2048" |
dmsetup create maybe_kvmfs

(now do some read-only sanity checks...)
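
For example, those checks might look like this (a sketch, assuming the filesystem on kvmfs was ext4; substitute whatever was actually used):

dmsetup table maybe_kvmfs                     # confirm the mapping that was loaded
blkid /dev/mapper/maybe_kvmfs                 # does it look like a filesystem at all?
fsck.ext4 -n /dev/mapper/maybe_kvmfs          # read-only check, makes no repairs
mount -o ro /dev/mapper/maybe_kvmfs /mnt      # then copy the image file off

dmsetup create also takes -r to load the table read-only, which is a sensible precaution while experimenting.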

Of course you need to adjust sdc1 and sdc2 to
whatever is "right".

According to the meta data dump above,
"sdc1" is supposed to be your old 200 GB PV,
and "sdc2" the 1.6 TB partition.

The other PVs are "sdb5" (410 GB),
and a "sdc3" of 695 GB...

If 128001 is too large, reduce until it fits.
If you broke the partition table,
and the partition offsets are now wrong,
you have to experiment a lot,
and hope for the best.

That will truncate the "kvmfs",
but should not cause too much loss.

If you figured out the correct PVs and offsets,
you should be able to recover it all.

Hope that helps you find your data.

Lars
Post by Richard Petty
]
}
}
}
}
==========================
I do have intermediate versions of the /etc/lvm/archive files
produced as I tinkered, in case they might be useful.
--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com
Richard Petty
2012-03-19 20:57:42 UTC
Sorry for the long break away from this topic....
Post by Lars Ellenberg
Post by Richard Petty
GOAL: Retrieve a KVM virtual machine from an inaccessible LVM volume.
DESCRIPTION: In November, I was working on a home server. The system
boots to software mirrored drives but I have a hardware-based RAID5
array on it and I decided to create a logical volume and mount it at
/var/lib/libvirt/images so that all my KVM virtual machine image
files would reside on the hardware RAID.
All that worked fine. Later, I decided to expand that
logical volume and that's when I made a mistake which wasn't
discovered until about six weeks later when I accidentally rebooted
the server. (Good problems usually require several mistakes.)
Somehow, I accidentally mis-specified the second LVM physical
volume that I added to the volume group. When trying to activate
LOG ENTRY
table: 253:3: sdc2 too small for target: start=2048, len=1048584192, dev_size=1048577586
As you can see, the length is greater than the device size.
I do not know how this could have happened. I assumed that LVM tool
sanity checking would have prevented this from happening.
PV0 is okay.
PV1 is defective.
PV2 is okay but too small to receive PV1's contents, I think.
PV3 was just added, hoping to migrate PV1 contents to it.
So I added PV3 and tried to do a move but it seems that using some
of the LVM tools is predicated on the kernel being able to activate
everything, which it refuses to do.
Can't migrate the data, can't resize anything. I'm stuck. Of course
I've done a lot of Google research over the months but I have yet to
see a problem such as this solved.
Got ideas?
Again, my goal is to pluck a copy of a 100GB virtual machine off of
the LV. After that, I'll delete the LV.
==========================
LVM REPORT FROM /etc/lvm/archive BEFORE THE CORRUPTION
vg_raid {
id = "JLeyHJ-saON-6NSF-4Hqc-1rTA-vOWE-CU5aDZ"
seqno = 2
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
metadata_copies = 0
physical_volumes {
pv0 {
id = "QaF9P6-Q9ch-bFTa-O3z2-3Idi-SdIw-YMLkQI"
device = "/dev/sdc1" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 419430400 # 200 Gigabytes
pe_start = 2048
that's the number of sectors into /dev/sdc1 ("Hint only")
Post by Richard Petty
pe_count = 51199 # 199.996 Gigabytes
}
}
logical_volumes {
kvmfs {
id = "Hs636n-PLcl-aivI-VbTe-CAls-Zul8-m2liRY"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
segment_count = 1
segment1 {
start_extent = 0
extent_count = 50944 # 199 Gigabytes
And that tells us your kvmfs lv is
linear, not fragmented, and starting at extent 0.
Which is, as seen above, 2048 sectors into sdc1.
Try this, then look at /dev/mapper/maybe_kvmfs
echo "0 $[50944 * 8192] linear /dev/sdc1 2048" |
dmsetup create maybe_kvmfs
This did result in creating an entry at /dev/mapper/maybe_kvmfs.
Post by Lars Ellenberg
But see below...
Post by Richard Petty
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
]
}
}
}
}
==========================
LVM REPORT FROM /etc/lvm/archive AS SEEN TODAY
vg_raid {
id = "JLeyHJ-saON-6NSF-4Hqc-1rTA-vOWE-CU5aDZ"
seqno = 13
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
metadata_copies = 0
physical_volumes {
pv0 {
id = "QaF9P6-Q9ch-bFTa-O3z2-3Idi-SdIw-YMLkQI"
device = "/dev/sdc1" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 419430400 # 200 Gigabytes
pe_start = 2048
pe_count = 51199 # 199.996 Gigabytes
}
pv1 {
id = "8o0Igh-DKC8-gsof-FuZX-2Irn-qekz-0Y2mM9"
device = "/dev/sdc2" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 2507662218 # 1.16772 Terabytes
pe_start = 2048
pe_count = 306110 # 1.16772 Terabytes
}
pv2 {
id = "NuW7Bi-598r-cnLV-E1E8-Srjw-4oM4-77RJkU"
device = "/dev/sdb5" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 859573827 # 409.877 Gigabytes
pe_start = 2048
pe_count = 104928 # 409.875 Gigabytes
}
pv3 {
id = "eL40Za-g3aS-92Uc-E0fT-mHrP-5rO6-HT7pKK"
device = "/dev/sdc3" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 1459084632 # 695.746 Gigabytes
pe_start = 2048
pe_count = 178110 # 695.742 Gigabytes
}
}
logical_volumes {
kvmfs {
id = "Hs636n-PLcl-aivI-VbTe-CAls-Zul8-m2liRY"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
segment_count = 2
Oops, why does it have two segments now?
That must have been your resize attempt.
Post by Richard Petty
segment1 {
start_extent = 0
extent_count = 51199 # 199.996 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
]
}
segment2 {
start_extent = 51199
extent_count = 128001 # 500.004 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv1", 0
Fortunately simple again: two segments,
both starting at extent 0 of their respective pv.
echo "0 $[51199 * 8192] linear /dev/sdc1 2048
$[51199 * 8192] $[128001 * 8192] linear /dev/sdc2 2048" |
dmsetup create maybe_kvmfs
(now do some read-only sanity checks...)
I tried this command, reducing the sdc2 extent count from 128001 to 127999:

[***@zeus /dev/mapper] echo "0 $[51199 * 8192] linear /dev/sdc1 2048 $[51199 * 8192] $[127999 * 8192] linear /dev/sdc2 2048" | dmsetup create kvmfs
device-mapper: create ioctl failed: Device or resource busy
Command failed
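
"Device or resource busy" at this point usually means something already holds a device-mapper mapping over those partitions, for example the earlier maybe_kvmfs test map or a half-activated LVM volume. A sketch of how one might check and clear that before retrying (assuming nothing else is using the mappings):

dmsetup ls                    # list existing device-mapper devices
dmsetup table                 # show which underlying devices each one maps
dmsetup remove maybe_kvmfs    # drop the earlier test mapping
vgchange -an vg_raid          # make sure LVM has not partially activated the VG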
Post by Lars Ellenberg
Of course you need to adjust sdc1 and sdc2 to
whatever is "right".
According to the meta data dump above,
"sdc1" is supposed to be your old 200 GB PV,
and "sdc2" the 1.6 TB partition.
The other PVs are "sdb5" (410 GB),
and a "sdc3" of 695 GB...
If 128001 is too large, reduce until it fits.
If you broke the partition table,
and the partition offsets are now wrong,
you have to experiment a lot,
and hope for the best.
That will truncate the "kvmfs",
but should not cause too much loss.
If you figured out the correct PVs and offsets,
you should be able to recover it all.
I understand that the strategy is to reduce the declared size of PV1 so that LVM can enable the PV and I can mount the kvmfs LV. I'm not an expert at LVM, and while I can get some things done with it when there are no problems, I'm out of my league when problems occur.
Lars Ellenberg
2012-03-20 20:32:19 UTC
Post by Richard Petty
Sorry for the long break away from this topic....
Post by Lars Ellenberg
Post by Richard Petty
GOAL: Retrieve a KVM virtual machine from an inaccessible LVM volume.
DESCRIPTION: In November, I was working on a home server. The system
boots to software mirrored drives but I have a hardware-based RAID5
array on it and I decided to create a logical volume and mount it at
/var/lib/libvirt/images so that all my KVM virtual machine image
files would reside on the hardware RAID.
All that worked fine. Later, I decided to expand that
logical volume and that's when I made a mistake which wasn't
discovered until about six weeks later when I accidentally rebooted
the server. (Good problems usually require several mistakes.)
Somehow, I accidentally mis-specified the second LVM physical
volume that I added to the volume group. When trying to activate
LOG ENTRY
table: 253:3: sdc2 too small for target: start=2048, len=1048584192, dev_size=1048577586
As you can see, the length is greater than the device size.
I do not know how this could have happened. I assumed that LVM tool
sanity checking would have prevented this from happening.
PV0 is okay.
PV1 is defective.
PV2 is okay but too small to receive PV1's contents, I think.
PV3 was just added, hoping to migrate PV1 contents to it.
So I added PV3 and tried to do a move but it seems that using some
of the LVM tools is predicated on the kernel being able to activate
everything, which it refuses to do.
Can't migrate the data, can't resize anything. I'm stuck. Of course
I've done a lot of Google research over the months but I have yet to
see a problem such as this solved.
Got ideas?
Again, my goal is to pluck a copy of a 100GB virtual machine off of
the LV. After that, I'll delete the LV.
==========================
LVM REPORT FROM /etc/lvm/archive BEFORE THE CORRUPTION
vg_raid {
id = "JLeyHJ-saON-6NSF-4Hqc-1rTA-vOWE-CU5aDZ"
seqno = 2
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
metadata_copies = 0
physical_volumes {
pv0 {
id = "QaF9P6-Q9ch-bFTa-O3z2-3Idi-SdIw-YMLkQI"
device = "/dev/sdc1" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 419430400 # 200 Gigabytes
pe_start = 2048
that's the number of sectors into /dev/sdc1 ("Hint only")
Post by Richard Petty
pe_count = 51199 # 199.996 Gigabytes
}
}
logical_volumes {
kvmfs {
id = "Hs636n-PLcl-aivI-VbTe-CAls-Zul8-m2liRY"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
segment_count = 1
segment1 {
start_extent = 0
extent_count = 50944 # 199 Gigabytes
And that tells us your kvmfs lv is
linear, not fragmented, and starting at extent 0.
Which is, as seen above, 2048 sectors into sdc1.
Try this, then look at /dev/mapper/maybe_kvmfs
echo "0 $[50944 * 8192] linear /dev/sdc1 2048" |
dmsetup create maybe_kvmfs
This did result in creating an entry at /dev/mapper/maybe_kvmfs.
Post by Lars Ellenberg
But see below...
Post by Richard Petty
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
]
}
}
}
}
==========================
LVM REPORT FROM /etc/lvm/archive AS SEEN TODAY
vg_raid {
id = "JLeyHJ-saON-6NSF-4Hqc-1rTA-vOWE-CU5aDZ"
seqno = 13
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
metadata_copies = 0
physical_volumes {
pv0 {
id = "QaF9P6-Q9ch-bFTa-O3z2-3Idi-SdIw-YMLkQI"
device = "/dev/sdc1" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 419430400 # 200 Gigabytes
pe_start = 2048
pe_count = 51199 # 199.996 Gigabytes
}
pv1 {
id = "8o0Igh-DKC8-gsof-FuZX-2Irn-qekz-0Y2mM9"
device = "/dev/sdc2" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 2507662218 # 1.16772 Terabytes
pe_start = 2048
pe_count = 306110 # 1.16772 Terabytes
}
pv2 {
id = "NuW7Bi-598r-cnLV-E1E8-Srjw-4oM4-77RJkU"
device = "/dev/sdb5" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 859573827 # 409.877 Gigabytes
pe_start = 2048
pe_count = 104928 # 409.875 Gigabytes
}
pv3 {
id = "eL40Za-g3aS-92Uc-E0fT-mHrP-5rO6-HT7pKK"
device = "/dev/sdc3" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 1459084632 # 695.746 Gigabytes
pe_start = 2048
pe_count = 178110 # 695.742 Gigabytes
}
}
logical_volumes {
kvmfs {
id = "Hs636n-PLcl-aivI-VbTe-CAls-Zul8-m2liRY"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
segment_count = 2
Oops, why does it have two segments now?
That must have been your resize attempt.
Post by Richard Petty
segment1 {
start_extent = 0
extent_count = 51199 # 199.996 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
]
}
segment2 {
start_extent = 51199
extent_count = 128001 # 500.004 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv1", 0
Fortunately simple again: two segments,
both starting at extent 0 of their respective pv.
echo "0 $[51199 * 8192] linear /dev/sdc1 2048
$[51199 * 8192] $[128001 * 8192] linear /dev/sdc2 2048" |
dmsetup create maybe_kvmfs
(now do some read-only sanity checks...)
device-mapper: create ioctl failed: Device or resource busy
Command failed
Well: first you need to find out what to use as /dev/sdXY there;
you need to match your disks/partitions to the PVs.
Post by Richard Petty
Post by Lars Ellenberg
Of course you need to adjust sdc1 and sdc2 to
whatever is "right".
According to the meta data dump above,
"sdc1" is supposed to be your old 200 GB PV,
and "sdc2" the 1.6 TB partition.
The other PVs are "sdb5" (410 GB),
and a "sdc3" of 695 GB...
If "matching by size" did not work for you,
maybe "pvs -o +pv_uuid" gives sufficient clues
to be able to match them with the lvm meta data dump
above, and construct a working dmsetup line.
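
For instance (a sketch; exact column names depend on the lvm2 version installed):

pvs -o pv_name,pv_uuid,dev_size           # match UUIDs against the archive dump above
blkid -t TYPE=LVM2_member -o device       # list every partition carrying a PV label
pvck /dev/sdc2                            # verify the PV label/metadata on a specific partition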
Post by Richard Petty
Post by Lars Ellenberg
If 128001 is too large, reduce until it fits.
If you broke the partition table,
and the partition offsets are now wrong,
you have to experiment a lot,
and hope for the best.
That will truncate the "kvmfs",
but should not cause too much loss.
If you figured out the correct PVs and offsets,
you should be able to recover it all.
I understand that the strategy is to reduce the declared size of PV1
so that LVM can enable the PV and I can mount the kvmfs LV. I'm not
an expert at LVM, and while I can get some things done with it when there
are no problems, I'm out of my league when problems occur.
--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
Richard Petty
2012-06-27 19:57:13 UTC
Disk /dev/sdc: 1498.7 GB, 1498675150848 bytes
118 heads, 57 sectors/track, 435191 cylinders
Units = cylinders of 6726 * 512 = 3443712 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000df573
Device Boot Start End Blocks Id System
/dev/sdc1 1 62360 209715200 8e Linux LVM
/dev/sdc2 62360 218259 524288793 8e Linux LVM
/dev/sdc3 218260 435191 729542316 8e Linux LVM
The last block of sdc1 is 62360 and the first block of sdc2 is 62360... the same block.

I don't know how fdisk permitted the creation of sdc2 to start on a block that was already in use.
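
One way to rule out a genuine overlap is to list the partition table in sectors instead of cylinders (a sketch; the exact fdisk option spelling varies between versions):

fdisk -l -u /dev/sdc              # -u switches the listing to sector units
parted /dev/sdc unit s print      # parted shows exact start/end sectors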

I'm pretty sure that the virtual disk file, at least 100GB in size, spanned all of sdc1 and at least some of sdc2 and that it operated without any trouble for a month or two. It was only on a reboot that LVM wouldn't mount /dev/mapper/vg_zeus-vg_raid.

--Richard
Stuart D Gathman
2012-06-27 20:14:31 UTC
Post by Richard Petty
Device Boot Start End Blocks Id System
/dev/sdc1 1 62360 209715200 8e Linux LVM
/dev/sdc2 62360 218259 524288793 8e Linux LVM
/dev/sdc3 218260 435191 729542316 8e Linux LVM
The last block of sdc1 is 62360 and the first block of sdc2 is 62360... the same block.
Those aren't blocks, those are cylinders. That just means that sdc2
starts in the middle of cylinder 62360.
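
The exact boundaries can also be read straight out of sysfs, which avoids the cylinder rounding entirely (a sketch):

cat /sys/block/sdc/sdc1/start /sys/block/sdc/sdc1/size    # start sector and length of sdc1
cat /sys/block/sdc/sdc2/start /sys/block/sdc/sdc2/size    # same for sdc2
# sdc2 truly overlaps sdc1 only if sdc2/start < sdc1/start + sdc1/size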
Richard Petty
2012-06-27 20:47:32 UTC
Thank you so very much for pointing that out. That's a relief!

--Richard
Post by Stuart D Gathman
Post by Richard Petty
Device Boot Start End Blocks Id System
/dev/sdc1 1 62360 209715200 8e Linux LVM
/dev/sdc2 62360 218259 524288793 8e Linux LVM
/dev/sdc3 218260 435191 729542316 8e Linux LVM
The last block of sdc1 is 62360 and the first block of sdc2 is 62360... the same block.
Those aren't blocks, those are cylinders. That just means that sdc2
starts in the middle of cylinder 62360.
Richard Petty
2013-09-23 01:44:37 UTC
Hey, gang (and Lars),

After a break, I have resumed work on recovering the data off of my corrupt LVM volume. I did just come across an interesting approach that another person used to get his data off of one of his LVs that displayed a similar error message when he attempted to mount it:

His: device-mapper: table: 253:2: md127 too small for target
Mine: device-mapper: table: 253:3: sdc2 too small for target

Although we got into our predicaments by different means (I think an incomplete LV resize was my undoing), I'm wondering if anyone here thinks that his brutish approach would work for me:

"I managed to get all my data back by deleting the LVM volumes and
recreating it without formatting the drives. I did have to run fsck on
my data volume, but all data was intact as far as I could see."

(His entire thread is here: http://comments.gmane.org/gmane.linux.lvm.general/13142)

The data that I'm looking to retrieve is on sdc1, so I would be thrilled to drop sdc2 from the logical volume altogether. The problem is that my ability to get anywhere with LVM is zero, given my corruption issues, hence my interest in this guy's technique.
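
Since the /etc/lvm/archive files from before the bad resize still exist, another route worth weighing (a sketch only, not verified on this system, and the archive file name below is a placeholder) is to restore the old metadata rather than delete and recreate anything:

less /etc/lvm/archive/vg_raid_00002-XXXXXXXX.vg                     # inspect the archived pre-resize metadata first
vgcfgrestore -f /etc/lvm/archive/vg_raid_00002-XXXXXXXX.vg vg_raid  # write that description back
vgchange -ay vg_raid                                                # then try activating kvmfs again

This reverts the VG to its single-segment, single-PV layout, so it only makes sense once the contents of the other PVs are known to be expendable.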

--Richard
Richard Petty
2015-09-04 19:22:46 UTC
Okay, two years have passed by but last week this problem was fixed, at
least well enough to retrieve files from.

The solution turned out to be simple: Delete the second (unused) PV
from the volume group and let LVM recalculate the new LV size. I didn't
have the nerve to do that but a co-worker did. It still shows a funky
max size in one utility but could be mounted without throwing any errors
or warnings.
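
For anyone who lands here later: that fix roughly corresponds to shrinking the LV back onto the first PV and then dropping the broken PV from the volume group. A sketch of that kind of sequence (untested against this exact metadata; the extent count comes from the dump earlier in the thread, and the second segment's contents are discarded):

lvreduce -l 51199 vg_raid/kvmfs    # shrink kvmfs back to its first segment on pv0
vgreduce vg_raid /dev/sdc2         # remove the mis-sized PV from the VG
vgchange -ay vg_raid               # activate
fsck -n /dev/vg_raid/kvmfs         # read-only filesystem check before mounting

Note the filesystem inside the LV may still claim its larger, post-resize size, which may be what the "funky max size" above refers to.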

--Richard