Discussion:
[linux-lvm] How to trash a broke VG
Jeff Allison
2018-08-03 05:21:36 UTC
Permalink
OK Chaps I've broken it.

I have a VG containing one LV and made up of 3 live disks and 2 failed disks.

Whilst the disks were failing I attempted to move data off the failing
disks, which failed, so I now have a pvmove0 that won't go away either.


So if I attempt to even remove a live disk I get an error.

[***@nas ~]# vgreduce -v vg_backup /dev/sdi1
Using physical volume(s) on command line.
Wiping cache of LVM-capable devices
Wiping internal VG cache
Couldn't find device with uuid eFMoUW-6Ml5-fTyn-E2cT-sFXu-kYla-MmiAUV.
Couldn't find device with uuid LxXhsb-Mgag-ESXZ-fPP6-52dE-iNp2-uKdLwu.
There are 2 physical volumes missing.
Cannot change VG vg_backup while PVs are missing.
Consider vgreduce --removemissing.
There are 2 physical volumes missing.
Cannot process volume group vg_backup
Failed to find physical volume "/dev/sdi1".

Then if I attempt a vgreduce --removemissing I get

[***@nas ~]# vgreduce --removemissing vg_backup
Couldn't find device with uuid eFMoUW-6Ml5-fTyn-E2cT-sFXu-kYla-MmiAUV.
Couldn't find device with uuid LxXhsb-Mgag-ESXZ-fPP6-52dE-iNp2-uKdLwu.
WARNING: Partial LV lv_backup needs to be repaired or removed.
WARNING: Partial LV pvmove0 needs to be repaired or removed.
There are still partial LVs in VG vg_backup.
To remove them unconditionally use: vgreduce --removemissing --force.
Proceeding to remove empty missing PVs.

So I try force
[***@nas ~]# vgreduce --removemissing --force vg_backup
Couldn't find device with uuid eFMoUW-6Ml5-fTyn-E2cT-sFXu-kYla-MmiAUV.
Couldn't find device with uuid LxXhsb-Mgag-ESXZ-fPP6-52dE-iNp2-uKdLwu.
Removing partial LV lv_backup.
Can't remove locked LV lv_backup.

So no go.

If I try lvremove pvmove0

[***@nas ~]# lvremove -v pvmove0
Using logical volume(s) on command line.
VG name on command line not found in list of VGs: pvmove0
Wiping cache of LVM-capable devices
Volume group "pvmove0" not found
Cannot process volume group pvmove0

So heeelp! I seem to be caught in some kind of loop.
Roger Heflin
2018-08-03 13:18:29 UTC
Permalink
Assuming you want to completely eliminate the VG so that you can
rebuild it from scratch, and the LVs are no longer mounted, then this
should work. If the LV is mounted, remove it from fstab and reboot,
see what state it comes up in, and first attempt to vgchange it off,
as that is cleaner than doing the dmsetup tricks.
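A rough sketch of that cleaner first attempt, using the VG name from
the original post (vg_backup):

vgchange -an vg_backup    # deactivate every LV in the VG
lvs vg_backup             # confirm nothing is still active
ls /dev/vg_backup         # directory should now be empty or gone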

If you cannot get the LVs lvchanged to off such that
/dev/<vgname> is empty or non-existent, then this is a lower-level way
(it still requires the device to be unmounted; if it is mounted, the
command will fail).

dmsetup table | grep <vgname>

Then dmsetup remove <lvnamefromabove> (repeat until all component LVs
are removed); this should empty the /dev/<vgname>/ directory of all
devices.
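A minimal sketch of that loop, again assuming vg_backup as the VG name;
device-mapper names take the form <vg>-<lv>, so they will look like
vg_backup-lv_backup and vg_backup-pvmove0:

dmsetup table | grep vg_backup      # list the mappings belonging to the VG
dmsetup remove vg_backup-pvmove0    # remove each mapping it reports
dmsetup remove vg_backup-lv_backup

# or remove every mapping for the VG in one pass:
dmsetup ls | awk '/^vg_backup-/ {print $1}' | xargs -r -n1 dmsetup remove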

Once in this state you can use the pvremove command with the extra
force options; it will tell you which VG the PV was part of and require
you to answer y or n.
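For example (device name taken from the original post; check pvs first
to see which devices still carry metadata for the VG):

pvs                       # list PVs and the VG each one belongs to
pvremove -ff /dev/sdi1    # double force; it names the old VG and asks y/n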

I have had to do this a number of times when events have happened
that caused disks to be lost, die, or become corrupted.


