Discussion:
[linux-lvm] Missing Logical Volumes
G Crowe
2014-12-19 10:32:25 UTC
After rebooting, some of my logical volumes did not have device files.

/dev/array1/LVpics
and
/dev/mapper/array1-LVpics
did not exist, but the output of "lvdisplay" reported the volume as
available (see below).

vgscan did not resolve the problem.

I was able to regain access to the LV by renaming it, then renaming it
back...
[***@host1 ~]# lvrename /dev/array1/LVpics /dev/array1/LVpicsnew
Renamed "LVpics" to "LVpicsnew" in volume group "array1"
[***@host1 ~]# lvrename /dev/array1/LVpicsnew /dev/array1/LVpics
Renamed "LVpicsnew" to "LVpics" in volume group "array1"
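
(I gather LVM also has commands that are meant to recreate missing device
nodes; I have not tested whether they would have worked here, but
presumably something like this would be less drastic than a rename
round-trip:

# vgmknodes array1

or, at the device-mapper level:

# dmsetup mknodes
)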

There are 29 LVs in the VG; 25 of them came up OK and 4 had this
problem. Note that there is only a single PV (a RAID6 array) in the
VG, and there are two VGs on the machine.

Is this expected behaviour, or is it something I should be worried about?



--- Logical volume ---
LV Path                /dev/array1/LVpics
LV Name                LVpics
VG Name                array1
LV UUID                WH7g9u-Ls7J-fIpQ-Hk2p-mUuH-QRKf-9uxcM2
LV Write Access        read/write
LV Creation host, time example.com, 2013-11-26 07:29:51 +1100
LV Status              available
# open                 0
LV Size                350.00 GiB
Current LE             89600
Segments               2
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:9


I am running Fedora 19 with kernel 3.11.9-200.fc19.x86_64


Thanks

GC
Jack Waterworth
2014-12-19 16:46:29 UTC
It sounds like the VG was not activated. You can activate it with the
following command:

# vgchange -ay array1

Jack Waterworth, Red Hat Certified Architect
Senior Storage Technical Support Engineer
Red Hat Global Support Services ( 1.888.467.3342 )
Post by G Crowe
After rebooting, some of my logical volumes did not have device files.
[...]
_______________________________________________
linux-lvm mailing list
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
G Crowe
2014-12-19 23:07:38 UTC
No, this didn't work.

[***@host1 ~]# vgchange -ay array1
29 logical volume(s) in volume group "array1" now active

And the missing /dev/mapper files were not created. (I have left one LV
unfixed so that I can try any suggested solutions.)

All of the other LVs in the same VG are completely usable, so it doesn't
seem to be a problem with the VG as a whole.
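
I'm happy to try suggestions on the unfixed LV. For example, would
refreshing just that LV be expected to recreate its nodes? I'm guessing
at something like (not yet run):

# lvchange --refresh array1/LVpics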


Thanks

GC
Post by Jack Waterworth
It sounds like the VG was not activated. You can activate it with the
# vgchange -ay array1
[...]
m***@bidmc.harvard.edu
2014-12-24 13:51:44 UTC
If your VG had not been activated to begin with, you would not have seen any LVs at all, so "vgchange -ay" was not the way to go.

If you still have the issue, please attach the results of:

# vgdisplay -v /dev/array1
# grep -i filter /etc/lvm/lvm.conf
# vgscan
# ls -al /dev/mapper
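
It would also be worth checking whether the kernel's device-mapper table
still has the device even though the node is missing, e.g. (assuming the
dm name is array1-LVpics):

# dmsetup info array1-LVpics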


-----Original Message-----
From: linux-lvm-***@redhat.com [mailto:linux-lvm-***@redhat.com] On Behalf Of G Crowe
Sent: Friday, December 19, 2014 6:08 PM
To: LVM general discussion and development
Subject: Re: [linux-lvm] Missing Logical Volumes


No, this didn't work.
[...]

This message is intended for the use of the person(s) to whom it may be addressed. It may contain information that is privileged, confidential, or otherwise protected from disclosure under applicable law. If you are not the intended recipient, any dissemination, distribution, copying, or use of this information is prohibited. If you have received this message in error, please permanently delete it and immediately notify the sender. Thank you.
Graham Crowe
2015-01-02 09:16:58 UTC
Thanks for the reply; however, I had to reboot the server, which cleared
the problem. If it recurs, I will save the output as per your suggestion.
(I'd like to find a solution that doesn't involve rebooting, as this
server hosts many Xen virtual machines that are a pain to reboot.)
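
If anyone knows whether simply re-triggering udev for the block devices
would recreate the nodes without a reboot (lvdisplay did show the LV
active with a block device, so the dm device presumably existed), I'd be
keen to hear. I'm only guessing at something like:

# udevadm trigger --subsystem-match=block
# vgmknodes array1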


Thanks for your help.
Post by m***@bidmc.harvard.edu
If your VG had not been activated to begin with, you would not have seen any LVs at all, so "vgchange -ay" was not the way to go.
[...]