Ryan Davis
2014-04-04 21:32:40 UTC
Hi,
I have 3 drives in a RAID 5 configuration as an LVM volume. These disks
contain /home.
After performing a shutdown and moving the computer, I can't get the drives
to mount automatically.
This is all new to me, so I am not sure if this is an LVM issue, but any help
is appreciated. lvs shows I have a mapped device present without tables.
When I try to mount the volume on /home, this happens:
[***@hobbes ~]# mount -t ext4 /dev/vg_data/lv_home /home
mount: wrong fs type, bad option, bad superblock on /dev/vg_data/lv_home,
missing codepage or other error
(could this be the IDE device where you in fact use
ide-scsi so that sr0 or sda or so is needed?)
In some cases useful info is found in syslog - try
dmesg | tail or so
[***@hobbes ~]# dmesg | tail
EXT4-fs (dm-0): unable to read superblock
[***@hobbes ~]# fsck.ext4 -v /dev/sdc1
e4fsck 1.41.12 (17-May-2010)
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/sdc1
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e4fsck with an alternate superblock:
e4fsck -b 8193 <device>
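(A note on that fsck run, in case it matters: /dev/sdc1 is the LVM physical volume, not the filesystem, so a bad magic number there may be expected; the ext4 filesystem would be on /dev/vg_data/lv_home. Also, the suggested -b 8193 assumes 1 KiB blocks. A rough sketch of where the first backup superblock lands for a given block size, with first_backup just an illustrative helper of my own:)

```shell
# First backup superblock = start of block group 1.
# Blocks per group defaults to 8 * block_size (one block bitmap's worth),
# and block 0 is skipped only for 1 KiB blocks (reserved boot block).
first_backup() {
  bs=$1
  per_group=$((8 * bs))
  first_block=$(( bs == 1024 ? 1 : 0 ))
  echo $(( first_block + per_group ))
}
first_backup 1024   # 8193  -- why e4fsck suggests -b 8193
first_backup 4096   # 32768 -- the -b value for a 4 KiB-block filesystem
```

So with the 4 KiB block size mke2fs reports below, "-b 32768 -B 4096" would be the form to try, against the LV device rather than the PV.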
[***@hobbes ~]# mke2fs -n /dev/sdc1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
488292352 inodes, 976555199 blocks
48827759 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
29803 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544
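(If anyone wants to sanity-check that list: with sparse_super, backups live in block group 1 and in groups that are powers of 3, 5, and 7, and each group here is 32768 blocks. This little loop reproduces the list above from those two numbers:)

```shell
# Reproduce the backup-superblock locations from the sparse_super rule:
# group 1 plus every power-of-3/5/7 group, 32768 blocks per group.
blocks_per_group=32768
total_blocks=976555199
backups=$(for base in 3 5 7; do
  g=1
  while [ $((g * blocks_per_group)) -lt $total_blocks ]; do
    echo $((g * blocks_per_group))
    g=$((g * base))
  done
done | sort -n -u)
echo "$backups"
```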
Is the superblock issue causing the LVM issues?
Thanks for any input you might have.
Here is some useful output about the system.
Some of the installed packages:
#rpm -qa | egrep -i '(kernel|lvm2|device-mapper)'
device-mapper-1.02.67-2.el5
kernel-devel-2.6.18-348.18.1.el5
device-mapper-event-1.02.67-2.el5
kernel-headers-2.6.18-371.6.1.el5
lvm2-2.02.88-12.el5
device-mapper-1.02.67-2.el5
kernel-devel-2.6.18-371.3.1.el5
device-mapper-multipath-0.4.7-59.el5
kernel-2.6.18-371.6.1.el5
kernel-devel-2.6.18-371.6.1.el5
kernel-2.6.18-371.3.1.el5
kernel-2.6.18-348.18.1.el5
lvm2-cluster-2.02.88-9.el5_10.2
#uname -a
Linux hobbes 2.6.18-371.6.1.el5 #1 SMP Wed Mar 12 20:03:51 EDT 2014 x86_64
x86_64 x86_64 GNU/Linux
LVM info:
#vgs
VG #PV #LV #SN Attr VSize VFree
vg_data 1 1 0 wz--n- 3.64T 0
#lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
lv_home vg_data -wi-d- 3.64T
Looks like I have a mapped device present without tables (the 'd' attribute).
#pvs
PV VG Fmt Attr PSize PFree
/dev/sdc1 vg_data lvm2 a-- 3.64T 0
#ls /dev/vg_data
lv_home
#vgscan --mknodes
Reading all physical volumes. This may take a while...
Found volume group "vg_data" using metadata type lvm2
#pvscan
PV /dev/sdc1 VG vg_data lvm2 [3.64 TB / 0 free]
Total: 1 [3.64 TB] / in use: 1 [3.64 TB] / in no VG: 0 [0 ]
#vgchange -ay
1 logical volume(s) in volume group "vg_data" now active
device-mapper: ioctl: error adding target to table
#dmesg |tail
device-mapper: table: device 8:33 too small for target
device-mapper: table: 253:0: linear: dm-linear: Device lookup failed
device-mapper: ioctl: error adding target to table
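(The "too small for target" line can be cross-checked with arithmetic from the outputs above: the LV needs 953668 extents of 4 MiB, while the mke2fs -n probe saw 976555199 blocks of 4096 bytes on /dev/sdc1. On the machine itself, blockdev --getsz /dev/sdc1 would give the live sector count to compare against:)

```shell
# Sectors the dm-linear target needs: 953668 extents x 4 MiB each,
# in 512-byte sectors.
lv_sectors=$((953668 * 4 * 1024 * 1024 / 512))
echo "LV needs:  $lv_sectors sectors"

# Sectors /dev/sdc1 currently provides, per the mke2fs -n probe
# (976555199 blocks x 4096 bytes):
part_sectors=$((976555199 * 4096 / 512))
echo "sdc1 has:  $part_sectors sectors"

# A positive shortfall here (before even counting the PV's pe_start
# data offset) would explain "device 8:33 too small for target".
echo "shortfall: $((lv_sectors - part_sectors)) sectors"
```

If the shortfall is real, it would suggest the partition (or the underlying RAID set) came back smaller after the move than when the PV was created.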
#vgdisplay -v
--- Volume group ---
VG Name vg_data
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 3.64 TB
PE Size 4.00 MB
Total PE 953668
Alloc PE / Size 953668 / 3.64 TB
Free PE / Size 0 / 0
VG UUID b2w9mR-hvSc-Rm0k-3yHL-iEgc-6nMq-uq69E1
--- Logical volume ---
LV Name /dev/vg_data/lv_home
VG Name vg_data
LV UUID 13TmTm-YqIo-6xIp-1NHf-AJTu-9ImE-SHwLz6
LV Write Access read/write
LV Status available
# open 0
LV Size 3.64 TB
Current LE 953668
Segments 1
Allocation inherit
Read ahead sectors 16384
- currently set to 256
Block device 253:0
--- Physical volumes ---
PV Name /dev/sdc1
PV UUID 8D67bX-xg4s-QRy1-4E8n-XfiR-0C2r-Oi1Blf
PV Status allocatable
Total PE / Free PE 953668 / 0
#lvscan
ACTIVE '/dev/vg_data/lv_home' [3.64 TB] inherit
#partprobe -s
/dev/sda: msdos partitions 1 2 3 4 <5 6 7 8 9 10>
/dev/sdb: msdos partitions 1 2 3 4 <5 6 7 8 9 10>
/dev/sdc: gpt partitions 1
#dmsetup table
vg_data-lv_home:
#dmsetup ls
vg_data-lv_home (253, 0)
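(For comparison, here is a sketch of what a healthy dmsetup table line for this LV should look like, in the format "<start> <length> linear <major:minor> <offset>". The pe_start offset is a placeholder I can't know from the output above; the real value would be in the /etc/lvm/backup metadata or the lvmdump archive:)

```shell
# Expected table for vg_data-lv_home, per the lvdisplay -m output:
# one linear segment covering all 953668 extents of /dev/sdc1 (8:33).
pe_start_sectors='<pe_start>'   # placeholder, not a real value
lv_length=$((953668 * 8192))    # 953668 extents x 4 MiB, in 512-byte sectors
echo "vg_data-lv_home: 0 $lv_length linear 8:33 $pe_start_sectors"
```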
#lvdisplay -m
--- Logical volume ---
LV Name /dev/vg_data/lv_home
VG Name vg_data
LV UUID 13TmTm-YqIo-6xIp-1NHf-AJTu-9ImE-SHwLz6
LV Write Access read/write
LV Status available
# open 0
LV Size 3.64 TB
Current LE 953668
Segments 1
Allocation inherit
Read ahead sectors 16384
- currently set to 256
Block device 253:0
--- Segments ---
Logical extent 0 to 953667:
Type linear
Physical volume /dev/sdc1
Physical extents 0 to 953667
Here is a link to the files produced by lvmdump:
https://www.dropbox.com/sh/isg4fdmthiyoszh/tyYOfqllya