Discussion:
[linux-lvm] Problem with mounting QNAP disks in Linux
Daniel Łaskowski
2017-06-01 11:35:50 UTC
Hello,

I'm trying to mount three disks from a QNAP TS-469L in Linux (Fedora 25).
The system correctly recognised the disks and the RAID5, but I have a
problem activating the LVM volumes that hold the data.

In the QNAP QTS system it looked like this:
Storage Pool 1 5,44 TB
├─DataVol1 (System) 5,38 TB (Free Size: ~300,00 GB)
└─DataVol2 509,46 GB (Free Size: ~400,00 GB)

In Fedora the disks and the RAID appear to be recognised OK.

$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 2,7T 0 disk
├─sdb4 8:20 0 517,7M 0 part
│ └─md123 9:123 0 448,1M 0 raid1
├─sdb2 8:18 0 517,7M 0 part
│ └─md127 9:127 0 517,7M 0 raid1
├─sdb5 8:21 0 8G 0 part
│ └─md125 9:125 0 6,9G 0 raid1
├─sdb3 8:19 0 2,7T 0 part
│ └─md124 9:124 0 5,5T 0 raid5
│ └─vg1-lv544 253:3 0 20G 0 lvm
└─sdb1 8:17 0 517,7M 0 part
└─md126 9:126 0 517,7M 0 raid1
sdc 8:32 0 2,7T 0 disk
├─sdc2 8:34 0 517,7M 0 part
│ └─md127 9:127 0 517,7M 0 raid1
├─sdc5 8:37 0 8G 0 part
│ └─md125 9:125 0 6,9G 0 raid1
├─sdc3 8:35 0 2,7T 0 part
│ └─md124 9:124 0 5,5T 0 raid5
│ └─vg1-lv544 253:3 0 20G 0 lvm
├─sdc1 8:33 0 517,7M 0 part
│ └─md126 9:126 0 517,7M 0 raid1
└─sdc4 8:36 0 517,7M 0 part
└─md123 9:123 0 448,1M 0 raid1
sda 8:0 0 2,7T 0 disk
├─sda4 8:4 0 517,7M 0 part
│ └─md123 9:123 0 448,1M 0 raid1
├─sda2 8:2 0 517,7M 0 part
│ └─md127 9:127 0 517,7M 0 raid1
├─sda5 8:5 0 8G 0 part
│ └─md125 9:125 0 6,9G 0 raid1
├─sda3 8:3 0 2,7T 0 part
│ └─md124 9:124 0 5,5T 0 raid5
│ └─vg1-lv544 253:3 0 20G 0 lvm
└─sda1 8:1 0 517,7M 0 part
└─md126 9:126 0 517,7M 0 raid1

$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md123 : active (auto-read-only) raid1 sda4[2] sdc4[1] sdb4[0]
458880 blocks super 1.0 [24/3] [UUU_____________________]
bitmap: 1/1 pages [4KB], 65536KB chunk

md124 : active (auto-read-only) raid5 sdb3[0] sdc3[1] sda3[2]
5840623232 blocks super 1.0 level 5, 64k chunk, algorithm 2 [3/3]
[UUU]

md125 : active (auto-read-only) raid1 sda5[2] sdc5[1] sdb5[0]
7168000 blocks super 1.0 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active (auto-read-only) raid1 sda1[2] sdc1[1] sdb1[0]
530112 blocks super 1.0 [24/3] [UUU_____________________]
bitmap: 1/1 pages [4KB], 65536KB chunk

md127 : active raid1 sda2[2](S) sdb2[0] sdc2[1]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
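
To double-check the RAID5 itself, the state of the array can be
inspected directly:
$ sudo mdadm --detail /dev/md124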

$ sudo pvs
PV VG Fmt Attr PSize PFree
/dev/md124 vg1 lvm2 a-- 5,44t 0

$ sudo vgs
VG #PV #LV #SN Attr VSize VFree
vg1 1 4 0 wz--n- 5,44t 0

$ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log
Cpy%Sync Convert
lv1 vg1 Vwi---tz-- 5,42t tp1
lv2 vg1 Vwi---tz-- 512,00g tp1
lv544 vg1 -wi-a----- 20,00g
tp1 vg1 twi---tz-- 5,40t

$ sudo lvscan
ACTIVE '/dev/vg1/lv544' [20,00 GiB] inherit
inactive '/dev/vg1/tp1' [5,40 TiB] inherit
inactive '/dev/vg1/lv1' [5,42 TiB] inherit
inactive '/dev/vg1/lv2' [512,00 GiB] inherit

If I understand correctly:
/dev/vg1/tp1 - is the QNAP "Storage Pool 1" (with real size 5,40 TiB)
/dev/vg1/lv1 - is the QNAP "DataVol1" (with virtual size 5,42 TiB)
/dev/vg1/lv2 - is the QNAP "DataVol2" (with virtual size 512 GiB)
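
(The pool relationship should also be visible directly:
$ sudo lvs -o lv_name,pool_lv,lv_size vg1
- lv1 and lv2 should list tp1 as their pool.)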

So my data are on /dev/vg1/lv1 and /dev/vg1/lv2, but they are "inactive"
and so cannot be mounted:
$ sudo mount /dev/vg1/lv2 /mnt/lv2
mount: special device /dev/vg1/lv2 does not exist
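
As far as I understand, only active LVs get device nodes, so /dev/vg1
should contain only lv544:
$ ls /dev/vg1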

I tried to activate it with lvchange, but there is a message that manual
repair of vg1/tp1 is required:
$ sudo lvchange -ay vg1/lv2
Check of pool vg1/tp1 failed (status:1). Manual repair required!

I tried the command lvconvert --repair vg1/tp1 with no success:
$ sudo lvconvert --repair vg1/tp1
Using default stripesize 64,00 KiB.
Volume group "vg1" has insufficient free space (0 extents): 4096
required.
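
Once the pool can be activated, its fullness could be checked with
something like:
$ sudo lvs -a -o lv_name,lv_size,data_percent,metadata_percent vg1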

Please help - how can I repair/mount this LVM?

Best Regards,
Daniel
Marian Csontos
2017-06-12 13:42:10 UTC
Post by Daniel Łaskowski
Hello,
I'm trying to mount three disks from a QNAP TS-469L in Linux (Fedora 25).
The system correctly recognised the disks and the RAID5, but I have a
problem activating the LVM volumes that hold the data.
...
Post by Daniel Łaskowski
$ sudo vgs
VG #PV #LV #SN Attr VSize VFree
vg1 1 4 0 wz--n- 5,44t 0
VFree 0 - that's bad...
Post by Daniel Łaskowski
I tried to activate it with lvchange, but there is a message that manual
repair of vg1/tp1 is required:
$ sudo lvchange -ay vg1/lv2
Check of pool vg1/tp1 failed (status:1). Manual repair required!
$ sudo lvconvert --repair vg1/tp1
Using default stripesize 64,00 KiB.
Volume group "vg1" has insufficient free space (0 extents): 4096
required.
Repair needs some space to write new metadata, and the message says it
all: there is no free space in the volume group.

Add more space to the volume group: vgextend vg1 DEVICE. You will either
need to add a disk, or carve some space out of the other MD devices.
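
For example (/dev/sdX1 here is just a placeholder for whatever spare
disk or partition you can attach):
$ sudo pvcreate /dev/sdX1
$ sudo vgextend vg1 /dev/sdX1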

Or free some space in the VG: you cannot shrink a thin pool, so the only
other option is, if the data on vg1/lv544 are not interesting or can be
moved elsewhere, to remove that LV and let the repair use its space.
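
Something like this - but only if you are sure lv544 is expendable:
$ sudo lvremove vg1/lv544
$ sudo lvconvert --repair vg1/tp1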

I also noticed that the size of logical volume lv1 alone is 5,42t (and
with lv2 it is approximately 5,9t) while the thin pool's is only 5,40t:
that is not a good setup - it will eventually overfill (maybe it already
has) and you will not be able to resize the pool any further to
accommodate all the data - with older kernels this was a serious problem
and could lead to file system corruption.

If that's what the NAS created for you, it should be reported to the
manufacturer as well.

Once you can mount the volumes, I strongly recommend shrinking lv1.
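
A rough sketch, assuming a plain ext4 filesystem sitting directly on the
LV - shrink the filesystem first, then the LV, keeping a safety margin
between the two sizes:
$ sudo e2fsck -f /dev/vg1/lv1
$ sudo resize2fs /dev/vg1/lv1 4T
$ sudo lvreduce -L 4.5T vg1/lv1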

-- Martian
Danniello
2017-06-12 15:16:51 UTC
Post by Marian Csontos
Post by Daniel Łaskowski
$ sudo vgs
VG #PV #LV #SN Attr VSize VFree
vg1 1 4 0 wz--n- 5,44t 0
VFree 0 - that's bad...
Post by Daniel Łaskowski
I tried to activate it with lvchange, but there is a message that manual
repair of vg1/tp1 is required:
$ sudo lvchange -ay vg1/lv2
Check of pool vg1/tp1 failed (status:1). Manual repair required!
$ sudo lvconvert --repair vg1/tp1
Using default stripesize 64,00 KiB.
Volume group "vg1" has insufficient free space (0 extents): 4096
required.
Repair needs some space to write new metadata, and the message says it
all: there is no free space in the volume group.
Add more space to the volume group: vgextend vg1 DEVICE. You will
either need to add a disk, or carve some space out of the other MD
devices.
Or free some space in the VG: you cannot shrink a thin pool, so the
only other option is, if the data on vg1/lv544 are not interesting or
can be moved elsewhere, to remove that LV and let the repair use its
space.
I also noticed that the size of logical volume lv1 alone is 5,42t (and
with lv2 it is approximately 5,9t) while the thin pool's is only 5,40t:
that is not a good setup - it will eventually overfill (maybe it
already has) and you will not be able to resize the pool any further
to accommodate all the data - with older kernels this was a serious
problem and could lead to file system corruption.
If that's what the NAS created for you, it should be reported to the
manufacturer as well.
Once you can mount the volumes, I strongly recommend shrinking lv1.
This setup was done by me on QNAP QTS - at the beginning I had only
DataVol1, but after some time I added the "small" DataVol2. It is not a
good configuration, but in the QTS system there was no option to shrink
DataVol1. Anyway, it was working OK in the QNAP QTS system (though with
warnings that the pool is almost full). Unfortunately my QNAP is no
longer working - the motherboard is dead...

I have a backup of all the important data in other places, but I wanted
to try to restore the "not important" data as well :)

In the QNAP QTS system it should work without additional actions - after
unlocking the encrypted DataVol1 and DataVol2 there should only be a
message about a filesystem check. I do not have access to another
working QNAP, so I tried with my Fedora desktop system, but activating
lv1 and lv2 did not work with the default configuration.

Probably you are right - I should add storage and then try to repair it.
But I found a "workaround": modifying /etc/lvm/lvm.conf in the global
section:
thin_check_executable = ""

I know that disabling thin_check is generally not recommended, but I
wanted to try everything before resorting to more drastic methods.
After this change I could activate volumes lv1 and lv2:
lvchange -ay /dev/vg1

Decrypt:
cryptsetup luksOpen /dev/vg1/lv1 crypt_lv1
cryptsetup luksOpen /dev/vg1/lv2 crypt_lv2

And mount:
mount -r /dev/mapper/crypt_lv1 /mnt/lv1
mount -r /dev/mapper/crypt_lv2 /mnt/lv2
Data accessible :)

I only want to copy some data from it, so a repair will not be
necessary. Next I plan to build my own NAS, so I will reformat the disks
from scratch.
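
When I finish copying I will take everything down in reverse order (and
restore thin_check_executable in lvm.conf afterwards):
umount /mnt/lv1 /mnt/lv2
cryptsetup luksClose crypt_lv1
cryptsetup luksClose crypt_lv2
vgchange -an vg1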

Thank you!
Daniel
