Discussion:
[linux-lvm] Move LV with GFS to new LUN (pvmove) in the cluster
kAja Ziegler
2018-05-23 12:31:32 UTC
Hi all,

I want to ask whether it is possible and safe to move a clustered LV with GFS
online from one PV (a multipathed LUN on the old storage) to another one (a
multipathed LUN on the new storage).

I found these articles in Red Hat knowledgebase:

- Can I perform a pvmove on a clustered logical volume? -
https://access.redhat.com/solutions/39894
- How to migrate SAN LUNs which has Clustered LVM configured on it? -
https://access.redhat.com/solutions/466533

According to the mentioned articles it can be done; the only additional
requirement is to install and run the cmirror service. Should I expect any
problems or other prerequisites?
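
If I understand the articles correctly, the pvmove-based sequence would be
roughly the following (mpath_new is just a placeholder name for the new
multipathed LUN; the old PV is the one shown below):

# cmirror must be installed and running on all nodes first (per the articles)
vgextend vg_1 /dev/mapper/mpath_new                          # add the new LUN to the VG
pvmove /dev/mapper/35001b4d01b1da512 /dev/mapper/mpath_new   # move all extents online
vgreduce vg_1 /dev/mapper/35001b4d01b1da512                  # drop the emptied old LUN
pvremove /dev/mapper/35001b4d01b1da512                       # optionally wipe its LVM label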


My clustered environment:

- 8 nodes - CentOS 6.9
- LVM version: 2.02.143(2)-RHEL6 (2016-12-13)
Library version: 1.02.117-RHEL6 (2016-12-13)
Driver version: 4.33.1
- 7 clustered VGs overall
- 1 LV with GFS mounted on all nodes


- 1 clustered VG with 1 PV and 1 LV holding the GFS filesystem:

[***@...]# pvdisplay /dev/mapper/35001b4d01b1da512
--- Physical volume ---
PV Name /dev/mapper/35001b4d01b1da512
VG Name vg_1
PV Size 4.55 TiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 1192092
Free PE 1115292
Allocated PE 76800
PV UUID jH1ubM-ElJv-632D-NG8x-jzgJ-mwtA-pxxL90

[***@...]# lvdisplay vg_1/lv_gfs
--- Logical volume ---
LV Path /dev/vg_1/lv_gfs
LV Name lv_gfs
VG Name vg_1
LV UUID OsJ8hM-sH9k-KNs1-B1UD-3qe2-6vja-hLsrYY
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 1
LV Size 300.00 GiB
Current LE 76800
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:418

[***@...]# vgdisplay vg_1
--- Volume group ---
VG Name vg_1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3898
VG Access read/write
VG Status resizable
Clustered yes
Shared no
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 4.55 TiB
PE Size 4.00 MiB
Total PE 1192092
Alloc PE / Size 76800 / 300.00 GiB
Free PE / Size 1115292 / 4.25 TiB
VG UUID PtMo7F-XIbC-YSA0-rCQQ-R1oE-g8B7-PiAeIR


- I/O activity on the PV (LUN) is very low: from iostat, averaged per node
over 1 minute, 2.5 tps, 20.03 Blk_read/s and 0 Blk_wrtn/s (see the iostat
example below).
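
The numbers above come from iostat; a comparable reading can be taken with
something like this (the 60-second interval and report count are my choice):

iostat -d 60 2    # use the second report: the first one averages over the whole uptime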


Thank you for your opinions and experience.

Have a great day and with best regards,
--
Karel Ziegler
emmanuel segura
2018-05-24 08:13:47 UTC
I used this procedure to achieve what you need to do:

1: activate cmirror on every cluster node
2: lvconvert -m 1 vg00/lvdata /dev/mapper/mpath1 --corelog  # where mpath1 is
the new LUN

When the lvdata LV is in sync, you can detach the old LUN with:

lvconvert -m 0 vg00/lvdata /dev/mapper/mpath0
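
For completeness, the full sequence, with mpath0/mpath1 as placeholder names
for the old and new multipath devices, is roughly as follows (the cmirror
service name is what I remember from RHEL/CentOS 6 and may need checking):

# 0: on every node, make sure the cluster mirror daemon is running
service cmirrord start            # from the cmirror package; name may differ

# 1: add the new LUN to the VG so the mirror leg can be allocated on it
vgextend vg00 /dev/mapper/mpath1

# 2: add a mirror leg on the new LUN, keeping the mirror log in memory
lvconvert -m 1 vg00/lvdata /dev/mapper/mpath1 --corelog

# 3: wait until the copy percentage reaches 100.00
lvs -o lv_name,copy_percent vg00/lvdata

# 4: drop the leg on the old LUN, then remove the old PV from the VG
lvconvert -m 0 vg00/lvdata /dev/mapper/mpath0
vgreduce vg00 /dev/mapper/mpath0
pvremove /dev/mapper/mpath0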
kAja Ziegler
2018-05-29 07:55:29 UTC
Hi Emmanuel and all,

so is it better to perform lvconvert or pvmove (if pvmove is supported) on a
clustered logical volume?

With best regards,
--
Karel Ziegler