Discussion:
[linux-lvm] Add a disk from remote node to LVM
Mahmood Naderan
2014-05-07 07:33:07 UTC
Hello,
Is it possible to add a network drive to an existing LVM volume group? I have created a volume group on the main node (N1) and have added three local drives. Now I want to add a remote disk from another node (N2). In other words, I want to add N2:/dev/sdb to N1:/dev/tigerfiler/tigervolume.


Is that possible? How? All the examples I see add extra local drives, not remote drives.


Here is some info:


# vgdisplay
  --- Volume group ---
  VG Name               tigerfiler1
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               2.73 TiB
  PE Size               4.00 MiB
  Total PE              715401
  Alloc PE / Size       715401 / 2.73 TiB
  Free  PE / Size       0 / 0
  VG UUID               8Ef8Vj-bDc7-H4ia-D3X4-cDpY-kE9Z-njc8lj



# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               tigerfiler1
  PV Size               931.51 GiB / not usable 1.71 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238467
  Free PE               0
  Allocated PE          238467
  PV UUID               FmC77z-9UaR-FhYa-ONHZ-EazF-5Hm2-8zmUuj

  --- Physical volume ---
  PV Name               /dev/sdc
  VG Name               tigerfiler1
  PV Size               931.51 GiB / not usable 1.71 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238467
  Free PE               0
  Allocated PE          238467
  PV UUID               1jBQUn-gkkD-37I3-R3nL-KeHA-Hn2A-4zgNcR

  --- Physical volume ---
  PV Name               /dev/sdd
  VG Name               tigerfiler1
  PV Size               931.51 GiB / not usable 1.71 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238467
  Free PE               0
  Allocated PE          238467
  PV UUID               mxi8jW-O868-iPse-IfY7-ag3m-R3vZ-gS3Jdx



Regards,
Mahmood    
Peter Rajnoha
2014-05-07 09:40:59 UTC
Post by Mahmood Naderan
Hello,
Is it possible to add a network drive to an existing LVM volume group? I have created a volume group on the main node (N1) and have added three local drives. Now I want to add a remote disk from another node (N2). In other words, I want to add N2:/dev/sdb to N1:/dev/tigerfiler/tigervolume.
Is that possible? How? All the examples I see add extra local drives, not remote drives.
What exactly do you mean by "remote drive"? What type is it - iscsi, nbd...?
Or is it just a standard local drive attached to machine N2 that you need to access on machine N1?
Is the drive shared? Or do you have exclusive access to that drive from a single machine?

In case the drive is shared amongst several nodes, you need to consider using clustered LVM (clvmd).
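As a one-line illustration (assuming clvmd is already running on all nodes; "vg" is a placeholder for your actual VG name):

  vgchange -cy vg    # mark an existing VG as clustered

The rest of the cluster setup is beyond the scope of this mail.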

But essentially, you need to connect the remote drive first and then do a pvcreate and vgextend.
--
Peter
Mahmood Naderan
2014-05-07 10:04:32 UTC
Post by Peter Rajnoha
What exactly do you mean by "remote drive"? What type is it - iscsi, nbd...?
There are two nodes in a single rack. Each has 4 slots for SATA II disks. The current configuration is:

N1: /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd, where the last three (b, c, d) are grouped in a single LVM volume group.

N2: /dev/sda, /dev/sdb, where the last one (b) is free and I want to add it to the LVM volume group of N1.

Each node is running an independent operating system (Scientific Linux). Currently /dev/sdb on N2 is not shared. I have just formatted it as ext4 (which is the same filesystem as on the LVM volume of N1).
Post by Peter Rajnoha
But essentially, you need to connect the remote drive first and then do a pvcreate and vgextend.
Should I run these commands on N1 or N2?

Regards,
Mahmood
Peter Rajnoha
2014-05-07 12:20:12 UTC
Post by Mahmood Naderan
Post by Peter Rajnoha
What exactly do you mean by "remote drive"? What type is it - iscsi, nbd...?
There are two nodes in a single rack. Each has 4 slots for SATA II disks. The current configuration is:
N1: /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd, where the last three (b, c, d) are grouped in a single LVM volume group.
N2: /dev/sda, /dev/sdb, where the last one (b) is free and I want to add it to the LVM volume group of N1.
Each node is running an independent operating system (Scientific Linux). Currently /dev/sdb on N2 is not shared. I have just formatted it as ext4 (which is the same filesystem as on the LVM volume of N1).
One option here is to export sdb on N2 as an iscsi target and then connect
it to the N1 node. But you must be very careful that nothing touches sdb on N2
(mainly, nothing writing to the drive and nothing trying to activate anything
that can be found on the drive), including LVM itself. As for LVM, you should
configure global_filter (or filter, if you have an older version of LVM) in
/etc/lvm/lvm.conf to exclude sdb from LVM scans (also, be cautious that under
some circumstances "sdb" is not a stable name and the kernel can assign a
different name on the next reboot).

For example, you can have a look at these howtos. There are slight differences
among distributions, mainly in the package names, but the logic stays the same:

http://www.howtoforge.com/using-iscsi-on-fedora-10-initiator-and-target
http://www.howtoforge.com/using-iscsi-on-debian-squeeze-initiator-and-target
http://www.howtoforge.com/using-iscsi-on-ubuntu-10.04-initiator-and-target
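
To give a rough idea only (tool and package names differ per distribution,
and the IQN below is made up), the sequence with scsi-target-utils on N2 and
open-iscsi on N1 could look like:

  # on N2 - export /dev/sdb as an iscsi target
  tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2014-05.local:n2.sdb
  tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sdb
  tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

  # on N1 - discover the target and log in
  iscsiadm -m discovery -t sendtargets -p <N2 IP>
  iscsiadm -m node -T iqn.2014-05.local:n2.sdb -p <N2 IP> --login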

But iscsi is just one option; there are more ways to export a drive and
attach it on a remote node... You can also check "nbd" (network block device),
which is another way that comes to my mind at the moment.
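
Again just a sketch, using the older-style invocation (the port and the nbd
device are arbitrary choices):

  # on N2 - export /dev/sdb over the network
  nbd-server 10809 /dev/sdb

  # on N1 - attach it as a local block device
  modprobe nbd
  nbd-client <N2 IP> 10809 /dev/nbd0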

Though the best option in your case would be to attach the drive physically
to the machine where it is needed, if that is possible, of course.
Post by Mahmood Naderan
Post by Peter Rajnoha
But essentially, you need to connect the remote drive first and then do a pvcreate and vgextend.
Should I run these commands on N1 or N2?
Once you have the remote drive attached (let's say it ends up as sdx on N1),
just use:

pvcreate /dev/sdx
vgextend vg /dev/sdx

(replace sdx and vg with the actual drive and VG names)

And you're done.
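
If you also want the new space in the existing LV and the filesystem on top
of it (assuming the VG is tigerfiler1 as in your vgdisplay output, the LV is
tigervolume, and it carries ext4), the follow-up would be something like:

  lvextend -l +100%FREE /dev/tigerfiler1/tigervolume
  resize2fs /dev/tigerfiler1/tigervolume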
--
Peter
Mahmood Naderan
2014-05-07 14:53:16 UTC
One option here is to export sdb on N2 as an iscsi target and then connect it to the N1 node.
That seems good. I will try it.

Regards,
Mahmood
Marian Csontos
2014-05-07 13:00:12 UTC
Post by Mahmood Naderan
Post by Peter Rajnoha
What exactly do you mean by "remote drive"? What type is it - iscsi, nbd...?
There are two nodes in a single rack. Each has 4 slots for SATA II disks. The current configuration is:
N1: /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd, where the last three (b, c, d) are grouped in a single LVM volume group.
N2: /dev/sda, /dev/sdb, where the last one (b) is free and I want to add it to the LVM volume group of N1.
Each node is running an independent operating system (Scientific Linux). Currently /dev/sdb on N2 is not shared. I have just formatted it as ext4 (which is the same filesystem as on the LVM volume of N1).

In any case, keep in mind that node N2 would need to be up and running all
the time, or you risk rendering N1 inoperable while N2 has an outage or is
broken - depending on which data would be stored on N2.

I suggest either:

- using RAID[156] to make the remote drive (temporarily) dispensable, or

- making the PV non-allocatable by default and enabling it only when manually
allocating LVs for data you do not need all the time (see the sketch after
this list) - for example, you do not want your root filesystem corrupted
while the drive is unavailable, but that may be acceptable for backup copies
(though NFS may be a sufficient solution for that).
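
A minimal sketch of the second option (with sdx standing in for whatever
name the remote drive ends up with on N1):

  pvchange -xn /dev/sdx   # forbid new allocations on this PV
  pvchange -xy /dev/sdx   # re-enable allocation when you deliberately place an LV there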

Another option would be to use a distributed filesystem like Gluster, which
was designed to keep data spread over a bunch of nodes while ensuring
redundancy. But I understand it was designed to work with a larger number of
nodes, and I am not sure how efficient it would be with just the 2 nodes you
are describing.

-- Martian