Discussion:
[linux-lvm] raid & its stripes
lejeczek
2017-09-13 13:53:58 UTC
Permalink
hi boys, girls

The man page reads: -i ..."This is equal to the number of
physical volumes to scatter the logical volume data...."
I wonder what happens when I do not use -i while creating
an LV on 10 phy devs.

$ lvcreate -n raid0.A --type raid0 -I 16 -l 97%pv

a dbench would show:
$ dbench -t 60 20
...
Throughput 112.309 MB/sec  20 clients  20 procs 
max_latency=719.409 ms

Yet when I explicitly ask for that many stripes:

$ lvcreate -n raid0.A --type raid0 -I 16 -i 10 -l 97%pv

dbench:
...
Throughput 83.2822 MB/sec  20 clients  20 procs 
max_latency=816.027 ms

And though the results vary, on xfs a dbench run against
the LV created with no -i argument (for which LVM then
chooses 2 stripes) always looks better.
And I had thought, as per the manual, one should always
make the stripes span all phy devices.
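To see how many stripes LVM actually picked, one can check with, e.g. (VG name is a placeholder):

$ lvs -a -o +stripes,stripe_size,devices <vgname>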

Question - is there some "little" magic LVM does? And if
yes, what is it and how does it work?
many thanks, L.

Brassow Jonathan
2017-09-14 14:58:11 UTC
Permalink
Seems strange on the surface. Would you mind posting the layout of each? ‘lvs -a -o +devices’

brassow
lejeczek
2017-09-14 15:49:16 UTC
Permalink
Post by Brassow Jonathan
Seems strange on the surface. Would you mind posting the layout of each? ‘lvs -a -o +devices’
brassow
here is the layout for the LV created without -i. Both
times, with and without -i, I supplied all ten PVs (all
that the VG has) as arguments to lvcreate.

$ lvs -a -o +devices,stripes,stripe_size chenbro0.1
  LV                 VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                  #Str Stripe
  raid0.A            chenbro0.1 rwi-aor--- 21.18t                                                     raid0.A_rimage_0(0),raid0.A_rimage_1(0)     2 16.00k
  [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t                                                     /dev/sdak(0)                                1      0
  [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t                                                     /dev/sdam(0)                                1      0
  [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t                                                     /dev/sdao(0)                                1      0
  [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t                                                     /dev/sdaq(0)                                1      0
  [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t                                                     /dev/sdas(0)                                1      0
  [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t                                                     /dev/sdau(0)                                1      0
  [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t                                                     /dev/sdal(0)                                1      0
  [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t                                                     /dev/sdan(0)                                1      0
  [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t                                                     /dev/sdap(0)                                1      0
  [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t                                                     /dev/sdar(0)                                1      0
  [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t                                                     /dev/sdat(0)                                1      0
  [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t                                                     /dev/sdav(0)                                1      0

I cannot remove this LV for a while, so I will not be able
to recreate it with -i for now, sorry.
Brassow Jonathan
2017-09-15 02:20:10 UTC
Permalink
There is definitely a difference here. You have 2 stripes with 6 devices in each stripe (per the layout above). If you were writing sequentially, you’d be bouncing between the first 2 devices until they are full, then the next 2, and so on.

When using the -i argument, you are creating 10 stripes. Writing sequentially causes the writes to go from one device to the next until all have been written to, and then to start back at the first. This is a very different pattern.

I think the results of any benchmark on these two very different layouts would differ significantly.
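A rough sketch of the difference, for illustration only (the leg counts are taken from the two layouts discussed here, not from anything LVM computes this way):

# Successive stripe-sized chunks of a sequential write round-robin over
# the stripe legs: 10 legs with '-i 10', 2 legs with the default layout.
for chunk in 0 1 2 3 4 5 6 7 8 9 10 11; do
  echo "chunk $chunk: 10-stripe -> leg $((chunk % 10)); 2-stripe -> leg $((chunk % 2))"
done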

brassow

BTW, I could swear that at one point, if you did not provide ‘-i’, it would stripe across all of the devices, so that your two examples would result in the same thing. I could be wrong though.
lejeczek
2017-09-15 11:59:45 UTC
Permalink
Post by Brassow Jonathan
BTW, I could swear that at one point, if you did not provide ‘-i’, it would stripe across all of the devices, so that your two examples would result in the same thing. I could be wrong though.
That's what I thought I remembered too.
I guess the big question, from a user/admin perspective,
is: when no -i is given, are the two stripes LVM decides
on the best possible choice, arrived at after some
elaborate determination, so that the number of stripes
might vary with the RAID type, the number of phy devices
and perhaps other factors? Or is 2 stripes simply a
hard-coded default?
Brassow Jonathan
2017-09-18 16:10:19 UTC
Permalink
Post by lejeczek
I guess the big question, from a user/admin perspective, is: when no -i is given, are the two stripes LVM decides on the best possible choice, arrived at after some elaborate determination, so that the number of stripes might vary with the RAID type, the number of phy devices and perhaps other factors? Or is 2 stripes simply a hard-coded default?
If it is a change in behavior, I’m sure it came as the result of changes in the RAID handling code in recent updates, and not from some uber-intelligent agent trying to figure out the best fit.

brassow
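If memory serves, newer lvm2 also exposes a switch for the old stripe-across-all-PVs behaviour; assuming a recent enough version (see lvm.conf(5), allocation/raid_stripe_all_devices), something like:

$ lvcreate --config 'allocation/raid_stripe_all_devices=1' -n raid0.A --type raid0 -I 16 -l 97%pv

should use all PVs again without an explicit -i.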
lejeczek
2017-09-23 15:35:08 UTC
Permalink
Post by Brassow Jonathan
If it is a change in behavior, I’m sure it came as the result of changes in the RAID handling code in recent updates, and not from some uber-intelligent agent trying to figure out the best fit.
brassow
But it confuses; the current state of affairs is
confusing. To wit:
~]# lvcreate --type raid5 -n raid5-0 -l 96%vg caddy-six /dev/sd{a..f}
  Using default stripesize 64.00 KiB.
  Logical volume "raid5-0" created.
~]# lvs -a -o +stripes caddy-six
  LV                 VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert #Str
  raid5-0            caddy-six rwi-a-r---   1.75t                                    0.28                 3
  [raid5-0_rimage_0] caddy-six Iwi-aor--- 894.25g                                                         1
  [raid5-0_rimage_0] caddy-six Iwi-aor--- 894.25g                                                         1
  [raid5-0_rimage_1] caddy-six Iwi-aor--- 894.25g                                                         1
  [raid5-0_rimage_1] caddy-six Iwi-aor--- 894.25g                                                         1
  [raid5-0_rimage_2] caddy-six Iwi-aor--- 894.25g                                                         1
  [raid5-0_rimage_2] caddy-six Iwi-aor--- 894.25g                                                         1
  [raid5-0_rmeta_0]  caddy-six ewi-aor---   4.00m                                                         1
  [raid5-0_rmeta_1]  caddy-six ewi-aor---   4.00m                                                         1
  [raid5-0_rmeta_2]  caddy-six ewi-aor---   4.00m                                                         1
The VG and the LV were, upon creation, told to use 6 PVs.
How can we rely on what lvcreate does when left to its own
defaults?
Is the above raid5 example what LVM is supposed to do? Is
it even a correct raid5 layout (six phy disks)?
regards.
 ~]# lvs -a -o +stripes,devices caddy-six
  LV                 VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert #Str Devices
  raid5-0            caddy-six rwi-a-r---   1.75t                                    2.36                 3 raid5-0_rimage_0(0),raid5-0_rimage_1(0),raid5-0_rimage_2(0)
  [raid5-0_rimage_0] caddy-six Iwi-aor--- 894.25g                                                         1 /dev/sda(1)
  [raid5-0_rimage_0] caddy-six Iwi-aor--- 894.25g                                                         1 /dev/sdd(0)
  [raid5-0_rimage_1] caddy-six Iwi-aor--- 894.25g                                                         1 /dev/sdb(1)
  [raid5-0_rimage_1] caddy-six Iwi-aor--- 894.25g                                                         1 /dev/sde(0)
  [raid5-0_rimage_2] caddy-six Iwi-aor--- 894.25g                                                         1 /dev/sdc(1)
  [raid5-0_rimage_2] caddy-six Iwi-aor--- 894.25g                                                         1 /dev/sdf(0)
  [raid5-0_rmeta_0]  caddy-six ewi-aor---   4.00m                                                         1 /dev/sda(0)
  [raid5-0_rmeta_1]  caddy-six ewi-aor---   4.00m                                                         1 /dev/sdb(0)
  [raid5-0_rmeta_2]  caddy-six ewi-aor---   4.00m                                                         1 /dev/sdc(0)
Brassow Jonathan
2017-09-25 21:36:07 UTC
Permalink
Post by lejeczek
~]# lvcreate --type raid5 -n raid5-0 -l 96%vg caddy-six /dev/sd{a..f}
[...]
How can we rely on what lvcreate does when left to its own defaults? Is the above raid5 example what LVM is supposed to do? Is it even a correct raid5 layout (six phy disks)?
Yeah, looks right to me. It seems it is picking the minimum viable stripe count for the particular RAID type. RAID0 obviously needs at least 2 stripes; if you give it 8 devices, it will still choose 2 stripes, with 4 devices composing each “leg”/image. Your RAID5 needs 3 devices (2 for striping and 1 for parity); again, given 6 devices it will choose the minimum stripe count plus a parity. I suspect RAID6 would choose at least 3 stripes plus the 2 mandatory parities, for a minimum of 5 devices.
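For illustration, assuming the minimum-stripe defaults described above (the VG and LV names here are made up), the implicit choices would correspond to something like:

$ lvcreate --type raid0 -i 2 -n r0 -l 100%FREE vg    # raid0: 2 stripes = 2 devices
$ lvcreate --type raid5 -i 2 -n r5 -l 100%FREE vg    # raid5: 2 stripes + 1 parity = 3 devices
$ lvcreate --type raid6 -i 3 -n r6 -l 100%FREE vg    # raid6: 3 stripes + 2 parities = 5 devices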

Bottom line: if you want a specific number of stripes, use ‘-i’. Remember, ‘-i’ specifies the number of data stripes; the parity count is added automatically.
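So for the six-PV raid5 above, asking for five data stripes should spread the LV across all six devices (five data plus one parity):

$ lvcreate --type raid5 -i 5 -n raid5-0 -l 96%vg caddy-six /dev/sd{a..f}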

brassow
