Sven Eschenberg
2018-08-31 16:28:58 UTC
Hi all,
I recently had to reboot one of my machines that is usually up 24/7. To
my surprise, activation of the LVM RAID volume failed, and I am somewhat
stuck finding the reason: the userland was updated regularly and
everything used to work as expected.
Version of lvm being used: lvm2-2.02.181
lvchange -v -ay TsR1/port
Activating logical volume TsR1/port exclusively.
activation/volume_list configuration setting not defined: Checking
only host tags for TsR1/port.
Creating TsR1-port_rmeta_0
Loading table for TsR1-port_rmeta_0 (253:0).
Resuming TsR1-port_rmeta_0 (253:0).
Creating TsR1-port_rimage_0
Loading table for TsR1-port_rimage_0 (253:1).
Resuming TsR1-port_rimage_0 (253:1).
Creating TsR1-port_rmeta_1
Loading table for TsR1-port_rmeta_1 (253:2).
Resuming TsR1-port_rmeta_1 (253:2).
Creating TsR1-port_rimage_1
Loading table for TsR1-port_rimage_1 (253:3).
Resuming TsR1-port_rimage_1 (253:3).
Creating TsR1-port
Loading table for TsR1-port (253:5).
device-mapper: reload ioctl on (253:5) failed: Invalid argument
Removing TsR1-port (253:5)
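The "Invalid argument" returned by the reload ioctl is generic; the
device-mapper target usually logs the actual reason for rejecting the
table to the kernel ring buffer. A diagnostic sketch (assuming a
standard dmesg is available) to run right after the failed lvchange:

```shell
# Show the most recent device-mapper messages; the dm-raid target
# typically explains here why the table load was rejected.
dmesg | grep -i 'device-mapper' | tail -n 20
```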
dmsetup table:
TsR1-port_rmeta_1: 0 512 linear 8:50 512
TsR1-port_rmeta_0: 0 512 linear 8:34 512
TsR1-port_rimage_1: 0 8388608 linear 8:50 1024
TsR1-port_rimage_0: 0 8388608 linear 8:34 1024
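Since the failed activation leaves the _rmeta/_rimage sub-LVs mapped
(as the dmsetup output shows), one way back to a clean state before
retrying is to remove the leftover device-mapper nodes by hand. A
cleanup sketch, run as root, with the device names taken from the
dmsetup table output above:

```shell
# Remove the sub-LV mappings left behind by the failed activation so a
# subsequent "lvchange -ay TsR1/port" starts from a clean slate.
for dev in TsR1-port_rimage_0 TsR1-port_rimage_1 \
           TsR1-port_rmeta_0 TsR1-port_rmeta_1; do
    dmsetup remove "$dev"
done
```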
Running lvchange again:
lvchange -v -ay TsR1/port
Activating logical volume TsR1/port exclusively.
Activation of logical volume TsR1/port is prohibited while logical
volume TsR1/port_rimage_0 is active.
Since activation fails, shouldn't the changes be rolled back entirely?
Usually one would expect identical behavior on a second invocation.
And the main question: why does the activation fail in the first place?