Xen
2016-04-23 16:50:01 UTC
Attempting to join this list, but the web interface is down.
I am interested in using lvmcache, but I hear the performance is very
meagre; that was in a thread from 2014 that I was reading. A recent
blog post says the same:
https://www.rath.org/ssd-caching-under-linux.html
The author first tries lvmcache and sees no noticeable improvement;
then he uses bcache and it works like a charm.
He was using regular partitions on regular disks, no RAID involved.
He was trying to speed up booting and did not notice any improvement,
and setting the promote adjustments to zero made no difference either.
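For reference, this is roughly the setup and the tuning he describes; a
minimal sketch with made-up VG/LV/device names, and --cachesettings
needs a reasonably recent lvm2:

  # Assuming an existing VG "vg" with the slow origin LV "origin", and the
  # SSD already added to the VG as /dev/sdb1 (all names are placeholders).
  lvcreate -n cpool -L 18G vg /dev/sdb1        # cache data LV on the SSD
  lvcreate -n cpool_meta -L 1G vg /dev/sdb1    # cache metadata LV on the SSD
  lvconvert --type cache-pool --poolmetadata vg/cpool_meta vg/cpool
  lvconvert --type cache --cachepool vg/cpool vg/origin

  # What the blog post tried: zero the mq policy's promote adjustments so
  # blocks get promoted to the SSD on first access instead of after N hits.
  lvchange --cachesettings 'read_promote_adjustment=0 write_promote_adjustment=0' vg/origin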
Is this feature defunct? I mean, is this really a usable and functional
thing? From what it seems, I think not.
I'll have a small mSATA SSD to test with shortly, but... if this thing
is so bugged that it keeps reading from the origin device anyway,
there's not much point to it.
The reason I want to use it at this point is to speed up booting
(although that is unimportant), but mostly to make the system more
snappy while running, i.e. application startup for instance.
At the same time it would decouple my system from my data by using that
small SSD for the system (by way of the cache), such that system seeks
no longer affect the performance of other operations on the device that
also holds that 'data'.
I don't mind having writeback for this, because any (small) delay in
writing to the origin would work well with this strategy.
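If I read the docs correctly, the cache mode is chosen when the cache
pool is attached; a sketch, again with placeholder names:

  # Default is writethrough; writeback lets writes to the origin lag behind.
  lvconvert --type cache --cachemode writeback --cachepool vg/cpool vg/origin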
Some IO is buffered anyway, and you might think that with 8GB of RAM
you would have plenty of IO buffering already, but a cache is something
that persists between reboots, so to speak.
I'm simply trying to see what improvements I can get with LVM.
That aside, I find it annoying that on Debian, Ubuntu and Kubuntu,
thin-provisioning-tools is still not installed by default, and you also
need to create an initramfs hook to have its binaries included in the
initramfs. Then, GRUB2 does not support thin LVM volumes at all:
although you can boot fine on a thin root, grub-probe cannot process
these volumes and will complain and exit.
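The hook I mean is something along these lines (a sketch for
initramfs-tools; the file name is made up and the binary paths may
differ per distribution):

  #!/bin/sh
  # /etc/initramfs-tools/hooks/thin-tools
  PREREQ=""
  prereqs() { echo "$PREREQ"; }
  case "$1" in
      prereqs) prereqs; exit 0 ;;
  esac
  . /usr/share/initramfs-tools/hook-functions
  # copy_exec copies the binary plus the libraries it links against
  copy_exec /usr/sbin/thin_check
  copy_exec /usr/sbin/thin_repair

After making it executable, update-initramfs -u rebuilds the image.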
Personally I think snapshotting on thin is much more usable, if that's
what you want to use: there is no need to allocate sufficiently-sized
snapshot volumes in advance. (I tried to ask about this on #lvm but
they were not helpful.)
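To illustrate the difference (again with made-up names):

  # Classic snapshot: CoW space must be reserved up front, size guessed.
  lvcreate -s -L 5G -n root_snap vg/root

  # Thin snapshot: no size argument, blocks come out of the thin pool.
  lvcreate -s -n thinroot_snap vg/thinroot
  lvchange -K -ay vg/thinroot_snap   # thin snapshots carry the activation-skip flag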
I don't really get why LVM is being so neglected, other than because of
the popularity of btrfs these days.
LVM is much more modular and normally very easy to work with. The
commands make sense for the most part, and the only thing missing is a
nice GUI.
(I was even writing... some snapshot to incremental backup script at
some point).
From my perspective LVM is one of the saner things still existing in
Linux, even if it doesn't feel perfect; but that is more due to the
fact that it is a software emulation of partitions that kind of tries
to "avoid" having to do it in "firmware", in the sense that you are
trying to do what regular partitions can't.
Someone called this "deferred design", I believe.
Because of this I believe it cannot really fully work out (for me),
particularly when it comes to cross-platform use and encryption; in the
same way, I prefer a firmware/BIOS environment for managing RAID arrays
rather than pure software.
At the same time, with UEFI et al. my opinion is just the reverse:
please let the boot loader be software in some way, even if it's just a
menu. At least you can adjust your software; the firmware you might not
have a say about.
Deferred design is when Linux tools do everything that a regular
computer's firmware environment should. And then they created UEFI and
tried to take away things that do belong to the operating system, while
not creating anything that works better, except for GPT disks.
I also have a question about thin LVM, but I will ask it in another
email.