Discussion:
[linux-lvm] lvmcache performance
Nikolaus Rath
2016-02-07 18:43:30 UTC
Hello,

My system boots rather slowly, so when I noticed a lot of free space on
one of my SSDs, I wanted to try to speed up the boot by caching my root
volume.

For benchmarking purposes, I first moved the entire LV holding the root
file system to the SSD. This reduced the boot time from about 90 seconds
to 20 - nice!

I then moved the root fs LV back to the spinning disk and created a
cache pool on the SSD. My root fs is 40 GB, with 19 GB used. The cache
pool had a 15 GB data volume and a 15 MB metadata volume, and used
writeback caching. Finally, I converted my root fs LV to a cache LV.

To get data promoted quickly during boot, I also set up my initrd to call

dmsetup message ${name} 0 sequential_threshold 0
dmsetup message ${name} 0 read_promote_adjustment 0
dmsetup message ${name} 0 write_promote_adjustment 0
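(Note that raw dmsetup messages only affect the currently loaded table
and are lost when the LV is reactivated; with a recent enough lvm2 the
same mq tunables can be stored persistently in the LVM metadata instead.
vg0/root is a placeholder name:)

```shell
# Persist the mq policy tunables so they survive reactivation.
lvchange --cachesettings \
    'sequential_threshold=0 read_promote_adjustment=0 write_promote_adjustment=0' \
    vg0/root
```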


However, even after a few reboots I do not notice any difference at all
in boot times. It is as if the cache was not used at all.


With the above settings (and the cache pool almost as big as the used
space on the origin LV), I would have expected to get almost the same
performance as when moving the entire LV to the SSD.

Is that the wrong expectation? But even then, shouldn't I at least see
some improvement?

Thanks,
-Nikolaus
--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

»Time flies like an arrow, fruit flies like a Banana.«
Joe Thornber
2016-02-08 09:17:06 UTC
Post by Nikolaus Rath
Is that the wrong expectation? But even then, shouldn't I at least see
some improvement?
Firstly, if you have the latest software, I suggest you switch to the
smq cache policy, which generally outperforms the old mq policy
substantially.

Secondly, I think you may need to reset your expectations a bit.
dm-cache is a slow-moving cache. It monitors IO, detecting hotspots
on the disk, and then 'promotes' those regions to the SSD. You may
find that the reads that occur during boot happen only infrequently
once the system is up.

- Joe
John Stoffel
2016-02-08 16:51:07 UTC
Post by Nikolaus Rath
Is that the wrong expectation? But even then, shouldn't I at least see
some improvement?
Joe> Firstly, if you have the latest software, I suggest you switch to
Joe> the smq cache policy, which generally outperforms the old mq
Joe> policy substantially.

Is there a tool to help measure lvmcache performance metrics and
display how well the cache is working?
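One low-tech option is to read the counters straight out of dmsetup
status. According to the kernel's dm-cache documentation, the status
line reports (after the metadata and cache block usage) the read hits,
read misses, write hits and write misses. A sketch, using a fabricated
status line in place of real `dmsetup status` output (vg0-root is a
placeholder mapping name):

```shell
# Fabricated sample; on a real system: status=$(dmsetup status vg0-root)
status="0 83886080 cache 8 27/2048 128 190/3840 2048 512 768 256 0 150 10 1 writeback 2 migration_threshold 2048 smq 0 rw -"

# Field order per the kernel dm-cache docs:
#   $8 = read hits, $9 = read misses, $10 = write hits, $11 = write misses
read_hit_pct=$(echo "$status" | awk '{printf "%.1f", 100 * $8 / ($8 + $9)}')
write_hit_pct=$(echo "$status" | awk '{printf "%.1f", 100 * $10 / ($10 + $11)}')
echo "read hit rate: ${read_hit_pct}%  write hit rate: ${write_hit_pct}%"
```

Recent lvm2 can also report the same counters as lvs fields
(cache_read_hits, cache_read_misses, etc.).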

Joe> Secondly, I think you may need to reset your expectations a bit.
Joe> dm-cache is a slow-moving cache. It monitors IO, detecting
Joe> hotspots on the disk, and then 'promotes' those regions to the
Joe> SSD. You may find that the reads that occur during boot happen
Joe> only infrequently once the system is up.

I'm personally using it for caching VMs and NFS data dirs. Seems to
be working well... but I wonder.
Nikolaus Rath
2016-02-08 22:05:24 UTC
Post by Joe Thornber
Post by Nikolaus Rath
Is that the wrong expectation? But even then, shouldn't I at least see
some improvement?
Firstly, if you have the latest software, I suggest you switch to the
smq cache policy, which generally outperforms the old mq policy
substantially.
Will try, thanks.
Post by Joe Thornber
Secondly, I think you may need to reset your expectations a bit.
dm-cache is a slow-moving cache. It monitors IO, detecting hotspots
on the disk, and then 'promotes' those regions to the SSD. You may
find that the reads that occur during boot happen only infrequently
once the system is up.
Well, yes, but since in my case the cache is bigger than the origin,
shouldn't eventually *everything* end up in the cache?

Is there a way to tell how often I need to access a block for it to be
promoted? I was hoping that the {read,write}_promote_adjustment
settings would actually cause promotion on the first access.
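One way to see whether promotions happen at all between boots is to
watch the promotions counter in the status line (field position again
per the kernel dm-cache docs; vg0-root is a placeholder name, and the
fallback sample line is fabricated so the sketch runs without a real
cache device):

```shell
# Capture the status line; fall back to a fabricated sample if no device.
status=$(dmsetup status vg0-root 2>/dev/null) || \
    status="0 83886080 cache 8 27/2048 128 190/3840 2048 512 768 256 0 150 10 1 writeback 2 migration_threshold 2048 smq 0 rw -"

# $7 = used/total cache blocks, $13 = cumulative promotions.
used_blocks=$(echo "$status" | awk '{split($7, a, "/"); print a[1]}')
promotions=$(echo "$status" | awk '{print $13}')
echo "cache blocks used: $used_blocks  promotions so far: $promotions"
```

If neither number grows from one boot to the next, nothing is being
promoted.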


Best,
-Nikolaus

(No Cc on replies please, I'm reading the list)
Mateusz Korniak
2016-02-09 09:39:24 UTC
Post by Nikolaus Rath
Well, yes, but since in my case the cache is bigger than the origin,
shouldn't eventually *everything* end-up in the cache?
IIRC you lowered the *_promote_adjustment settings, but a full copy may
still be prevented by the sequential_threshold setting.

And how many blocks do you have in the cache after boot? [1]
Is the number rising with every boot?
Can you verify the cache settings? [2]

[1]:
lvs -o+cache_total_blocks,cache_used_blocks,cache_dirty_blocks,\
cache_read_hits,cache_read_misses,cache_write_hits,cache_write_misses,lv_health_status


[2]:
dmsetup status VG-LV
lvs -o+cache_policy,cache_settings,chunksize VG/LV
--
Mateusz Korniak
"(...) I have a brother - serious, a homebody, a penny-pincher, a
hypocrite, sanctimonious; in short - a pillar of society."
Nikos Kazantzakis - "Zorba the Greek"