Max Power
2015-01-11 09:50:30 UTC
Hi,
I am trying to figure out how I can improve the write performance of a very
slow and complicated RAID arrangement (slowdrive=/dev/md123) with a fast
SSD/RAM drive as a cache (fastdrive=/dev/sda4).
Write access to my server happens only occasionally (a few GB at a time, with
long idle periods in between), so I want every write to the resulting block
device to be committed to the cache first, but I do not achieve this. I have
read all the old mails about the tunables and the kernel docs cache.txt and
cache-policies.txt, but I am still doing something wrong:
# fastdrive="/dev/sda4"
# slowdrive="/dev/md123"
# lvcreate --name cache --size 10G BigData $fastdrive
# lvcreate --name cacheM --size 128M BigData $fastdrive
# lvcreate --name data --size 100G BigData $slowdrive
# lvconvert --cachemode writeback --chunksize 1024K \
#       --cachepool BigData/cache --poolmetadata BigData/cacheM
# lvconvert --type cache \
#       --cachepool BigData/cache BigData/data
# dmsetup message BigData-data 0 migration_threshold 0   # see below
# dmsetup message BigData-data 0 random_threshold 0
# dmsetup message BigData-data 0 sequential_threshold 0
# dmsetup message BigData-data 0 write_promote_adjustment 0
# dd if=/dev/zero of=/dev/mapper/BigData-data \
#       bs=1G count=1 oflag=direct
I monitored the last 'dd' with iostat and only see access to the slow
device.
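For reference, this is roughly how I watched the dd (one terminal per
command; sda holds the fastdrive partition, md123 is the slowdrive):

# iostat -x -d sda md123 1
# watch -n 1 'dmsetup status BigData-data'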
# dmsetup status
# BigData-data: 0 209715200 cache 8 29/32768 2048 0/10240 0 230 \
#       0 2048 0 0 0 1 writeback 2 migration_threshold 0 mq 10 \
#       random_threshold 0 sequential_threshold 0 \
#       discard_promote_adjustment 1 read_promote_adjustment 4 \
#       write_promote_adjustment 0
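If I read the status format in cache.txt correctly, those fields break down
roughly like this (my interpretation, numbers from the output above):

  8            metadata block size (sectors)
  29/32768     used / total metadata blocks
  2048         cache block size (sectors) = 1 MiB, matching --chunksize 1024K
  0/10240      used / total cache blocks (the whole 10G cache is unused)
  0 230        read hits / read misses
  0 2048       write hits / write misses
  0 0          demotions / promotions
  0            dirty blocks
  1 writeback              feature arguments
  2 migration_threshold 0  core arguments
  mq 10 ...                policy name and its tunables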
Essentially this says that my cache is empty: no dirty blocks at all. With
migration_threshold set to a high number (99999999) I do see write access to
the cache, but dd still blocks until everything has been written to the slow
device, so it looks like writethrough behaviour.
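For completeness, the high-threshold test was simply raising the value with
another message; and since the status line above reports "writeback" among
the feature arguments, the pool itself should be in writeback mode:

# dmsetup message BigData-data 0 migration_threshold 99999999
# dmsetup status BigData-data | grep -o writeback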
Is it possible to influence the write/cache behavior?
Greetings from Germany!