Instruction Manual - NAS SSD Cache
SSD Cache Setup for KVM on openSUSE Leap (QNAP TVS-h1688X)
Overview
System: QNAP TVS-h1688X with openSUSE Leap
Caching target: Logical Volume KVMSTORAGE_DATA0/test_server_root
Purpose: Speed up read/write performance for KVM VMs
Cache device: 4 SSDs in RAID-10, total 7.28 TiB (adjust if different)
Prerequisites
System has LVM volumes already configured:
KVMSTORAGE_DATA0 - stores KVM virtual machine disks
KVMSTORAGE_DATA1 - staging area
SUSE_SYSTEM - bare metal OS volume
4 SSDs are installed in the NAS.
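To confirm this layout before making changes, the existing volume groups and logical volumes can be listed (names as assumed above):
sudo vgs
sudo lvs -o lv_name,vg_name,lv_size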
Step 1: Identify SSD Devices
List all block devices and locate your SSDs (e.g., by model or size):
lsblk -o NAME,SIZE,MODEL,ROTA
SSD drives usually show ROTA 0 (non-rotational).
If sizes vary or device names differ (/dev/sdb, /dev/sdc, etc.), adjust the commands accordingly.
Optional: clear old RAID metadata:
sudo mdadm --zero-superblock /dev/sd[b-e]
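If it is unclear whether the SSDs still carry metadata from a previous array, they can be inspected (read-only) before wiping:
sudo mdadm --examine /dev/sd[b-e]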
Step 2: Create RAID-10 on SSDs
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
Monitor build progress:
watch cat /proc/mdstat
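Once the build finishes (or at any point during it), the array's layout and state can be checked:
sudo mdadm --detail /dev/md0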
Step 3: Integrate RAID into LVM
Create physical volume and extend volume group:
sudo pvcreate /dev/md0
sudo vgextend KVMSTORAGE_DATA0 /dev/md0
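To confirm the new physical volume has joined the volume group and how much free space it adds:
sudo pvs -o pv_name,vg_name,pv_size,pv_free
sudo vgs KVMSTORAGE_DATA0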
Step 4: Create LVM Cache Pool
Using most of the 7.28 TiB SSD array:
sudo lvcreate -L 150G -n cache_meta KVMSTORAGE_DATA0 /dev/md0
sudo lvcreate -L 7000G -n cache_data KVMSTORAGE_DATA0 /dev/md0
Adjust 7000G if necessary.
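Before the cache can be attached in Step 5, the two LVs generally have to be combined into a cache pool (data plus metadata). A minimal sketch using the LV names created above; lvconvert may ask to confirm wiping the metadata LV:
sudo lvconvert --type cache-pool --poolmetadata KVMSTORAGE_DATA0/cache_meta \
  KVMSTORAGE_DATA0/cache_data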
Step 5: Attach Cache to Virtual Machine LV
sudo lvconvert --type cache --cachemode writethrough \
--cachepool KVMSTORAGE_DATA0/cache_data \
KVMSTORAGE_DATA0/test_server_root
writethrough = safe for reliability.
Switch to writeback for faster performance:
sudo lvchange --cachemode writeback KVMSTORAGE_DATA0/test_server_root
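To verify that the cache is attached and which mode is active (output field names may vary slightly between LVM versions):
sudo lvs -a -o lv_name,lv_size,pool_lv,cache_mode KVMSTORAGE_DATA0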
Step 6: Make RAID Persistent
Ensure RAID is reassembled on boot:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
sudo mkinitrd
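On current openSUSE releases mkinitrd is a wrapper around dracut, so the initrd can also be rebuilt directly; after the next reboot, /proc/mdstat should show the array assembled automatically:
sudo dracut --force
cat /proc/mdstat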
Step 7: Monitor Cache Performance
Script: lvmcache-statistics.sh
#!/bin/bash
# Usage:   sudo ./lvmcache-statistics.sh /dev/mapper/<lv_name>
# Example: sudo ./lvmcache-statistics.sh /dev/mapper/KVMSTORAGE_DATA0-test_server_root
LV_PATH="$1"
if [ -z "$LV_PATH" ] || [ ! -e "$LV_PATH" ]; then
    echo "Error: please provide a valid logical volume path."
    echo "Usage: sudo $0 /dev/mapper/<lv_name>"
    exit 1
fi
# dmsetup accepts the full device node path and resolves it internally.
# For a cache target the status line starts with fixed-position fields:
#   <start> <length> cache <metadata block size>
#   <used>/<total metadata blocks> <cache block size> <used>/<total cache blocks>
#   <read hits> <read misses> <write hits> <write misses> ...
STATUS_OUTPUT=$(sudo dmsetup status "$LV_PATH" 2>/dev/null)
if [[ "$STATUS_OUTPUT" != *" cache "* ]]; then
    echo "Error: $LV_PATH is not using LVM cache."
    exit 2
fi
# Parse the fixed-position fields
read -r _ _ _ _ META_BLOCKS _ CACHE_BLOCKS \
    READ_HITS READ_MISSES WRITE_HITS WRITE_MISSES _ <<< "$STATUS_OUTPUT"
# The cache mode is reported among the feature arguments (position varies)
CACHE_MODE=$(grep -oE 'writethrough|writeback|passthrough' <<< "$STATUS_OUTPUT" | head -n1)
# Helpers: percentage of a "used/total" pair, and a hit rate from hits/misses
pct()  { awk -v u="${1%/*}" -v t="${1#*/}" 'BEGIN {printf "%.1f", t ? u / t * 100 : 0}'; }
rate() { awk -v h="$1" -v m="$2" 'BEGIN {t = h + m; printf "%.1f", t ? h / t * 100 : 0}'; }
echo "LVM Cache report of $LV_PATH"
echo "- Cache Usage: $(pct "$CACHE_BLOCKS")%"
echo "- Metadata Usage: $(pct "$META_BLOCKS")%"
echo "- Read Hit Rate: $(rate "$READ_HITS" "$READ_MISSES")%"
echo "- Write Hit Rate: $(rate "$WRITE_HITS" "$WRITE_MISSES")%"
echo "- Cache Mode: $CACHE_MODE"
Save and Use
Create the script:
nano lvmcache-statistics.sh
Paste the script above, save with Ctrl+O, then Enter, then exit with Ctrl+X.
Make executable:
chmod +x lvmcache-statistics.sh
Run it:
sudo ./lvmcache-statistics.sh /dev/mapper/KVMSTORAGE_DATA0-test_server_root
Example output:
LVM Cache report of /dev/KVMSTORAGE_DATA0/test_server_root
- Cache Usage: 63.2%
- Metadata Usage: 10.1%
- Read Hit Rate: 91.4%
- Write Hit Rate: 87.6%
- Cache Mode: writethrough
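As an alternative to the script, recent LVM releases can report the same counters directly through lvs (field availability depends on the LVM version):
sudo lvs -o lv_name,cache_mode,cache_used_blocks,cache_total_blocks,cache_read_hits,cache_read_misses,cache_write_hits,cache_write_misses KVMSTORAGE_DATA0/test_server_root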
Step 8 (Optional): Detach or Split Cache
To remove cache:
sudo lvconvert --splitcache KVMSTORAGE_DATA0/test_server_root
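--splitcache detaches the cache pool but keeps it in the VG for later reuse. To detach the cache and delete the pool in one step, --uncache can be used instead:
sudo lvconvert --uncache KVMSTORAGE_DATA0/test_server_root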
Notes
If the SSDs are different sizes, consider using only matching pairs in RAID-10 (mdadm limits every member to the size of the smallest device, so larger drives are partly unused).
Monitor SMART health of the SSDs to avoid degraded caching performance (see the example after these notes).
A cache pool can be attached to only one LV at a time; to cache additional LVs in the same VG, create separate cache pools from the remaining SSD space.
This solution was implemented with the assistance of ChatGPT resources.
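For the SMART check mentioned in the notes, a minimal example assuming smartmontools is installed and /dev/sdb is one of the cache SSDs:
sudo smartctl -H /dev/sdb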