Instruction Manual - NAS SSD Cache

SSD Cache Setup for KVM on openSUSE Leap (QNAP TVS-h1688X)

🧰 Overview

  • System: QNAP TVS-h1688X with openSUSE Leap

  • Caching target: Logical Volume KVMSTORAGE_DATA0/test_server_root

  • Purpose: Speed up read/write performance for KVM VMs

  • Cache device: 4 SSDs in RAID-10, total 7.28 TiB (adjust if different)


🧾 Prerequisites

  • System has LVM volumes already configured:

    • KVMSTORAGE_DATA0 – stores KVM virtual machine disks

    • KVMSTORAGE_DATA1 – staging area

    • SUSE_SYSTEM – bare metal OS volume

  • 4 SSDs are installed in the NAS.


πŸ” Step 1: Identify SSD Devices

List all block devices and locate your SSDs (e.g., by model or size):

lsblk -o NAME,SIZE,MODEL,ROTA
🔸 SSD drives usually show ROTA 0 (non-rotational).
🔸 If sizes vary or names differ (/dev/sdb, /dev/sdc, etc.), adjust accordingly.
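
To narrow the listing to likely cache candidates, you can filter on the ROTA column (a minimal sketch; the device names it returns are examples to verify, not assumptions to trust):

# Show only whole disks that report as non-rotational (ROTA = 0), plus the header
lsblk -d -o NAME,SIZE,MODEL,ROTA | awk 'NR == 1 || $NF == 0'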

Optional: clear old RAID metadata:

sudo mdadm --zero-superblock /dev/sd[b-e]
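
If the disks also carry leftover filesystem or LVM signatures, wipefs can clear those as well (destructive; double-check the device names before running):

sudo wipefs -a /dev/sd[b-e]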

🧱 Step 2: Create RAID-10 on SSDs

sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

Monitor build progress:

watch cat /proc/mdstat
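
Once the build finishes, confirm the array is clean and all four members are active:

sudo mdadm --detail /dev/md0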

🛠 Step 3: Integrate RAID into LVM

Create physical volume and extend volume group:

sudo pvcreate /dev/md0
sudo vgextend KVMSTORAGE_DATA0 /dev/md0
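
Verify that the new PV joined the volume group and that free extents are available:

sudo pvs
sudo vgs KVMSTORAGE_DATA0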

📦 Step 4: Create LVM Cache Pool

Using most of your 7.28 TiB SSD array:

sudo lvcreate -L 8G -n cache_meta KVMSTORAGE_DATA0 /dev/md0
sudo lvcreate -L 7000G -n cache_data KVMSTORAGE_DATA0 /dev/md0

Combine the two LVs into a cache pool (cache_meta becomes the pool's metadata volume):

sudo lvconvert --type cache-pool --poolmetadata KVMSTORAGE_DATA0/cache_meta \
  KVMSTORAGE_DATA0/cache_data

Adjust 7000G to your array size. A rule of thumb for the metadata LV is roughly 1/1000 of the cache data LV (minimum 8 MiB), so 8G leaves ample headroom for a 7000G cache.
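
You can confirm the conversion with lvs; the pool's segment type should read cache-pool:

sudo lvs -a -o lv_name,lv_size,segtype KVMSTORAGE_DATA0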


🔗 Step 5: Attach Cache to Virtual Machine LV

sudo lvconvert --type cache --cachemode writethrough \
  --cachepool KVMSTORAGE_DATA0/cache_data \
  KVMSTORAGE_DATA0/test_server_root
✅ writethrough is the safe default: every write goes to both the SSD cache and the origin LV, so losing the cache cannot lose data.
🔁 Switch to writeback for faster writes (dirty data may live only on the SSD until flushed):
sudo lvchange --cachemode writeback KVMSTORAGE_DATA0/test_server_root
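
In writeback mode it is worth keeping an eye on how much dirty (unflushed) data sits in the cache. A sketch using LVM's cache reporting fields (field names assume a reasonably recent lvm2):

sudo lvs -o lv_name,cache_dirty_blocks,cache_used_blocks,cache_total_blocks \
  KVMSTORAGE_DATA0/test_server_root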

πŸ“ Step 6: Make RAID Persistent

Ensure RAID is reassembled on boot:

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
sudo mkinitrd    # on current openSUSE releases this wraps dracut; "sudo dracut -f" is equivalent
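
Afterwards, check that exactly one ARRAY line for /dev/md0 ended up in the config (running the tee command twice appends a duplicate):

grep ARRAY /etc/mdadm.conf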

📊 Step 7: Monitor Cache Performance

✅ Script: lvmcache-statistics.sh

#!/bin/bash

# Usage:   sudo ./lvmcache-statistics.sh /dev/mapper/<lv_name>
# Example: sudo ./lvmcache-statistics.sh /dev/mapper/KVMSTORAGE_DATA0-test_server_root

LV_PATH="$1"

if [ -z "$LV_PATH" ] || [ ! -e "$LV_PATH" ]; then
  echo "❌ Error: Please provide a valid logical volume path."
  echo "Usage: sudo $0 /dev/mapper/<lv_name>"
  exit 1
fi

DM_NAME=$(basename "$(readlink -f "$LV_PATH")")

# Query the device-mapper status line for the cached LV
STATUS_OUTPUT=$(sudo dmsetup status "$DM_NAME" 2>/dev/null)

if [[ "$STATUS_OUTPUT" != *" cache "* ]]; then
  echo "❌ Error: The given logical volume is not using LVM cache."
  exit 2
fi

# dm-cache status fields (kernel doc admin-guide/device-mapper/cache.rst):
#   $3 target type       $4 metadata block size   $5 used/total metadata blocks
#   $6 cache block size  $7 used/total cache blocks
#   $8 read hits         $9 read misses           $10 write hits  $11 write misses
read -r _ _ CTYPE _ META_USAGE _ CACHE_USAGE \
        READ_HITS READ_MISSES WRITE_HITS WRITE_MISSES _ <<< "$STATUS_OUTPUT"

META_USED=${META_USAGE%/*};   META_TOTAL=${META_USAGE#*/}
CACHE_USED=${CACHE_USAGE%/*}; CACHE_TOTAL=${CACHE_USAGE#*/}

# The cache mode appears among the feature arguments of the status line
CACHE_MODE=$(grep -oE 'writethrough|writeback|passthrough' <<< "$STATUS_OUTPUT" | head -n1)

# Percentage helper that guards against division by zero
pct() { awk "BEGIN { d = $2; printf \"%.1f\", d ? ($1 / d) * 100 : 0 }"; }

echo "-------------------------------------------"
echo "📦 LVM Cache Status for $LV_PATH"
echo "-------------------------------------------"
echo "Cache Type      : $CTYPE"
echo "Cache Mode      : $CACHE_MODE"
echo "Cache Usage     : $(pct "$CACHE_USED" "$CACHE_TOTAL") %"
echo "Metadata Usage  : $(pct "$META_USED" "$META_TOTAL") %"
echo "Read Hit Rate   : $(pct "$READ_HITS" $((READ_HITS + READ_MISSES))) %"
echo "Write Hit Rate  : $(pct "$WRITE_HITS" $((WRITE_HITS + WRITE_MISSES))) %"
echo "-------------------------------------------"

✅ Save and Use

  1. Create the script:

    nano lvmcache-statistics.sh
  2. Paste the script above, save with Ctrl+O, then Enter, then exit with Ctrl+X.

  3. Make executable:

    chmod +x lvmcache-statistics.sh
  4. Run it:

    sudo ./lvmcache-statistics.sh /dev/mapper/KVMSTORAGE_DATA0-test_server_root


Example output (values will vary):

-------------------------------------------
📦 LVM Cache Status for /dev/mapper/KVMSTORAGE_DATA0-test_server_root
-------------------------------------------
Cache Type      : cache
Cache Mode      : writethrough
Cache Usage     : 63.2 %
Metadata Usage  : 10.1 %
Read Hit Rate   : 91.4 %
Write Hit Rate  : 87.6 %
-------------------------------------------
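
To watch the numbers change under load, wrap the script in watch:

sudo watch -n 5 ./lvmcache-statistics.sh /dev/mapper/KVMSTORAGE_DATA0-test_server_root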

🧼 Step 8: Optional – Detach or Split Cache

To remove cache:

sudo lvconvert --splitcache KVMSTORAGE_DATA0/test_server_root
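
--splitcache detaches the cache but keeps the cache pool LV for later reuse. To detach and delete the pool in one step, lvconvert also supports:

sudo lvconvert --uncache KVMSTORAGE_DATA0/test_server_root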

🧠 Notes

  • If the SSDs differ in size, mdadm sizes every RAID-10 member to the smallest disk, so prefer matching drives (extra capacity on larger disks goes unused).

  • Monitor SMART health of SSDs to avoid degraded caching performance (see the smartctl example after this list).

  • An LVM cache pool can be attached to only one LV at a time; to cache additional LVs in the same VG, create a separate cache pool for each.

  • This guide was prepared with the assistance of ChatGPT.
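
A quick example of the SMART check mentioned in the notes, using smartmontools (adjust the device name; NVMe drives appear as /dev/nvme0 and similar):

sudo smartctl -H -A /dev/sdb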