At the (home) office I have a Synology RS2416+ with a few drives running in Synology’s Hybrid RAID mode. I have a mix of Windows, Linux and Mac clients running Time Machine backups in addition to standard SMB and NFS file access.

The NAS itself has plenty of disk space and plenty of RAM, but lately its performance has been abysmal, and it’s gotten worse with every client I’ve added to the network.

Under any sustained load with multiple clients writing, volume utilization would hit 100% and stay there. Apple’s Time Machine backups would take hours or sometimes days, which meant that with two Macs backing up to the Synology at once, the NAS would slow to a crawl:

Synology volume utilization with two Macs backing up via Time Machine at the same time

One of the issues was self-inflicted: when I first set up our Synology and was prompted to choose a RAID level, I chose Synology Hybrid RAID 1 (SHR-1). This affords protection against a single disk loss. The big issue with SHR-1 (effectively RAID 5) is that while read performance is fair, write performance is poor. Every small write to the array triggers a read-modify-write cycle: the existing data and parity blocks must be read, new parity calculated, and both data and parity written back. Given my usage I would have been better off with RAID 10, which has much higher write performance but at the cost of capacity.
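To put rough numbers on that write penalty, here’s a back-of-the-envelope sketch. The 600 IOPS figure is a made-up example, not a measurement from my array; the penalty factors (4 I/Os per random write for RAID 5, 2 for RAID 10) are the textbook values:

```shell
# RAID 5 small-write penalty: each random write costs 4 disk I/Os
# (read old data, read old parity, write new data, write new parity).
# RAID 10 costs 2 (one write to each half of a mirror).
raw_iops=600   # hypothetical raw random-write IOPS across the disks
echo "RAID 5  effective write IOPS: $(( raw_iops / 4 ))"
echo "RAID 10 effective write IOPS: $(( raw_iops / 2 ))"
```

So for the same disks, RAID 10 gives roughly double the random write throughput of RAID 5.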

I decided to investigate whether Synology’s Solid State Drive (SSD) cache feature would help. An SSD cache acts as a temporary store for both read and write data: when writes come into the NAS, instead of going directly to the spinning hard disk drives (HDDs), they land first on the SSD cache and are then flushed to the HDD array.

The big issue with hard disks is their poor performance with multiple clients and with random reads and writes. They have good sequential performance, when a single large file is written in order, but when lots of small files are written randomly by multiple clients, the seeking spinning-disk mechanism is ill suited to the load.

SSDs, on the other hand, perform well with multiple clients and with random reads and writes. They have no moving parts that need to ‘keep up’ with the load, and in fact contain multiple flash chips internally, allowing multiple reads and writes in parallel.

With an SSD cache, data is written first to the SSD and then flushed to the HDD array more optimally, with fewer random writes hitting the spinning disks. It also keeps the NAS from becoming bottlenecked, since the SSD cache can absorb writes while the hard disks ‘catch up’. Clients therefore see write performance limited by the (much faster) SSD rather than by spinning disk speed.

I couldn’t find much real world performance information about Synology’s SSD cache feature for home office use so I decided to purchase a pair of Synology SAT5200 solid state disks and document what I found.

Synology SAT5200 solid state drives

First I took a look at sequential read/write performance, using dd to measure how quickly the array handles a single client writing a large file. This is the ideal scenario for HDDs:
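For reference, the dd invocation looked roughly like this. TESTDIR points at /tmp here only so the commands run anywhere; for a real test, point it at the NFS-mounted share (the mount path is yours to substitute):

```shell
# Sequential write then read of a 256 MiB test file.
# conv=fdatasync makes dd flush data to disk before reporting a rate,
# so the write figure isn't just the client's page cache speed.
TESTDIR=/tmp   # e.g. your NFS mount point for the real test
dd if=/dev/zero of="$TESTDIR/ddtest.bin" bs=1M count=256 conv=fdatasync
dd if="$TESTDIR/ddtest.bin" of=/dev/null bs=1M
rm "$TESTDIR/ddtest.bin"
```

Note the read pass may be served from the client’s page cache if run immediately after the write; drop caches or remount between runs for an honest read figure.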

Sequential I/O using dd over NFS

Here we see high read and write performance from the HDD array alone. With the SSD cache in place, read performance remains the same while write performance goes up slightly. If you generally have a single client writing large files, an SSD cache won’t help you much.

Next we’ll measure random read and write performance with multiple clients using the tool Sysbench.
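The benchmark runs were along these lines. Exact option names vary between sysbench versions (older releases use --num-threads and --max-time), and the working-set size, run time and thread sweep here are my choices, scaled down so the commands are quick to try; run from a directory on the NFS mount, and bump the sizes for a realistic result:

```shell
# Random read/write benchmark, sweeping the thread count to simulate
# an increasing number of clients. Run inside the mounted share.
command -v sysbench >/dev/null || { echo "sysbench not installed"; exit 0; }
sysbench fileio --file-total-size=1G prepare
for threads in 1 2 4 8 16 32 64; do
  sysbench fileio --file-total-size=1G --file-test-mode=rndrw \
           --threads="$threads" --time=10 run
done
sysbench fileio cleanup
```

sysbench also supports rndrd and rndwr modes if you want to isolate random reads from random writes, as in the charts below.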

First up random reads and writes to the HDD array with no SSD cache:

Sysbench - random reads to a Synology Hybrid Raid-1 hard disk volume - no cache

Sysbench - random writes to a Synology Hybrid Raid-1 hard disk volume - no cache

Here read and write performance scales up until it reaches a thread count (simulating the number of clients) of 8 for writes and 32 for reads. Throughput is pretty low, with writes peaking at 7.35MiB/s and reads peaking at 1.07MiB/s.

During this test I also kept an eye on Synology Resource Monitor, watching volume utilization, IOPS, transfer rate and CPU iowait:

Synology Resource Monitor - Transfer rate - HDD array

Synology Resource Monitor - CPU iowait - HDD array

Synology Resource Monitor - IOPS - HDD array

Synology Resource Monitor - Volume utilization - HDD array

In the above screens we can see volume utilization hits 100% and stays there. CPU iowait (the percentage of time the CPU spends waiting for the disks to catch up) sits around 81%. Read IOPS bounce between 400/s and 600/s and write IOPS between 0 and 500/s. Finally, the read transfer rate hovers between 3.5 and 5MB/s and writes between 0 and 20MB/s.

Now let’s take a look using the exact same benchmark but with an SSD cache in place:

Sysbench - random reads to a Synology Hybrid Raid-1 hard disk volume - SSD cache

Sysbench - random writes to a Synology Hybrid Raid-1 hard disk volume - SSD cache

Performance has gone up 4x thanks to the SSD cache. Whereas reads without it ranged between 25 and 250 per second, with read caching we’re seeing a much higher 180 to 900 per second - and they don’t fall off after 32 threads.

When it comes to writes we’re seeing a 4x improvement, with writes per second ranging from 2200 to 5600 with the SSD cache versus 600 to 1400 without. That throughput is equivalent to 23.84MiB/s for writes and 7.1MiB/s for reads. You can also see that performance scales far more linearly as the number of threads increases, indicating that an SSD cache handles a growing number of clients far more effectively.

Again, during this test we kept an eye on Synology Resource Monitor, watching volume utilization, IOPS, transfer rate and CPU iowait:

Synology Resource Monitor - IOPS - SSD cached array

Synology Resource Monitor - CPU iowait - SSD cached array

Synology Resource Monitor - Transfer rate - SSD cached array

Synology Resource Monitor - Volume utilization - SSD cached array

In the above we can see volume utilization still hits 100%, but bounces up and down far more than without the cache. CPU iowait is halved, at around 43%. Read IOPS bounce between 200/s and 2000/s and write IOPS between 0 and 6000/s. Finally, the read transfer rate hovers between 2.5 and 12.5MB/s and writes between 0 and 200MB/s.

And last but not least here’s a shot of the SSD cache user interface indicating the read cache hit rate:

Synology SSD Cache status

Conclusion

From the above results we achieved a 4x performance improvement across the board for both reads and writes. CPU wait time is halved, and with the cache absorbing random I/O first, our hard disks should see reduced wear and tear - this of course comes at the expense of the SSDs.

Anecdotally, Time Machine backups now take less than an hour, whereas before we would constantly see clients timing out after days of attempting to back up. Not a single Mac has a problem backing up via Time Machine any more.

In addition, we used to see write speeds to the Synology drop from 110MB/s (over Gigabit Ethernet) to 10MB/s or lower whenever two or more clients accessed it. With the cache in place, everyone now sees full line speed writing to the Synology.
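That 110MB/s figure is simply Gigabit Ethernet’s practical ceiling rather than anything the NAS is doing - a quick sanity check (the ~6% overhead factor is a rough rule of thumb for Ethernet/IP/TCP framing):

```shell
# 1 Gbit/s is 125 MB/s of raw bandwidth; frame and protocol headers
# eat roughly 5-10%, leaving ~110-118 MB/s of usable payload.
echo "raw Gigabit:        $(( 1000 / 8 )) MB/s"
echo "after ~6% overhead: $(( 1000 / 8 * 94 / 100 )) MB/s"
```

So once clients sustain ~110MB/s, the network, not the array, is the bottleneck.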

In the ZFS world it’s a no-brainer to add an SSD write cache to improve random write performance. In the Synology world I’m a little surprised it isn’t promoted more. I’d imagine it’s down to the product’s target sector, i.e. less enterprise, more cost-conscious consumer - and also the damage a cheap SSD can do to an array when it goes bad.

I went for Synology’s own SSDs, the SAT5200 range, and sized them according to the Cache Advisor. Based on our results, if you’re struggling with write performance, adding a (quality) pair of SSD cache drives might solve your performance woes.