For years I’ve struggled with the tradeoff between using high-performance, locally attached storage arrays to store my photographs and the convenience of having them on Network Attached Storage (NAS). I’ve even tried various caching and syncing solutions to get the best of both. Until recently, though, affordable NAS units have been limited by underwhelming CPUs, too little RAM, or the physical limitations of 1-Gigabit Ethernet.
Now, there are several units on the market that have broken through all of those limitations, particularly by providing 10-Gigabit Ethernet in a form factor suitable for small office and home (SOHO) use. I’ve been reviewing one from Synology. The Synology DS1517+ is an extension to Synology’s line of compact 5-bay NAS units that adds support for a 2-port 10 Gigabit PCIe card.
The DS1517+ sports a speedy 2.4GHz quad-core CPU that provides more horsepower than entry-level units. You can get it with either 2GB or 8GB of DDR3 RAM, and you can upgrade it yourself to 16GB if needed. The 2GB version has a street price of around $750 (Buy on Amazon). The DS1517+ is fairly future-proof. In addition to support for 10-Gigabit networking, you can add up to two DX517 expansion units, letting you house as many as 15 drives. If you need more bays, the DS1817+ provides three extra for about $150 more.
The DS1517+ pushes past the limits of 1-Gigabit LANs
I have a shelf full of various NAS units here in our office and studio, and most of them either couldn’t saturate our 1-Gigabit Ethernet, or were just barely able to. The DS1517+ kept its dedicated 1-Gigabit network connection pretty much saturated in all the benchmarks we ran, with only a 10-15 percent total load on the CPU. So the situation was definitely ripe for testing out its 10-Gigabit Ethernet module.
The jump to 10Gbps showed up immediately in read performance, with sustained reads of over 400MBps for 256K blocks accessed both sequentially and randomly, even from a single client. Single-client write performance also improved, although less dramatically, from 95MBps on a 1-Gigabit connection to over 130MBps. This was with the provided 2-drive test configuration; later I bumped the numbers up further by adding more drives.
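A quick back-of-the-envelope calculation shows why numbers like 95MBps sit right at the 1-Gigabit ceiling, and why 10-Gigabit moves the bottleneck from the wire to the drives. This is just an illustrative sketch; the roughly 6 percent protocol-overhead figure is an assumption, not a measurement:

```python
# Rough ceiling on usable file throughput for a given Ethernet link speed.
# The overhead factor (Ethernet/IP/TCP headers, etc.) is an assumed ~6%.

def practical_ceiling_mbps(link_gbps, overhead=0.06):
    """Convert a link speed in gigabits/sec to usable megabytes/sec."""
    wire_mbps = link_gbps * 1000 / 8        # bits per second -> MB per second
    return wire_mbps * (1 - overhead)

# 1-Gigabit tops out near 117MBps -- right where our 95-115MBps results landed.
print(practical_ceiling_mbps(1))
# 10-Gigabit raises the ceiling past 1,100MBps, so the drive array, not the
# network, becomes the limiting factor.
print(practical_ceiling_mbps(10))
```

With the 1-Gigabit ceiling around 117MBps, the measured 400MBps-plus reads over 10-Gigabit are clearly beyond anything the old link could carry.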
The value of using 10-Gigabit for connectivity is pretty clear, even with a single client. However, the gains improve rapidly as you add multiple connections. With 1-Gigabit Ethernet, you can use Link Aggregation to improve total throughput beyond 1Gbps, but it is tricky to configure, can be expensive, and of course each individual client is still limited to 1Gbps. Fortunately, the price of 10-Gigabit adapters has come down, although 10-Gigabit switches are still quite expensive. At least you can get accelerated access for one or maybe two of your key workstations without breaking the bank.
Adding 10 Gigabit to your NAS
NAS vendors such as Synology and QNAP allow upgrades to 10-Gigabit through a PCIe adapter card. However, you need to make sure they offer a driver for the card you choose — or purchase one directly from them. Typically there is only one PCIe expansion slot, so you may need to decide whether to use it for an SSD cache or a network upgrade — unless the vendor has a card that provides both. For the DS1517+, Synology offers a choice of 10-Gigabit modules with one or two ports (E10G15-F1 and -F2) or a dual-M.2 SSD module (M2D17). However, you can also choose from several supported third-party adapters.
If you’re okay with a more constrained configuration, QNAP offers an inexpensive entry-level NAS with a built-in single 10 Gigabit port, the new TS-431X. Unfortunately, it is only offered with SFP+, which probably means a new and expensive cabling system.
Which 10-Gigabit solution is right for you?
There are multiple cabling solutions for 10-Gigabit Ethernet, two of which are possibilities for use with the DS1517+ (and the situation is similar for others on the market). SFP+ is a robust system, typically using direct-attach copper cables for short runs and fiber for longer ones. However, cables more than a few meters long get expensive, and the cables are completely different from, and not compatible with, your existing RJ-45-based Ethernet.
Synology’s own 10-Gigabit modules use SFP+. The DS1517+ supports a number of third-party 10-Gigabit cards, one of which, the Intel X540-T2, uses standard RJ-45 cabling, provides a pair of ports, and sells for $200. That makes it my pick for the most cost-effective way to build a 10-Gigabit NAS using the DS1517+. You’ll want to use Cat 6a or Cat 7 rated cables, though, to ensure you can actually carry 10Gbps over them. Interestingly, many users report remarkably good results even with lesser patch cables, so if you have existing wiring that would be difficult to replace, you might want to test over it before automatically replacing it.
Whichever cabling system you choose, you’ll find that there is a total lack of affordable 10-Gigabit switches. Until those show up on the market, you’ll probably want to directly connect your main client machine to your NAS over the 10-Gigabit connection, while also leaving the NAS connected to the rest of your network over 1-Gigabit links. Here you can see the advantage of a dual-port 10-Gigabit adapter: it lets you accelerate access from two of your machines (or from one client and another NAS unit).
Configuring your 10-Gigabit mini network
If you take the plunge and get a router or switch that can handle your 10-Gigabit connectivity directly, you can set it up the same way you would for any other network. However, if you go the less-expensive route of direct connecting one or two client machines to the 10-Gigabit ports on your NAS, you may need to do a little more work. In my case, while drivers for my card installed automatically, I did have to do some network configuration. Specifically, since I had left the 1-Gigabit LAN connections live on the NAS — so that other machines could also access it — I needed to tell my Windows 10 client to always use the 10-Gigabit connection to get to the NAS.
I did that by assigning IP addresses for the high-speed LAN ports on the Windows machine and the NAS on a different, but still private, network. Then I added the high-speed IP address of the NAS to the hosts file on my Windows machine (which is now not-so-conveniently located in \windows\system32\drivers\etc). Prior to making those changes, my client machine sometimes accessed the NAS on the 10-Gigabit network, but sometimes wound up on the 1-Gigabit connection. You can probably also play with route metrics, but this way seemed easy enough.
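As a concrete sketch of that setup, the Windows side looks roughly like the following. The adapter name ("Ethernet 2"), the 192.168.100.x subnet, and the hostname "mynas" are all example values you would substitute with your own; run these from an elevated (Administrator) command prompt:

```shell
:: 1. Give the client's 10-Gigabit adapter a static address on a private
::    subnet that is separate from your main 1-Gigabit LAN.
netsh interface ip set address "Ethernet 2" static 192.168.100.2 255.255.255.0

:: 2. On the NAS side, assign the 10-Gigabit port a matching address
::    (e.g. 192.168.100.1) through its web administration UI.

:: 3. Pin the NAS hostname to the high-speed address in the hosts file,
::    so Windows always reaches it over the 10-Gigabit link.
echo 192.168.100.1  mynas >> C:\Windows\System32\drivers\etc\hosts
```

Because the hosts file takes precedence over normal name resolution, any access to \\mynas will then go over the 10-Gigabit path, while other machines continue to reach the NAS via its 1-Gigabit addresses.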
Performance: the payoff
The big question is whether all this upgraded technology pays off in real-world performance. Synology rates the DS1517+ (and its 8-bay sibling, the DS1817+) as capable of nearly 10Gbps for reads and about half that for writes. However, that’s using all fast SSDs, multiple aggregated 10-Gigabit ports, a high-end switch, and multiple client PCs — definitely not what a typical SOHO environment looks like. So I ran a series of benchmarks using both DiskMark and the same IOMeter utility that most NAS vendors use, but in more typical SOHO configurations, to get a sense of whether there are gains to be had in more normal use cases.
In short, for file-intensive applications requiring large amounts of shared storage, the answer is yes. With the pair of provided Seagate Ironwolf Pro hard drives configured in a mirrored array, I was able to achieve sustained read performance of over 400MBps, and write performance of around 140MBps, both of which exceed anything possible on a standard 1-Gigabit LAN — and faster than accessing similar drives locally. However, the problem with two mirrored drives is that you don’t get much improvement on writes, as all data has to be written to both drives. So I added two more Ironwolf Pro drives — creating a 4-drive array with 1-drive redundancy — and write performance jumped up to nearly 300MBps. Read performance also improved to about 440MBps, which is helpful, but not as dramatic.
As I mentioned above, you’ll see vendor-provided and enterprise benchmarks that show faster performance, and often those are done with arrays of SSDs. However, those configurations really aren’t useful in a SOHO environment, as SSD arrays are relatively small and expensive. Both Synology and QNAP do support using SSDs either as dedicated volumes or as caches for hard-drive-based volumes.
How RAID arrays can be so fast
For those new to NAS units, or to RAID arrays in general, it may seem hard to believe that access to data across a network can be faster than reading it from a local drive. One reason is the way processing and caching can be shared between your local machine and the server. But the bigger gain comes from the nature of RAID itself. Since data is split across multiple drives, reads in particular benefit from having two or more drives working on getting your data at once. Even the simplest RAID configuration — RAID 1, where two drives are mirrors of each other — essentially doubles read performance by using both drives. Larger arrays can further improve that performance by involving even more drives.
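The arithmetic behind that scaling can be sketched with a toy model. The per-drive throughput figure and the idealized formulas below are assumptions for illustration — not measurements, and not how Synology's RAID implementation is actually coded:

```python
# Idealized sequential-throughput estimates for the two array layouts tested.
# DRIVE_MBPS is an assumed sustained speed for one 7200rpm NAS hard drive.

DRIVE_MBPS = 210  # assumption, megabytes/sec for a single drive

def mirror_2_drive():
    # RAID 1: reads can be balanced across both mirrors,
    # but every write must land on both drives.
    return {"read": 2 * DRIVE_MBPS, "write": 1 * DRIVE_MBPS}

def parity_array(n):
    # RAID 5 style, n drives with one drive's worth of parity:
    # large sequential transfers stripe across n-1 data drives,
    # so both reads and writes scale as drives are added.
    return {"read": (n - 1) * DRIVE_MBPS, "write": (n - 1) * DRIVE_MBPS}

print(mirror_2_drive())   # reads roughly double one drive; writes don't
print(parity_array(4))    # adding drives lifts writes as well as reads
```

These are ceilings, not predictions — in practice parity computation, caching, and the network link shave the numbers down, which is consistent with the measured jump from ~140MBps to ~300MBps writes when going from two mirrored drives to four.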
Testing the option of SSD caching
The DS1517+ provides the option of installing a PCIe card with M.2 SSDs for caching. However, since you can’t have both it and the 10-Gigabit card installed, I opted not to try to test the M.2 feature. I did test using some of the standard bays to house SSDs for a cache, using the Seagate XF1230 240GB drives Synology provided for review.
Surprisingly, with the relatively simple benchmarks I ran using IOMeter and DiskMark, the SSD cache just got in the way. I suspect that is because the 8GB of system RAM provided sufficient cache for my testing. Over the course of a full day’s work, I’m sure the SSDs would provide benefits for even a small office, but clearly they’re more important for larger-scale deployments. If you have a spare SSD or two (you need two to enable write caching) and open bays, it’s probably worth at least experimenting with them. Synology provides an SSD Cache Advisor that will tell you how your workload matches up with your cache configuration.
All hard drives are not alike
SOHO applications don’t put the same strain on hard drives that Enterprise applications do, unless you are doing 24/7 video recording, or are allowing around-the-clock access to your NAS from the internet. So you might be tempted to cheap out and stick some standard desktop hard drives in your NAS. I did for years and never had any trouble, but I also found I needed to banish my NAS units to a closet, and could still hear them chatter through the door.
I was pleasantly surprised that I could barely hear our test DS1517+ NAS’s Seagate Ironwolf Pro hard drives, even with the unit sitting on the desk right next to me while I benchmarked it. Plus you get all the performance and reliability tech that Seagate puts into them. That has convinced me to make them my new NAS hard drive of choice. They’re probably overkill for SOHO use, though, so the non-Pro version will also do the job for about 25 percent less. If you’re on a tighter budget, Seagate’s Barracuda drive family will save you more, but those drives aren’t designed for 24/7 use. I’ve also had good luck with Western Digital’s Red NAS drives, although since we’re thinking performance here, you probably want the 7200rpm Pro version.
Summary: Is it worth taking the plunge?
If you need to combine high-performance file access with network sharing, 10 Gigabit is definitely the way to go, at least the next time you’re in the market for a new storage server. Small offices that work on creating or managing content, including photographers, videographers, and creative agencies, fit this profile perfectly.