John Reekie
Technologicality, at work and play

I don’t like NAS (network attached storage)

This is a footnote to some of my HifiZine articles. I felt it better to put an explanation here than clutter up the articles.

First, what is NAS? According to Wikipedia, NAS is “a file-level (as opposed to block-level) computer data storage server connected to a computer network providing data access to a heterogeneous group of clients”. OK, so it’s a server that provides files to a computer over the network.

Another way to think of it is as a virtualized hard disk. An application running on the computer is probably not even aware that the file storage it is accessing is elsewhere on the network.

By these definitions, I like NAS. What I don’t like is what “NAS” has come to mean when you buy a consumer-grade hardware/software “NAS”. Not to pick on any particular manufacturer but I’ll use Synology as I’ve owned a couple. I guess I have four issues with them:

  • Cost/performance. For the money, these devices come with pretty low-performing CPUs and a small amount of RAM. A “powerful” CPU in this world is a low-end Celeron.
  • Bloat. The original notion of “NAS” has been bloated to the point where these things have their own desktop and include their own package manager(s) that you can use to install all kinds of stuff… It looks great until you install some things and find out they don’t work that well or are hard to configure.
  • RAID/poor scalability. I suspect the traditional RAID in the typical 4-drive consumer NAS has had its day. When you run out of space, you have to buy a set of new higher-capacity drives and… throw the old ones away.(*1) And if a drive fails, the rebuild itself is the time another drive is most likely to fail! (And if it does, your data is toast.)
  • Reliability/single point of failure. It doesn’t matter how many drives you have, you still have only one power supply (remember: consumer-grade NAS). If that fails, no data access. That’s how my last Synology went, and why I won’t ever buy another.

(*1) Admittedly Synology did have an improved solution to this, their Synology Hybrid RAID. But you still have to replace at least two of the drives when you run out of space.

So what’s a better way – directly attached storage? No. I like networked storage. But I think these days, we need to think of networked services, of which storage is just one part. Point by point:

  • Cost/performance. Almost anything is better. But for storage, an ODroid HC1, HC2 or H2 offers far more bang for the buck, and can easily be set up as a dedicated file server. (There are plenty of other examples; these are just the ones I use and am familiar with.)
  • Bloat. Start with a basic Linux server OS and add just the services you need. Don’t try to shoehorn all your networked services onto that poor little CPU in a NAS.
  • RAID/poor scalability. 1. Use realtime sync to mirror your files across multiple machines. If one goes down, your files are all still there on the other machine(s). 2. Have another storage node on the network and write to it daily with a snapshot type of backup system. 3. Use a distributed file system like MooseFS, one or two drives per node. If you need more storage, add drives instead of replacing them!
  • Reliability/single point of failure. One drive per node gives maximum power supply redundancy!


