For those who don't want to fiddle, I highly recommend a home NAS appliance. I have a Synology DS1511+, although there are other great options as well. Be prepared to spend some money for the sake of convenience. This isn't as cheap and dirty as adding drives and controllers to an old PC.

+Piaw Na reminds us to consider what happens when your NAS appliance fails. Many NAS vendors use proprietary storage mechanisms that allow mixing disks of various sizes. Consider a NAS configuration that supports stock Linux software RAID and ext3/4 filesystems. You or your local Linux guru can then access your data if your NAS appliance fails.
Cheap and dirty is kind of where I'd like to focus. This means x86 hardware, Linux, near-line (or prosumer) SATA drives, and software-based data redundancy. Proper RAID disk controllers have battery backup for recovery after power failure and are prohibitively expensive for home use, while software RAID5 may not achieve trustworthy performance or reliability. The first rule of cheap and dirty SATA drives is that manufacturers will cheat to obtain better benchmark numbers. Drives will cache writes, ignore commands to sync ordered writes to disk, and fail very ungracefully on system or power failure. Your drives are optimizing for the common case where nothing ever fails. That's well and good until something fails...
Here are some tips for building that Linux-based file server. Thanks to +Yan-Fa Li for his additional pointers and reminders.
- Consider systems that support ECC (error-correcting) memory. Some consumer AMD processors and boards used to support this. Intel Xeon systems generally support ECC, as do Core i3-32XX processors on certain motherboards. Data and filesystem metadata corrupted in memory never make it onto your disks correctly; ECC catches those errors before they're written.
- Disable write caching on your disk drives.
hdparm -W0 /dev/sdX
You'll need to automate this to run at boot time.
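One way to automate it is a udev rule, so the setting is reapplied whenever a disk is detected at boot or hot-plug. This is a sketch, not a tested recipe: the filename is hypothetical, the match only covers whole rotational SATA disks, and the hdparm path may differ on your distribution.

```
# /etc/udev/rules.d/69-hdparm-nocache.rules  (hypothetical filename)
# Disable the on-drive write cache on every rotational disk as it appears.
ACTION=="add", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", RUN+="/usr/sbin/hdparm -W0 /dev/%k"
```

After dropping the file in place, `udevadm control --reload` picks it up without a reboot.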
- Exercise your drives before adding them to your storage pool. (Careful: badblocks -w is a destructive write test, so run it before the drive holds any data.)
badblocks -v -w -s -c 1024 /dev/sdX
smartctl -t long /dev/sdX
Drives seem to follow a reliability bathtub curve: brand-new and worn-out drives are the most prone to failure. Check your drives before relying upon them.
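If you're burning in several drives at once, a small loop saves typing. This sketch only builds and prints the command list for review (pipe the output to sh to actually run it); the drive list is a placeholder, and remember that badblocks -w wipes the drives.

```shell
#!/bin/sh
# Dry-run sketch: print the burn-in commands for a set of drives.
# Pipe the output to sh to execute; badblocks -w is destructive,
# so only do this on drives that hold no data yet.
drives="/dev/sdb /dev/sdc /dev/sdd"   # placeholder drive list
for d in $drives; do
  cmds="${cmds}badblocks -v -w -s -c 1024 $d
smartctl -t long $d
"
done
printf '%s' "$cmds"
```

When the long SMART self-test finishes, check the results with `smartctl -a` before trusting the drive.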
- Consider your recovery requirements. What data can you afford to lose? What data can you reconstruct (re-rip optical media, re-download, etc.)? What data can suffer some temporary (hours, days) unavailability? What data must always be available?
- Enable ERC/TLER error recovery timeouts where possible when using multi-drive arrays. Without ERC/TLER, a consumer drive can stall for minutes retrying a bad sector, which is long enough for the array to kick it out entirely. Consider near-line storage quality drives or better (compared to consumer drives) when building your storage arrays. The current WD Red series drives are aimed squarely at the prosumer and small business mass storage markets.
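On drives that support it, smartctl can set the ERC timeout. This sketch prints the commands for a 7-second (70 decisecond) read/write timeout rather than running them; the drive list is a placeholder, not every drive honors scterc, and the setting usually resets at power-off, so it belongs in a boot script.

```shell
#!/bin/sh
# Dry-run sketch: print smartctl commands that set a 7-second ERC timeout
# (70 deciseconds, read and write) on each array member. Pipe to sh to
# apply. Drive list is a placeholder; consumer drives may refuse scterc.
drives="/dev/sdb /dev/sdc"   # placeholder drive list
for d in $drives; do
  cmds="${cmds}smartctl -l scterc,70,70 $d
"
done
printf '%s' "$cmds"
```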
- Remember that RAID0 isn't RAID; it's not redundant. RAID0 is only for transient or reconstructable data.
- RAID1 and variants (10, 1E) are a great choice when you can afford the drives and loss of capacity. Performance and reliability can be quite good. You're throwing drives at the problem and reaping the rewards.
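Setting up a mirror with Linux md is short. This sketch prints the commands for review instead of running them, since they destroy any existing data; the device names and mount point are placeholders.

```shell
#!/bin/sh
# Dry-run sketch: the commands to create a two-drive RAID1 array,
# format it ext4, and mount it. Printed for review; pipe to sh to
# apply. Device names and mount point are placeholders.
cmds="mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mount /dev/md0 /srv/storage"
printf '%s\n' "$cmds"
```

Watch /proc/mdstat while the initial sync runs; the array is usable (if slower) in the meantime.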
- Software RAID5 is scarier than you might think. Data and reconstruction information are scattered across all drives in the array. How much do you trust your drives, controllers, and system/power stability to keep this all in sync? Putting most filesystems atop an unreliable RAID5 is a recipe for disaster. Battery-backed hardware RAID5/6 has its place. I'm reasonably convinced that software RAID5/6 doesn't. Beware the write hole: a crash in the middle of a stripe update leaves data and parity inconsistent, and a later rebuild from that parity can corrupt blocks you weren't even writing.
- ZFS is cool on Solaris and FreeBSD. It's now even cooler with ZFS on Linux. ZFS RAID-Z can be a reasonable and reliable software replacement for hardware RAID5/6. You're not going to see blazing speeds, but you're getting end-to-end checksumming. If you want blazing speeds, get an SSD. +Yan-Fa Li mentioned that he gets 300MiB/s from his six-drive ZFS setup, more than enough to saturate gigabit Ethernet. Maybe leave the competition to the folks over on [H]ard|OCP and consider your specific use cases.
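Creating a RAID-Z pool is nearly a one-liner. As above, this sketch prints the commands rather than running them; the pool name and drive list are placeholders.

```shell
#!/bin/sh
# Dry-run sketch: create a single-parity RAID-Z pool from three drives,
# then enable compression (a cheap win on most data). Printed for
# review; pipe to sh to apply. Pool name and drives are placeholders.
cmds="zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
zfs set compression=on tank"
printf '%s\n' "$cmds"
```

Use raidz2 instead of raidz for double parity if you can spare the extra drive.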
- Btrfs is the next great Linux filesystem that somehow never arrives. If ZFS was licensed for inclusion in the Linux kernel proper, btrfs might just fade away. I've used both, and btrfs doesn't even seem to aspire to be as good as ZFS already is. Sorry!
- Runtime redundancy is no substitute for backups. What happens if an entire storage system is lost or fried? Consider maintaining a limited backup storage system on site and copying treasured data to the cloud. Companies historically store offsite backups in case of disaster. Cloud storage can provide offsite backup insurance for the rest of us.
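For the on-site backup copy, plain rsync goes a long way. A sketch with placeholder host and paths; -n makes rsync itself preview changes without copying, which is worth doing before the first real run since --delete removes anything on the destination that isn't in the source.

```shell
#!/bin/sh
# Dry-run sketch: mirror a treasured directory to a second storage
# system. -a preserves metadata, --delete makes a true mirror, and -n
# previews the transfer. Drop -n for the real run once the (placeholder)
# paths look right.
cmds="rsync -a -n --delete /srv/storage/photos/ backuphost:/srv/backup/photos/"
printf '%s\n' "$cmds"
```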