Saturday, June 1, 2013

Developments in Home Streaming

After 10+ years of building and running home theater PC (HTPC) boxes, cheap streaming appliance frontends may finally be good enough to present a variety of local and Internet content on 1080p displays. This blog post provides a brief summary of the hardware and software solutions I've recently abandoned, and some suggestions and hints toward what might approach video Shangri-La.

In mid-2012 I discontinued use of my TiVo HD in favor of a MythTV setup with HDHomeRun network-attached tuners. For nearly a decade, I'd run TiVo series 1 and 3 (HD) boxes for my cable recording and living room display needs. The old TiVo was a life-changer. The original TiVo team built a brilliantly usable interface to fundamentally complex functionality. Unfortunately, TiVo has become something like the Palm Computing of DVRs. The current devices aren't that much more advanced than the original devices in terms of actual interface and practicality. It's as if TiVo laid off a brilliant team of engineers and failed to foster partnerships in the industry. When my local cable provider switched to digital-only, I ditched the TiVo rather than fighting with CableCARD and re-committing to TiVo. I will not support the cable industry's lockdown of content by turning every DVR or cable box into an additional source of revenue for the cable monopoly. This is now the age of Ethernet and Internet streaming. I refuse to acknowledge my cable provider as anything but an Internet pipe and source of local broadcast programming. I realize that many get the extended or premium cable TV packages. I'm done with that. I'll get my HBO and Showtime series over DVDs or streaming services long after the programs initially air. Considering most networks' proclivities to cancel new programs, I'm willing to wait a couple years for those programs that pass the gauntlet, and receive them at a time-discounted price.

I've run local streaming server and client HTPCs for years. PCs, especially Linux and open source PCs, have offered the freedom to store and stream personally owned/managed content for years. Just as the TiVo was my DVR, my HTPCs happily served up my personal AV archive and DVDs to any display in the house.

This year, I've experimented with devices like the MiniX Neo X5 (a Chinese, Android-based TV box) to act as cheap clients to attach to HDMI displays. I also built a new Windows 8 HTPC to access XBMC, MythTV, and streamed Netflix and Amazon content: http://travesh.blogspot.com/2013/03/super-slim-intel-htpc-build.html. In the end, neither Android nor Windows boxes are what I want attached to my displays. I want set-top boxes that work like appliances and have custom hardware and Linux underneath. I don't want to futz with custom Android firmwares and the intricacies of hardware-accelerated playback, nor the complexities of a full Windows box with an awkward combination of Metro apps and some (annoyingly) browser-accessed content. What I want, and what I suspect many want, is something akin to the old TiVo appliance that can seamlessly access both local media and current Internet streaming services. Nothing else will do. This past week, I repurposed my long-serving AMD/Nvidia (MythTV) HTPC to be a network firewall/router and bastion host. I expect to repurpose my new Windows 8 HTPC as a network server this coming week. PCs are finally becoming overkill for driving 1080p displays; they're still too much like PCs and not enough like appliances.

My current preference for driving displays is the Roku 3. It works like an appliance for the sources I use: Netflix, Amazon, Aereo, and Plex. My Synology box now acts as a very adequate Plex server for my own content, and can be integrated to work with my MythTV backend. Netflix, Amazon, and Aereo can stream at 720p or 1080i/p. The Roku has a great appliance interface and a great remote control. I love being able to attach headphones to the Wi-Fi remote control itself for quiet late-night viewing.

In the end, here are my recommendations: A Roku 3 for each dumb display, a MythTV backend for cable-based content, a Plex server for your personal media archive, and portable devices to shore up any other areas. Aereo may be here for the long haul, or may be a flash in the pan. MythTV and ClearQAM or CableCARD also have an uncertain future. Transcode or pop your archive and TV content onto a Plex server so it can outlive any one device or service. The world of broadcast and streamed video is changing rapidly. I generally trust the Roku (version 3) and Plex to stand the test of time. This may change. ;-/ I'll be right there to report as changes occur. As always, I'd love to hear on G+ what's currently working for everyone else or what direction you're heading. Cable and Internet video is always a moving target. This will presumably settle down at some point, but we're in a period of disruption and business and legal change. I simply hope you AV enthusiasts out there are navigating a path that works for you and your family! Thanks for listening during these unsettled times!

Tuesday, May 28, 2013

Old New England Homes

[Veering off my normal technical-only discussion today due to impending hot weather forecast for my part of the world later this week.]

My home in New Hampshire was built around 1850 to replace the run-down old garrison just to the north. The story is that some salvaged materials (bricks, timbers, boards?) from the old garrison were used in the construction of the "new" farmhouse. The garrisons were fortified to repel Indian attacks and abductions in the area into the mid-1700s. My family roots in this area run deep, and numerous ancestors were killed or abducted, especially around the time of the Dover, NH Massacre of 1689. Abductees were typically taken north to Quebec and held for ransom. One of my ancestors was a "heroine" of the Dover Massacre; another declined to return to her abusive husband after abduction to Quebec. There's a lot of history here. Of course, documented historical roots don't run as deep here in the New World as in Eurasia. I recall staying at an old, old inn in Wales in 2006 where one corner of the room was obviously several inches higher than the other.

Like many old New England homes, my home has a borning room. This is a small room just off the kitchen/hearth where mothers would give birth, infants would be tended, and the sick and elderly would be provided solace. The hearth was the living center of the old New England homestead. This makes special sense given the history of the Little Ice Age in North America. My home, although not especially large for the time, has a vaulted brick arch in the basement to support the Dutch oven and fireplaces on the first floor. My paternal grandparents acquired this home in a somewhat run-down and unimproved state (no electricity, plumbing) in the early 1940s. My third-great-grandfather had lived in this same home when it was relatively new. Homes "in town" during the same period would have had better amenities, higher ceilings, grander staircases, etc. compared to rural farmhouses. On the plus side, I have almost 500 acres of conservation land in my back yard.

For the past couple years I've been sleeping in the small borning room off the long, farm-style kitchen. This room has space for a small bed wedged against three walls, a nightstand, and a dresser. It also happens to be cold in the winter and hot in the summer due to a southeast location and little insulation. Many new homes aren't built to last, but at least they use a sensible layout and insulation. Surviving old homes were built with quality materials, yet impose compromises for modern living. With a warm weather forecast for later this week, I've decided to move to the upstairs bedroom. I've ordered a small window air conditioner that will hopefully arrive and be installed before hot weather strikes on Thursday.

When my grandparents acquired the house, the upstairs was largely unfinished. My grandmother had most of the upstairs built out as a separate apartment some years after my grandfather passed away. The current tenant has been there for almost 30 years. I retain one bedroom at the top of the (steep) front stairs with no upstairs bathroom access. This certainly isn't the house as I want it to be. That said, I have a great tenant and will be gradually making practical improvements. For now, I need to figure out what should be moved to the upstairs bedroom to make it habitable. I have a bed, lamp, and nice closet but desperately need a bedside stand and dresser. Longer term, I'd love to figure out either a fix for the steepness of the stairs or a separate upstairs bathroom. It's all possible, but starts to run into real money. At some point, I'll have to run the cost-benefit analysis of whether it's better to buy or build the home I want and let renters deal with the quirks of the old farmhouse.

Sunday, May 26, 2013

Cutting the Cable?

In this installment, I'll discuss the gradual progression that's been driving my brand of AV enthusiast away from the cable TV monopolies and onto the Internet. I'll also focus on Aereo's local broadcast DVR service in both a mini-review and in the larger landscape.

The more you tighten your grip, the more star systems will slip through your fingers.
- Princess Leia

This quote often springs to my (admittedly twisted) mind when discussing the lengths to which media distributors and cable monopolies such as Comcast, Time Warner, etc. will go to maintain control in a world of cheap recordable media and broadband Internet service. On the Internet, everything is just (potentially encrypted, unrecognizable) data. The media genie has long escaped the bottle. Somehow the RIAA and music distributors (MB/song) got the message, but the MPAA and cable companies (GB/video) are still refusing to give ground. I recall being young and poor and not wanting to pay for intellectual property I couldn't afford anyway; this is still the third world argument for rampant IP piracy. It's not a terrible argument. If duplication/distribution costs are negligible and legal distributors won't operate in your region, why should distributors be able to restrict your access to information and entertainment? Is infotainment for the producers or the audience? It takes both!

I now have the money to legally access the media I desire and the value system to resent overly restrictive controls and obsolete business models propped up by bad laws. Here is my manifesto.
  1. I will pay for the content I want if the terms are at all reasonable. I have bought (sometimes rebought) much of my digital music on iTunes and Amazon. I pay for Netflix and Amazon streaming.
  2. I am reluctant to pay a one-time fee to "own" or stream a piece of DRM restricted content.
  3. If the content owner will not sell into a region, I do not believe piracy in that region is wrong. Information wants to be free.
  4. I will not pay dozens of monthly fees to get the content I want. There is absolutely a place for content aggregators and streaming services.
  5. Whenever one distributor/studio "takes their toys and goes home" by establishing a separate service for their content, I will boycott that distributor. You chose to be difficult; there are other fish in the sea.
  6. I will not step back from the functionality of a TiVo to skip commercials or replay content. I will not pay money for locked-down content.
Wouldn't it be great if the industry could just work out a reasonable plan for both DRM-free video downloads and cross-platform video streaming? Kudos to Apple for absolutely forcing this on the music distributors back in the day and actually saving their revenue streams in a changing world.

Then there are the broadcast networks and cable companies, still trying to eke out an existence from advertising, retransmission fees, monopolies, and legal bullying. Since there's apparently no reasoning with all the broadcast networks at once unless you're a cable monopoly, we now have companies like Aereo. In case you haven't followed the drama, Aereo is trying to work within the law to provide each subscriber with their own tiny TV antenna and DVR streaming service in their local broadcast area. Cloud-based DVR streaming already has legal precedent behind it. Aereo's service is especially useful for those viewers on the outskirts of the metro broadcast area (like me) or with physical obstacles in the signal path (like me) or with intractable landlords. Aereo is specifically not for people who wish to receive extended or pay cable channels; it's for cord cutters for whom a physical antenna is difficult or impossible. You're paying Aereo a monthly fee to host your antenna and DVR. There's some interesting technology behind this, but the legality seems fairly straightforward. The lawsuits are flying and Aereo has generally been prevailing. Fox and CBS have threatened to pull their stations from the public airwaves if Aereo ultimately prevails.

If Aereo is able to grow quickly enough to establish a significant subscriber base, they may be able to negotiate with the networks just as the cable monopolies do now. This is certainly my hope. This would also allow Aereo to reduce their resource requirements by storing a single (replicated for redundancy and performance) stream for each provider with whom they have an agreement. It's almost a given that broadcast TV will go away in the next decade, freeing up the spectrum for other uses. The FCC will need to change the regulatory landscape to provide some degree of free access to public news and alerts on TVoIP.

If I haven't mentioned much about the Aereo service itself, it's because it pretty much works as advertised. I'm on the $12/mo plan that provides two simultaneous channels and 60GB of DVR space. I'd love to know more of the technical details behind how they make their system work. There's apparently some serious transcoding going on at recording/viewing time. They're almost certainly not doing data deduplication for video storage; this would add a legal gray area, and they're absolutely trying to stay legal. I've tried viewing both on a laptop screen and across a room on a TV. Close viewing shows very obvious macroblocking artifacts, visible interlacing on some content, and horizontal line artifacts where 1080i content is being naively downscaled to 720p. At TV viewing distances, most issues become "good enough" for this non-videophile. A lack of macroblock dithering/blending across large areas of similar color is still visible. It may be possible to eliminate much of this by properly calibrating display brightness levels. I have not observed frame skipping or lengthy buffering problems.

Aereo is presenting a somewhat specialized solution. If you're tied to cable Internet, bundled basic cable may be a better deal. If you want cable/satellite-only channels, you'll need cable or satellite. I'm no longer willing to put a locked-down, slow, and awful cable box on each of my displays. The cable company has tightened their grip with encrypted digital content and threatened removal of unencrypted broadcast content (ClearQAM) formerly mandated by the FCC. The cable companies are trying to drag us back to the bad old days of the phone company ("Ma Bell") when everyone had to lease each and every phone from the monopoly provider. Grasping behavior like that has me slipping through their fingers.

Sunday, May 19, 2013

Overthinking Your Network

[In this installment, I discuss the overwhelming tyranny of choice faced by the home networking power user who is starting to consider enterprise approaches (VLANs, routing protocols, multi-WAN, redundancy, and failure bypass options) for what is fundamentally a home network.]

Most home Internet users have a DSL or cable modem and a Wi-Fi firewall/router. Power users may run third-party router firmware or a dedicated PC firewall/router alongside dynamic DNS registration and port forwarding for a few services. Many power users want to be able to securely connect to services on their home network from the outside. This may be the age of cloud computing, yet it's also the age of increased home automation. There are turnkey home automation systems that use cloud servers for control via mobile apps and browsers, there are DIY automation systems, and there probably won't be standards for home control, monitoring, and security via the Internet for years to come. Secure outside connection to one's home network is pretty much the hallmark of the power user.

I'm currently on a two year Comcast Business Class contract. This gives me five public static IP addresses, not including the Comcast gateway box itself. Mo' IPs, mo' problems. Having these separate external IPs gives me dreams of putting my outgoing home traffic on one, my VoIP PBX on another, a wireless guest network on a third, incoming services on a fourth, and admin/automation on a fifth. It's not like I need all these addresses, but I sure like the idea of them...

Then there's the issue of VLANs. All my network switches are "smart(ish)" and support VLANs:

  • 8 port GigE PoE switch
  • 8 port GigE smart switch
  • 24 port GigE smart switch #1
  • 24 port GigE smart switch #2
VoIP phone jacks will obviously be patched to the PoE switch. These ports will default to the "home networking" VLAN and can recognize phones to put them on a separate VoIP VLAN. I want secure Wi-Fi access to my home networking VLAN and separate unprotected guest Wi-Fi with captive portal. Guests would have to VPN in to the secure home network if necessary. I want some Internet services to be provided from server(s) in a DMZ, which could end up as one or more VLANs.

None of this really gets me to a network architecture yet, but I'm starting to converge on a set of rules or guiding principles:

  1. Expose as few boxes as possible on the public Internet and secure the hell out of those boxes. I'll probably end up with a single firewall/router handling all my public IPs. Providing multiple external points of intrusion to internal networks seems like a bad idea.
  2. Secure administrative access to all your equipment. Imagine the following scenario: An attacker gains access to a DMZ server, scans every possible IP in the local network, identifies switch hardware by MAC address, then connects to the admin interface of said hardware via the default user/password, or reconfigures a local interface onto the same network as the default admin IP and connects that way. Security by VLAN: gone.
  3. Use 1:1 NAT and/or PAT and private addressing for everything behind the Internet point of entry. This seems more flexible than a transparent/bridging firewall and could accommodate an external transition to IPv6 while continuing to support legacy IPv4 devices internally.
  4. Prefer VLAN separation to a single DMZ network to prevent one server acting as a point of intrusion from which to launch attacks on other servers. Some switch vendors have features that can provide this isolation without the hassle of VLANs; sadly, I'm not paying for Cisco.
  5. Don't NAT between internal networks. This destroys the audit trail and security of knowing exactly which internal client is connecting to your internal services.
  6. Keep internal and external equipment separate. Let's say your external firewall/router fails. Internal networking and VoIP calls should continue to function. Let's say an internal network switch fails. Guest Wi-Fi access to the Internet should continue to function.
  7. Carefully identify single points of failure and their importance. Have a plan for manual network reconfiguration in degraded mode and/or spare equipment or virtual machine instances to swap in.
  8. Prefer enterprise equipment at reliability choke points. A consumer Wi-Fi router might not be your best choice for your core or sole external firewall/router. Don't let your wired network go down when your Wi-Fi overheats and power cycles.
A home network usually doesn't have the same luxuries as an enterprise network. We're not generally going to keep spare equipment lying around and we don't have rapid response service contracts. Still, there are approaches to ease the pain. Consider buying two smaller switches over a single large one; how many ports do you really need at one time? Consider building virtual machine instances for critical services and be able to spin up and patch networking into VMs when hardware fails.
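Principle 3 can be sketched with Linux iptables. Everything here is a hedged placeholder rather than my actual configuration: the interface name, the public address (a TEST-NET-3 documentation IP), the private DMZ address, and the served port are all assumptions you'd substitute. The script writes the rules to a file for review instead of applying them (pipe the file to `sh` as root when you're satisfied):

```shell
#!/bin/sh
# Sketch: 1:1 NAT mapping one public static IP to one DMZ host.
# All names and addresses below are hypothetical placeholders.
RULES=./nat-rules.sh
: > "$RULES"

WAN_IF="eth0"                # external interface (assumption)
PUBLIC_IP="203.0.113.10"     # one of the static IPs (documentation address)
DMZ_HOST="192.168.50.10"     # private address of the DMZ server

emit() { echo "iptables $*" >> "$RULES"; }

# Inbound: traffic arriving for the public IP is translated to the DMZ host.
emit -t nat -A PREROUTING -i "$WAN_IF" -d "$PUBLIC_IP" \
     -j DNAT --to-destination "$DMZ_HOST"
# Outbound: replies and new connections from the DMZ host leave stamped
# with the same public IP, completing the 1:1 mapping.
emit -t nat -A POSTROUTING -o "$WAN_IF" -s "$DMZ_HOST" \
     -j SNAT --to-source "$PUBLIC_IP"
# Forwarding policy: only permit the ports the DMZ host actually serves.
emit -A FORWARD -d "$DMZ_HOST" -p tcp --dport 443 \
     -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT

cat "$RULES"
```

The same pattern repeats per public IP (VoIP on one, guest Wi-Fi egress on another, and so on), all terminating on the single hardened firewall/router from principle 1.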

As always, I welcome criticism and suggestions from real experts. I'm kind of trying to have a dialog with myself about how to build out my own network, but a dialog with others is even better!

Sunday, April 28, 2013

Home Networking for Enthusiasts

As summer approaches, many people are thinking about outside projects and planning for holidays. As an IT geek, I'm planning a more professional and capable home networking setup.

I live in a 160+ year old farmhouse. Electrical wiring upgrades have occurred as needed since the house was first fitted for electricity by my grandparents in the 1940s. When ownership passed to me in 2007, I couldn't even get insurance without upgrading the old fuse boxes (separate upstairs and downstairs apartments) to modern breaker boxes. I'd think they'd be more concerned about old wiring. Thankfully, it's all post-1940.

My MPOE (telco, cable Internet, and power) is in the southwest corner of the basement. The basement itself has a poured concrete floor, mostly reinforced walls (via concrete block) and both granite and old clay brick features. The "ceiling" is mostly less than 6' with fixtures that extend below that level. This is an old "working" basement rather than a finished one. There is a drain at the south wall and up to a half inch of water on the floor during a wet spring. This spring has been quite dry, thus no water on the floor.

High airflow computer power supplies hate humidity. I've stupidly burned through a couple discovering this. Low airflow and fanless systems seem to accumulate enough dry heat to survive. Since the basement is both my MPOE and an otherwise naturally cooled, out-of-the-way space, this seems like the place to put servers and networking equipment. The humidity calls for a basement dehumidifier. Even without a server rack, the mold and deteriorating brick situation likely calls for one. I can set humidity at a reasonable level and run collected water into my existing drain.

Whether your "server closet" is a basement, closet, attic or something else, consistent (lowish) temperatures and low humidity are a necessity. I'm somewhat jealous of my Northern California friends; even during the winter wet season, a section of an attached garage probably works just fine as long as heat can be exhausted upwards or out.

I have something of a fetish for rackmount equipment. For many, a simple 12U wall rack would work just fine. For serious geeks, nothing less than 42U will do. I'm somewhere in between, let's say 25U. I have an arch space under my chimney that is centrally located, not far from my MPOE, and otherwise unused. We'll see whether I can use that or need to run exhaust heat to an exterior window. It might actually be easier to run exhaust heat up the chimney along with the furnace exhaust.

I started out thinking that my Asus RT-N66U might be capable enough to act as my main firewall/router. Now I'm thinking I need a more elaborate setup:
Cable Internet Gateway => External Firewall/Router => DMZ => Internal Firewall/Router
Equipment is not a problem; I have a ton of old, unused equipment perfectly suited for routing and filtering packets. Cases are a bit of a problem. I strongly prefer rackmount to a bunch of generic PC and consumer appliance cases. I want my rack to be beautiful; call it what it is: a pointless fetish. I demand satisfaction. ;-P

Rackmount cases are seriously expensive (quality 1U => $200 USD), presumably because they're specialized, enterprise equipment. I'm currently looking into repurposing some old 1U Cobalt RaQ3 cases to hold mini-ITX boards. Christian Vogel has kindly shared information about the front panel pinout of these cases on his blog. This is very cool, as I've been wanting to get into Arduino development and interfacing for a while anyway.

As always, feedback here or on G+ is welcome. Let's see some pictures of your setups. How are you managing environmental factors for your home equipment? What are your tips for those who are inclined to be their own networking and server gurus at home? As a side note, I've been gradually moving critical services to the AWS cloud, yet I still need decent networking, security, storage, media servers and playback, and home automation at home. I suspect others are in the same boat. Let's hear about it!

Sunday, April 7, 2013

ZFS Storage Conversion Postmortem

Until recently, my home data storage "solution" consisted of a single PC running Arch Linux and 3x largish SATA drives running JFS in a JBOD. The Linux box also runs my MythTV backend, and the whole setup evolved a bit organically. I'm willing to lose the TV recordings if something fails. CD and DVD rips, photos, and documents are replicated between the Linux server, Windows desktop, and Mac portable in a very manual and haphazard way. I've been lucky not to lose anything important.

With my (personal) office equipment now moved back home from my former telework space, it's time to get my data better organized and replicated. I'm documenting my specific hardware and software configuration here, mostly as a log for future me. There may be generally useful information hidden among the specifics; if so, I apologize in advance for making you sift for it.

Newly available equipment:

  • 9x 1TB WD Black drives purchased between late-'08 and early-'10 for an aborted storage build
  • 2x 3TB WD Red drives purchased a couple months ago
  • Synology DS1511+ 5x bay NAS chassis
  • Dell PowerEdge T610 server that had been used for work-related virtualization

The WD Blacks got some use in the NAS and as temporary backup drives; some just sat in a drawer for a couple years. :-/

The Dell T610 is total overkill for most of what I do now.

  • 2x Intel Xeon E5649 processors at 2.53GHz, yielding 12 cores
  • 48GB (6x 8GB) ECC DDR3 RDIMMs
  • 2x Broadcom NetXtreme II GigE NICs
  • 2x Intel ET GigE NICs
  • SAS 6/iR (LSI 1068e) storage controller
  • 2x 250GB Samsung drives in a hardware RAID1 system mirror
  • 6x drive blanks; Dell likes to sell high-margin drives with their caddies
So I wanted to turn the Dell into a backup storage and development virtualization box that is used mostly on an as-needed basis. I went into this plan without much of a sense of what system power usage would be, just that it would be high compared to my other systems.

[Photo: Dell T610 disrobed w/ caddies and drives]
The Dell was initially running Debian 6. My actual work went something like this:

  1. Take the faceplate off the Dell to discover 6x drive blanks. Um, yay.
  2. Discover that drives will function without caddies as long as I don't jostle anything.
  3. Try 3TB WD Red in the Dell only to discover the controller maxes out at 2TB.
  4. Decide to add 6x 1TB WD Blacks to the Dell, since that should work.
  5. 3x of those WD Blacks were running a RAID5 in my Synology NAS.
  6. Place order for 8x cheap drive caddies before things go much further.
  7. Start rsync backup of Synology NAS to my old MythTV server.
  8. Start serious burn-in test of 3x unused WD Blacks in the Dell.
  9. Dissolve RAID and yank 3x WD Blacks from the NAS after rsync completes.
  10. Hey, those caddies got here fast!
  11. Install 2x 3TB WD Reds in the NAS as a pure Linux RAID1.
  12. Start rsync restore from the MythTV server to the Synology NAS.
  13. Get 6x WD Blacks into Dell caddies and installed.
  14. Start less serious burn-in of the 3x WD Blacks that had been in the NAS.
  15. Install Debian Wheezy onto the Dell system drive mirror /dev/sdg.
  16. Broadcom bnx2 requires non-free firmware; switch to Intel NIC for install.
  17. Figure out how to make grub write the boot record to sdg not sda.
  18. Install modules and tools from ZFS on Linux; that was easy!
  19. Struggle to zero out the WD Blacks w/ mdadm, dd, and parted.
  20. Create a RAID-Z1 pool with 5x WD Blacks and one warm spare.
  21. Install cpufrequtils to reduce system power usage.
  22. Begin testing the hell out of the ZFS pool.
Idle power usage with 8x drives is 173W. I still want to abuse ZFS a bit more by running through simulated failure scenarios. I've used ZFS on Solaris and FreeBSD, but never on Linux before this. So far so good. There's still work to do setting up automated backups between the Dell and the NAS and generally getting my data more organized. At least there's now a framework in which that can happen.
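The pool layout from step 20 can be sketched as follows. The pool name ("tank") and device names are assumptions for illustration, not my actual setup, and the zpool commands are shown as comments since they require a live ZFS system; the runnable part is just the RAID-Z1 capacity arithmetic:

```shell
#!/bin/sh
# Hypothetical commands matching step 20 (pool/device names are placeholders):
#
#   zpool create tank raidz1 sdb sdc sdd sde sdf   # 5-drive raidz1 vdev
#   zpool add tank spare sdh                       # warm spare
#   zpool scrub tank                               # part of "testing the hell out of it"
#
# RAID-Z1 arithmetic: one drive's worth of capacity per vdev goes to parity,
# so a 5x 1TB raidz1 vdev yields roughly 4TB usable (before metadata overhead
# and the usual keep-some-free-space guideline).
drives=5
drive_tb=1
usable_tb=$(( (drives - 1) * drive_tb ))
echo "raidz1 of ${drives}x ${drive_tb}TB -> ~${usable_tb}TB usable"
```

The warm spare doesn't add capacity; it just shortens the window of reduced redundancy after a drive failure.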

Wednesday, April 3, 2013

Home Storage for Enthusiasts

The cloud is here, but do you really trust all your CD, DVD, and Blu-ray rips and personal photos and videos to the cloud? This blog post is about home storage options, mostly focusing on Linux-based appliances and small servers. Windows Home Server (WHS) is also a rather cool technology, although its future and forward migration path is by no means certain. Being a Linux guy, that's where I'll focus.

For those who don't want to fiddle, I highly recommend a home NAS appliance. I have a Synology DS1511+, although there are other great options as well. Be prepared to spend some money for the sake of convenience. This isn't as cheap and dirty as adding drives and controllers to an old PC. +Piaw Na reminds us to consider what happens when your NAS appliance fails. Many NAS vendors use proprietary storage mechanisms that allow mixing disks of various sizes. Consider a NAS configuration that supports stock Linux software RAID and ext3/4 filesystems. You or your local Linux guru can then access your data if your NAS appliance fails.

Cheap and dirty is kind of where I'd like to focus. This means x86 hardware, Linux, near-line (or prosumer) SATA drives, and software-based data redundancy. Proper RAID disk controllers have battery backup for recovery after power failure and are prohibitively expensive for home use. Software RAID5 may not behave trustworthily when something fails. The first rule of cheap and dirty SATA drives is that manufacturers will cheat to obtain better benchmark performance. Drives will cache writes, violate commands to sync ordered writes to disk, and fail very ungracefully on system or power failure. Your drives are optimizing for the common case where nothing ever fails. That's well and good until something fails...

Here are some tips for building that Linux-based file server. Thanks to +Yan-Fa Li for his additional pointers and reminders.

  1. Consider systems that support ECC (error correcting) memory. Some consumer AMD processors and boards used to support this. Intel Xeon systems generally support ECC, as do Core i3-32XX processors on certain motherboards. Data and filesystem metadata that is corrupted in memory never makes it onto your disks correctly.
  2. Disable write caching on your disk drives.
    hdparm -W0 /dev/sdX
    You'll need to automate this to run at boot time.
  3. Exercise your drives before adding them to your storage pool.
    badblocks -v -w -s -c 1024 /dev/sdX
    smartctl -t long /dev/sdX

    Drives seem to follow a reliability bathtub curve. New and old drives seem more prone to failure. Check your drives before relying upon them.
  4. Consider your recovery requirements. What data can you afford to lose? What data can you reconstruct (re-rip optical media, re-download, etc.)? What data can suffer some temporary (hours, days) unavailability? What data must always be available?
  5. Enable ERC/TLER error recovery timeouts where possible when using multi-drive arrays. Consider near-line storage quality drives or better (compared to consumer drives) when building your storage arrays. The current WD Red series drives are practically aimed at the prosumer and small business mass storage markets.
  6. Remember that RAID0 isn't RAID; it's not redundant. RAID0 is only for transient or reconstructable data.
  7. RAID1 and variants (10, 1E) are a great choice when you can afford the drives and loss of capacity. Performance and reliability can be quite good. You're throwing drives at the problem and reaping the rewards.
  8. Software RAID5 is scarier than you might think. Data and reconstruction information is scattered across all drives in the array. How much do you trust your drives, controllers, and system/power stability to keep this all in sync? Putting most filesystems atop an unreliable RAID5 is a recipe for disaster. Battery-backed hardware RAID5/6 has its place. I'm reasonably convinced that software RAID5/6 doesn't. Beware the write hole.
  9. ZFS is cool on Solaris and FreeBSD. It's now even cooler with ZFS on Linux. ZFS RAID-Z can be a reasonable and reliable software replacement for hardware RAID5/6. You're not going to see blazing speeds, but you're getting end-to-end checksumming. If you want blazing speeds, get an SSD. +Yan-Fa Li mentioned that he gets 300MiB/s from his 6 drive ZFS setup, enough to saturate gigabit Ethernet. Maybe leave the competition to the folks over on [H]ard|OCP and consider your specific use cases.
  10. Btrfs is the next great Linux filesystem that somehow never arrives. If ZFS was licensed for inclusion in the Linux kernel proper, btrfs might just fade away. I've used both, and btrfs doesn't even seem to aspire to be as good as ZFS already is. Sorry!
  11. Runtime redundancy is no substitute for backups. What happens if an entire storage system is lost or fried? Consider maintaining a limited backup storage system on site and copying treasured data to the cloud. Companies historically store offsite backups in case of disaster. Cloud storage can provide offsite backup insurance for the rest of us.
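For tip 2's note about automating `hdparm -W0` at boot, here is one possible approach on a systemd-based distro. This is a sketch under assumptions: the unit name and the `/dev/sd[b-g]` device glob are placeholders you'd adjust for your own array members. It writes the unit file locally for review rather than installing it; installation (copying to /etc/systemd/system/ and enabling) requires root:

```shell
#!/bin/sh
# Sketch: generate a oneshot systemd unit that disables on-drive write
# caching for each array member at boot. Unit name and device glob are
# assumptions; verify the glob matches only your data drives.
UNIT=./disable-write-cache.service
cat > "$UNIT" <<'EOF'
[Unit]
Description=Disable on-drive write caching on array members
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'for d in /dev/sd[b-g]; do /sbin/hdparm -W0 "$d"; done'

[Install]
WantedBy=multi-user.target
EOF
echo "wrote $UNIT"
# To install (as root):
#   cp disable-write-cache.service /etc/systemd/system/
#   systemctl daemon-reload && systemctl enable --now disable-write-cache
```

On sysvinit-style systems the same loop can simply go in /etc/rc.local; the point is just that the hdparm setting does not persist across power cycles on its own.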
These tips are something of a work in progress as I build my home storage array and backup. Follow the discussion on Google+.