Sunday, April 7, 2013

ZFS Storage Conversion Postmortem

Until recently, my home data storage "solution" consisted of a single PC running Arch Linux with 3x largish SATA drives formatted JFS and used as a JBOD. The Linux box also runs my MythTV backend, and the whole setup evolved a bit organically. I'm willing to lose the TV recordings if something fails. CD and DVD rips, photos, and documents are replicated between the Linux server, Windows desktop, and Mac portable in a very manual and haphazard way. I've been lucky not to lose anything important.

With my (personal) office equipment now moved back home from my former telework space, it's time to get my data better organized and replicated. I'm documenting my specific hardware and software configuration here, mostly as a log for future me. There may be generally useful information hidden among the specifics; if so, I apologize in advance for making you sift for it.

Newly available equipment:

  • 9x 1TB WD Black drives purchased between late-'08 and early-'10 for an aborted storage build
  • 2x 3TB WD Red drives purchased a couple months ago
  • Synology DS1511+ 5-bay NAS chassis
  • Dell PowerEdge T610 server that had been used for work-related virtualization

The WD Blacks got some use in the NAS and as temporary backup drives; some just sat in a drawer for a couple years. :-/

The Dell T610 is total overkill for most of what I do now.

  • 2x Intel Xeon E5649 processors at 2.53GHz, yielding 12 cores
  • 48GB (6x 8GB) ECC DDR3 RDIMMs
  • 2x Broadcom NetXtreme II GigE NICs
  • 2x Intel ET GigE NICs
  • SAS 6/iR (LSI 1068e) storage controller
  • 2x 250GB Samsung drives in a hardware RAID1 system mirror
  • 6x drive blanks; Dell likes to sell high-margin drives with their caddies

So I wanted to turn the Dell into a backup storage and development virtualization box, used mostly on an as-needed basis. I went into this plan without much sense of what system power usage would be, just that it would be high compared to my other systems.

[Photo: Dell T610 disrobed, with caddies and drives]
The Dell was initially running Debian 6. My actual work went something like this:

  1. Take the faceplate off the Dell to discover 6x drive blanks. Um, yay.
  2. Discover that drives will function without caddies as long as I don't jostle anything.
  3. Try 3TB WD Red in the Dell only to discover the controller maxes out at 2TB.
  4. Decide to add 6x 1TB WD Blacks to the Dell, since that should work.
  5. Realize that 3x of those WD Blacks are still running a RAID5 in my Synology NAS.
  6. Place order for 8x cheap drive caddies before things go much further.
  7. Start rsync backup of Synology NAS to my old MythTV server (command sketch after the list).
  8. Start serious burn-in test of 3x unused WD Blacks in the Dell.
  9. Dissolve RAID and yank 3x WD Blacks from the NAS after rsync completes.
  10. Hey, those caddies got here fast!
  11. Install 2x 3TB WD Reds in the NAS as a pure Linux RAID1.
  12. Start rsync restore from the MythTV server to the Synology NAS.
  13. Get 6x WD Blacks into Dell caddies and installed.
  14. Start less serious burn-in of the 3x WD Blacks that had been in the NAS.
  15. Install Debian Wheezy onto the Dell system drive mirror /dev/sdg.
  16. Broadcom bnx2 requires non-free firmware; switch to the Intel NICs for the install (firmware fix sketched after the list).
  17. Figure out how to make grub write the boot record to sdg, not sda (sketch below).
  18. Install modules and tools from ZFS on Linux; that was easy!
  19. Struggle to zero out the WD Blacks w/ mdadm, dd, and parted (sketch below).
  20. Create a RAID-Z1 pool with 5x WD Blacks and one warm spare (sketch below).
  21. Install cpufrequtils to reduce system power usage (sketch below).
  22. Begin testing the hell out of the ZFS pool.
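
A few of the messier steps deserve actual commands, if only for future me. The backup and restore in steps 7 and 12 were plain rsync over SSH; roughly the following, with the hostname and paths as stand-ins rather than my real layout:

    # Push the NAS shares to the old MythTV server; -H preserves hard
    # links, --numeric-ids avoids UID remapping surprises on restore.
    rsync -aH --numeric-ids --progress \
        /volume1/shares/ backup@mythtv:/srv/nas-backup/

The restore in step 12 was the same thing with source and destination swapped.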
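
Step 16's longer-term fix, once the base system was up, was just pulling the Broadcom firmware from non-free (assuming I'm remembering the package name right):

    # After adding non-free to /etc/apt/sources.list:
    apt-get update
    apt-get install firmware-bnx2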
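
Step 17 amounted to telling Debian's grub-pc which disk gets the boot record; something like:

    # One-off: write the boot record to the system mirror, not sda.
    grub-install /dev/sdg
    update-grub
    # To make the choice stick across grub package upgrades:
    dpkg-reconfigure grub-pc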
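
The struggle in step 19 turned out to be leftover metadata: stale md superblocks plus partition tables at both ends of each disk (parted was mostly used for flailing). What finally worked was roughly this, per drive, with sdX as a stand-in; double-check the device name before pointing dd at anything:

    mdadm --zero-superblock /dev/sdX            # clear old Linux RAID metadata
    dd if=/dev/zero of=/dev/sdX bs=1M count=16  # wipe the front of the disk
    # GPT keeps a backup table at the end of the disk, so wipe that too:
    SECTORS=$(blockdev --getsz /dev/sdX)
    dd if=/dev/zero of=/dev/sdX bs=512 seek=$((SECTORS - 2048)) count=2048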
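
Step 20's pool creation, with the /dev/disk/by-id names shortened to placeholders (the real ones are long WD model-plus-serial strings) and a pool name invented for this post:

    # 5-drive raidz1 vdev plus one spare; by-id names mean the pool
    # survives the kernel reordering sdX names across reboots.
    zpool create tank raidz1 \
        wd-black-1 wd-black-2 wd-black-3 wd-black-4 wd-black-5 \
        spare wd-black-6
    zpool status tank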
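
And step 21, the stock Debian cpufrequtils setup; the ondemand governor lets the Xeons clock down when nothing is happening:

    apt-get install cpufrequtils
    echo 'GOVERNOR="ondemand"' > /etc/default/cpufrequtils
    service cpufrequtils restart
    cpufreq-info | grep "current CPU"   # confirm the cores are idling down
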
Idle power usage with 8x drives is 173W. I still want to abuse ZFS a bit more by running through simulated failure scenarios. I've used ZFS on Solaris and FreeBSD, but never on Linux before this. So far so good. There's still work to do setting up automated backups between the Dell and the NAS and generally getting my data more organized. At least there's now a framework in which that can happen.
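
The failure drills will be the usual zpool exercises; with the same placeholder device names as above, something like:

    # Fake a dead drive, swap in the spare, and watch the resilver:
    zpool offline tank wd-black-3
    zpool replace tank wd-black-3 wd-black-6
    zpool status tank
    # Plus periodic scrubs to catch bit rot early:
    zpool scrub tank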
