Sunday, September 29, 2013

BeagleBone Black Exploration

The BeagleBone Black is an inexpensive ($45 US), ARM-based hobbyist/developer board that runs Linux. This board is similar to the better known Raspberry Pi, enough so that it's hard not to compare them. The BBB employs the embedded-control-oriented Texas Instruments AM3359 Sitara SoC, while the RPi employs the video-oriented Broadcom BCM2835. I have both boards, so I'll definitely be comparing them.

I ordered the full BeagleBone Black (Rev A5C) starter kit from Adafruit including the 5V power adapter, prototyping breadboard and backplate and jumper wires, and "cape" PCB. I also recommend a microSD card to boot/install alternative Linux OS distros and a microHDMI cable/adapter and USB keyboard for debugging networking configuration.

BeagleBone Black and ChronoDot RTC on Prototyping Breadboard

The BBB comes standard with Ångström Linux on its onboard 2GB eMMC flash storage. Nod to the BBB over the RPi here. The Raspberry Pi requires a prepared SD flash card to get started.

For my purposes, I immediately loaded Arch Linux ARM onto a microSD card, booted from SD, and installed Arch onto the eMMC. I have to give a lot of credit to the developer(s) who produced the Arch image: it's nicely pre-configured to bring up wired Ethernet and SSH using DHCP. Arch thus allows a completely headless install and configuration. Alas, I messed up static IP configuration and had to attach a display and keyboard to work through it.

During the installation process, I felt motivated to benchmark both the eMMC and my microSD card (ADATA 32GB UHS-I).

Device               Read MB/s   Write MB/s
2GB eMMC                  21.2          3.9
32GB μSD (BBB)             4.3          1.7
32GB μSD (PC USB2)        19.6         19.2
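
These figures came from simple sequential transfer tests. Roughly, the method looks like the sketch below (paths are placeholders; my exact invocations may have differed):

    # Write test: 256MB, forcing a final sync so the page cache doesn't flatter the number
    dd if=/dev/zero of=/mnt/target/testfile bs=1M count=256 conv=fsync
    # Read test: drop the page cache first so we measure the device, not RAM
    sync && echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/target/testfile of=/dev/null bs=1M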

The SD interface is incredibly slow. The same card performs tremendously better on my PC monitor's USB2 adapter. I wouldn't want to run the OS from SD anyway, as reliability is always questionable. The microSD interface is clearly intended primarily as a means to load alternative distros onto the eMMC. Even using an SD card for a data/media filesystem takes a bit of effort.

Once running from the eMMC, the BeagleBone Black feels subjectively quick, certainly quicker than the Raspberry Pi. This is no surprise. The TI Sitara contains a more recent, higher-clocked, superscalar processor core with DDR3 RAM support.

My intended use for the BBB is as a controller for internal network services (DHCP, DNS, NTP) and home automation. Like the RPi, the BBB doesn't have a battery backed real-time clock (RTC). I easily added one via the ChronoDot RTC and Lemoneer's excellent guide. The BBB has no lack of I/O and should be awesome for monitoring and control. There are even two dedicated PRU cores for true real-time control. I haven't explored this yet, but the PRUs may eliminate much of the need to attach Arduino microcontrollers a la the Raspberry Pi.
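
For reference, attaching the ChronoDot mostly amounts to registering its DS3231 chip with the kernel's ds1307 driver. A minimal sketch, assuming the module is wired to I2C bus 1 at the usual 0x68 address (bus and rtc device numbers may differ on your kernel; Lemoneer's guide has the full procedure):

    # Register the RTC with the kernel on I2C bus 1, address 0x68
    echo ds1307 0x68 > /sys/class/i2c-adapter/i2c-1/new_device
    # Set the system clock from the newly created hardware clock
    hwclock -f /dev/rtc1 -s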

For my purposes, the BBB looks to be solidly better than the RPi. That said, the RPi still has some key advantages. There is a huge Raspberry Pi installed base and community. The RPi camera module is appealing. The RPi is decidedly superior for graphics and especially video tasks like XBMC. At these prices, there's something to be said for having one (or more) of each.

Sunday, September 1, 2013

A Night and a Day on Backup Power

Power Management for Home Computing and Networking


In this installment, I discuss strategies by which computing enthusiasts may reduce power and retain or improve availability of home network services.

TL;DR TBD ;-)

Low Power Computing


There's never been a better time for low power computing. A lot of us are optimizing electricity use, not just because power is expensive, but because we've become mindful of waste. Smartphones, tablets, and ultrabooks use far less power than desktop PC setups, and mobile devices train us to be aware of power use: we can't always plug in, so we optimize and prioritize around battery life. For those of us geeks who run home servers, more networking gear than a single WiFi router, whole-house DVR backends, and home automation, it may be time to apply our power management skills to the home. I have some additional motivation: disruptive power outages that take my home off-grid, ranging from moments to minutes, hours, or several days.

On-Demand Battery Backup


I've owned a number of small UPSes from APC and CyberPower. These are a great help for those whose power is less than reliable. For many, this is all the power protection they need. Check out the recent "line-interactive" models with "active PFC". These are typically the best match for modern electronic devices where UPS cost is an issue.

If your equipment is mission-critical, especially sensitive, or you need to run on dirty power (as from a generator), you should be considering a "double conversion on-line" UPS. These units rectify current to charge batteries and feed an inverter that produces stable AC output. On-line UPSes are more expensive and less efficient than their line-interactive counterparts, but they serve their niche.

Generating Electric Power


If you live in a remote or disaster-prone area, you may be considering the ability to generate your own power. You may be considering a photovoltaic solar installation even if you live in a center of civilization. Make no mistake, generating reliable AC power is a challenge. Some small or remote installations may be better off sticking to DC power and equipment. I won't write more about DC here, but may blog about it in the future if I get around to implementing my dream DC system.

Typically, generating your own power means getting a generator setup that runs on fossil or bio-fuel. Good generators are expensive. Around here, some people spring for whole-house standby generators with automatic control systems and transfer switches. These setups usually run on natural gas or propane and cost $10K USD or more. Many more people have smaller "portable" generators that they backfeed into house power, connect through a manual transfer switch, or simply plug equipment into directly. If backfeeding, it's critical to turn main line power off at or before the breaker/fuse box.

My generator is a 6500W (max) tri-fuel (gasoline, natural gas, or propane) unit that cost about $2.5K. It's got electric start, but I need to manually engage the key switch, flip switches in the basement, etc. to bring it on line. The generated AC power isn't very clean. I've seen frequency range from 60-64Hz and voltage from 115-130V. If I had an oscilloscope, I could see the ugliness of the "simulated sine wave" output. AC motors will sometimes vibrate on generator, lights flicker, A/V equipment exhibit audio noise, and cheap UPSes beep and refuse to charge their batteries or pass generator power through.

Real UPSes That Work


I can deal with most of the downsides to dirty, generated power. It's temporary; most things work, just not optimally. UPSes that discharge during the initial outage then refuse to charge or pass generator power are a huge annoyance. You'd have to shut down, unplug, and replug each piece of equipment on each UPS to switch to generator power, then repeat the process to switch back to the UPS! This somewhat defeats the purpose of having a UPS in the first place. It took me a few years to take the hint and solve the problem.

My solution was to purchase a double conversion UPS with extended runtime battery pack. My critical equipment plugs into this UPS and will continue to run on generator power. Non-critical equipment that would suffer from unexpected outages goes on cheap, line-interactive UPSes. This non-critical equipment either survives short outages or stays off until line power returns. I went with Tripp Lite instead of APC because the equivalent (SURTA series) APC UPS was literally $400+ more.

Data for a Case Study


So now that you've heard the rambling details of why and how I addressed my problem, here are the specifics from recent testing. This takes us to the title of this lengthy post. I spent the night with the UPS unplugged to see how long it would run my critical equipment. Much of the next day was then spent on generator to 1) ensure batteries would charge and equipment run, and 2) determine how long it would take batteries to charge from 10% to 100%.

Equipment consists of:
  • Comcast Business Class network gateway
  • D-Link DIR-825 running as dedicated firewall/router
  • Asus RT-N66U running as WiFi access point and network switch
  • Raspberry Pi running critical network services and monitoring
  • Thin Mini-ITX Intel "Ivy Bridge" PC running as DVR backend
  • HDHomeRun network-attached dual digital TV tuner
I don't have exact power consumption figures for the devices individually or all together. The UPS won't report loads under 100W, so I know the total is less than that. My back-of-envelope guesstimate is an average draw of ~60W. The runtime figures bear this out.

The equipment ran for just over 12 hours on battery. I was hoping for closer to 16 hours, but there are optimizations left to try. The UPS batteries charged well from generator while equipment continued to run. During the just over 5 hours it took the batteries to charge from 10% to 100%, there was one 19 second interval where input power went out of spec. The generator kicked up to 64Hz and 130V during this time and engine RPM and generator whine were obviously increased. I have no idea why. Anyway, I'm happy enough with these results.

Power Optimization Tips and Tricks


So, how to get from 12 hours to my goal of ~16 hours on battery? Tripp Lite has a nice calculator. Basically, I need to stay closer to 40W than 60W. To PC gamers that probably sounds impossibly low. Well, good thing I'm not gaming on batteries!
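
Back-of-envelope math supports this: 12 hours at ~60W implies roughly 12 × 60 = 720Wh of usable battery capacity, and 720Wh ÷ 16 hours ≈ 45W. So the calculator and my napkin agree that I need to shed 15-20W of average load.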

Most obviously, functions can sometimes be consolidated onto a single device. For instance, I'm running a separate firewall/router and WiFi access point. Most home users don't do that. Ah, but there's a method to my madness! During some extended power outages, cable internet will go out; not always at first, but eventually backup power runs out at the cable headend too. In these situations I want to be able to "shed" the load of my Comcast gateway and firewall/router. And sometimes I just want to be able to power cycle them without taking down my internal network.

So it's sometimes useful to group related services that fail together into power/availability groups. I have an "internet connectivity" group and a "home networking" group. Now, how to lower overall power use?

My PC-based DVR backend and TV tuners pull a decent amount of power. Yet when it comes down to it, DVRs only need to be powered up when they're being used.

Wake-on-LAN/Timer to the Rescue


Some PC-based equipment simply doesn't need to be fully powered up at all times. Modern PCs can generally be put to sleep (suspended to RAM for reduced power draw and quick resume) or hibernated (suspended to disk for no power draw and slower resume). Failing that, systems can be shut down and booted back up when needed. This is all possible because modern PCs can keep a trickle of power going to a clock and system/network controllers while the PC is still plugged in.

Think of this like your TV. When the TV is off, it's really in "standby". A sensor and controller are powered and waiting to detect your press of the "on" button on the TV remote.

Let's go back to my PC-based DVR example. If this system drew a lot of power, it might make sense to keep it powered off until it's needed to record or stream content. Since my particular system only pulls 15-45W, it's more convenient to keep it on and immediately available. That equation changes when I'm running from battery. This calls for a little automation:
  1. When line power is available, power on, stay on, and run normally. Done.
  2. When running from battery and the system is idle, set the wakeup timer to just before the next scheduled event. Go to sleep or hibernate.
  3. When the pre-set timer goes off, wake up if sleeping or hibernating.
  4. When receiving a magic network packet, wake up if sleeping or hibernating.
The "magic packet" allows another computer or device to wake or boot an otherwise unavailable system from a standby state on demand. There are a number of utilities available to send magic packets through a manual UI or scripted interface. Some systems also support the ability to wake on keyboard/mouse activity or selective USB activity. These are very cool and underutilized features outside the world of portable PCs.

Power Control Hacks


I also mentioned that I'm running a network-attached digital TV tuner. Logically, this is part of a "DVR" power/availability group. The tuner box only needs to be powered when the DVR backend is recording or streaming live TV. Unfortunately, the tuner box doesn't support power management. It's either plugged in and on, or unplugged and off. Some devices are like this.

A little research shows that the HDHomeRun tuner runs on 5V and under 2A. My PC-based DVR already contains cables that can supply the proper voltage and current! Let's say that I hacked a SATA power cable to provide 5V at 2A to a barrel connector that could be plugged into the tuner. Now (in theory) I have a tuner that can power off and on as the PC sleeps and wakes.

Software/Firmware Power Tuning


Intel, AMD, and the various ARM vendors have gotten pretty good at making the hardware and firmware recognize demand and selectively power up/down functional units and communications buses. This used to be done with software (if at all). Still, there are sometimes power saving drivers to install or driver options to tweak to save a little juice. A blunter approach is to disable (in the BIOS/firmware) or remove devices that aren't needed. Underclocking and/or undervolting CPUs, GPUs, or RAM is also an option.
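
On Linux, the CPU frequency governor is one of the easier knobs to check. A quick sketch using whichever tool your distro ships (cpufrequtils is the older package, cpupower its successor):

    # cpufrequtils style: scale frequency with demand
    cpufreq-set -g ondemand
    # cpupower style: prefer the lowest usable frequencies
    cpupower frequency-set -g powersave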

Consolidation and Virtualization


As I stated before, consolidation is sometimes detrimental to sensible grouping and availability of services. Often, though, it's brilliantly appropriate, and system and container virtualization can make it even more applicable. Let's say I want to add a software PBX like Asterisk to my mix of services. I could easily install this on the DVR system, preferably under virtualization to isolate it from the DVR system image. The PBX would go up and down as the DVR system wakes and sleeps during outages. That said, my voice-over-IP (VoIP) provider already forwards calls to my cell phone when my PBX is down. This makes the PBX a useful but not critical service. And there are potentially ways to proxy voice service and wake the PBX system on incoming calls.

Roll Credits


If you made it this far, I admire your fortitude. Believe me, I wasn't planning to write such a voluminous tome. As always, questions and comments are welcome via Google+.

Monday, July 29, 2013

Repurposing the HTPC

Several months ago, I built a slim form factor Windows 8 PC to act as a media streaming frontend. Notes here:
http://travesh.blogspot.com/2013/03/super-slim-intel-htpc-build.html
http://travesh.blogspot.com/2013/03/super-slim-htpc-followup.html

Total overkill as an HTPC for everything but games, yet I hadn't found a better solution to stream MythTV recordings, Netflix, Amazon Instant Video, etc. to a single frontend. Android solutions were flaky or slow. Netflix uses Microsoft Silverlight, so good luck getting that on Linux. After a while, I added Aereo and Plex to my streaming requirements. Windows 8 on a small PC can do all of this, but it can't do it like an appliance with a unified UI. I bought a Roku 3 and the PC sat for a bit.

Having torn down my big MythTV backend when the summer TV doldrums set in, I'd been starting to feel the pressure to put MythTV recording back into service. Aereo's cloud DVR has mostly worked for me, but not 100%. Fundamentally, their service is a legal and technological hack; I'll enjoy it for now, but can't totally trust it.

So the now unused PC frontend has become the new DVR backend. The SSD got replaced with a 500GB 2.5" 7200rpm drive for video storage, and Windows 8 got wiped in favor of Mythbuntu. My goal this time around is low power recording. When I lose electric service (as I often do), I want my DVR to keep recording on UPS and/or generator. There's also just the cool factor of the low power x86 and ARM trend. Everyone is starting to care at least a little about heat and power.

All said and done, the new DVR backend idles at 16W. That's insanely good! And this is Ivy Bridge, so who needs Haswell? And especially, who needs Atom? My old Atom 330 always idled at about 35W. The later Mini-ITX Atom boards were better, but never great. ARM isn't quite there as a DVR backend: disks over USB, 10/100 Ethernet on most boards, and weak transcoding speed and support. I was planning an elaborate sleep/wake configuration for the Myth backend, but I might not bother. Always-on is just easier.

I'm planning my next blog post about separating the DVR recording backend from transcoding, archival storage, and streaming. The slim form factor Mini-ITX Sandy, Ivy, and Haswell builds get a huge thumbs up from me as a near-ideal solution for DVR recording. And at 16W idle, you may want to run some other always-on services from this system as well. Power tuning is still in progress. How low can we go?

Saturday, June 1, 2013

Developments in Home Streaming

After 10+ years of building and running home theater PC boxes (HTPCs), cheap streaming appliance frontends may finally be good enough to present a variety of local and Internet content on 1080p displays. This blog post provides a brief summary of the hardware and software solutions I've recently abandoned, and some suggestions and hints toward what might approach video Shangri-La.

In mid-2012 I discontinued use of my TiVo HD in favor of a MythTV setup with HDHomeRun network attached tuners. For nearly a decade, I'd run TiVo series 1 and 3 (HD) boxes for my cable recording and living room display needs. The old TiVo was a life-changer. The original TiVo team built a brilliantly usable interface to fundamentally complex functionality. Unfortunately, TiVo has become something like the Palm Computing of DVRs. The current devices aren't that much more advanced than the original devices in terms of actual interface and practicality. It's as if TiVo laid off a brilliant team of engineers and failed to foster partnerships in the industry.

When my local cable provider switched to digital-only, I ditched the TiVo rather than fighting with CableCARD and re-committing to TiVo. I will not support the cable industry's lockdown of content by turning every DVR or cable box into an additional source of revenue for the cable monopoly. This is now the age of Ethernet and Internet streaming. I refuse to acknowledge my cable provider as anything but an Internet pipe and source of local broadcast programming. I realize that many get the extended or premium cable TV packages. I'm done with that. I'll get my HBO and Showtime series via DVDs or streaming services long after the programs initially air. Considering most networks' proclivities to cancel new programs, I'm willing to wait a couple years for those programs that pass the gauntlet, and receive them at a time-discounted price.

I've run local streaming server and client HTPCs for years. PCs, especially Linux and open source PCs, have offered the freedom to store and stream personally owned/managed content for years. Just as the TiVo was my DVR, my HTPCs happily served up my personal AV archive and DVDs to any display in the house.

This year, I've experimented with devices like the MiniX Neo X5 (Chinese, Android-based TV box) to act as cheap clients to attach to HDMI displays. I also built a new Windows 8 HTPC to access XBMC, MythTV, and streamed Netflix and Amazon content: http://travesh.blogspot.com/2013/03/super-slim-intel-htpc-build.html. In the end, neither Android nor Windows boxes are what I want attached to my displays. I want set-top boxes that work like appliances and have custom hardware and Linux underneath. I don't want to futz with custom Android firmwares and the intricacies of hardware accelerated playback, nor the complexities of a full Windows box with an awkward combination of Metro apps and some (annoyingly) browser accessed content. What I want, and what I suspect many want, is something akin to the old TiVo appliance that can seamlessly access both local media and current Internet streaming services. Nothing else will do.

This past week, I repurposed my long-serving AMD/Nvidia (MythTV) HTPC to be a network firewall/router and bastion host. I expect to repurpose my new Windows 8 HTPC as a network server this coming week. PCs are finally becoming overkill for driving 1080p displays; they're still too much like PCs and not enough like appliances.

My current preference for driving displays is the Roku 3. It works like an appliance for the sources I use: Netflix, Amazon, Aereo, and Plex. My Synology box now acts as a very adequate Plex server for my own content, and can be integrated to work with my MythTV backend. Netflix, Amazon, and Aereo can stream at 720p or 1080i/p. The Roku has a great appliance interface and a great remote control. I love being able to attach headphones to the Wi-Fi remote control itself for quiet late-night viewing.

In the end, here are my recommendations: a Roku 3 for each dumb display, a MythTV backend for cable-based content, a Plex server for your personal media archive, and portable devices to shore up any other areas. Aereo may be here for the long haul, or may be a flash in the pan. MythTV and ClearQAM or CableCARD also have an uncertain future. Transcode or pop your archive and TV content onto a Plex server to stand the test of time.

The world of broadcast and streamed video is changing rapidly. I generally trust the Roku (version 3) and Plex to stick around, though this may change. ;-/ I'll be right there to inform as changes occur. As always, I'd love to hear on G+ what's currently working for everyone else or what direction you're heading. Cable and Internet video is always a moving target. This will presumably settle down at some point, but we're in a period of disruption and business and legal change. I simply hope you AV enthusiasts out there are navigating a path that works for you and your family! Thanks for listening during these unsettled times!

Tuesday, May 28, 2013

Old New England Homes

[Veering off my normal technical-only discussion today due to impending hot weather forecast for my part of the world later this week.]

My home in New Hampshire was built around 1850 to replace the run-down old garrison just to the north. The story is that some salvaged materials (bricks, timbers, boards?) from the old garrison were used in the construction of the "new" farmhouse. The garrisons were fortified to repel Indian attacks and abductions in the area into the mid-1700s. My family roots in this area run deep, and numerous ancestors were killed or abducted, especially around the time of the Dover, NH Massacre of 1689. Abductees were typically taken north to Quebec and held for ransom. One of my ancestors was a "heroine" of the Dover Massacre; another declined to return to her abusive husband after abduction to Quebec. There's a lot of history here. Of course, documented historical roots don't run as deep here in the New World as in Eurasia. I recall staying at an old, old inn in Wales in 2006 where one corner of the room was obviously several inches higher than the other.

Like many old New England homes, my home has a borning room. This is a small room just off the kitchen/hearth where mothers would give birth, infants would be tended, and the sick and elderly would be provided solace. The hearth was the living center of the old New England homestead. This makes special sense given the history of the Little Ice Age in North America. My home, although not especially large for the time, has a vaulted brick arch in the basement to support the dutch oven and fireplaces on the first floor. My paternal grandparents acquired this home in a somewhat run-down and unimproved state (no electricity or plumbing) in the early 1940s. My 3rd great grandfather had lived in this same home when it was relatively new. Homes "in town" during the same period would have had better amenities, higher ceilings, grander staircases, etc. compared to rural farmhouses. On the plus side, I have almost 500 acres of conservation land in my back yard.

For the past couple years I've been sleeping in the small borning room off the long, farm-style kitchen. This room has space for a small bed against three walls, a nightstand, and a dresser. It also happens to be cold in the winter and hot in the summer due to a southeast location and little insulation. Many new homes aren't built to last, but at least they use sensible layouts and insulation. Surviving old homes were built with quality materials, yet impose compromises for modern living. With a warm weather forecast for later this week, I've decided to move to the upstairs bedroom. I've ordered a small window air conditioner to hopefully arrive and be installed before hot weather strikes on Thursday.

When my grandparents acquired the house, the upstairs was largely unfinished. My grandmother had most of the upstairs built out as a separate apartment some years after my grandfather passed away. The current tenant has been there for almost 30 years. I retain one bedroom at the top of the (steep) front stairs with no upstairs bathroom access. This certainly isn't the house as I want it to be. That said, I have a great tenant and will be gradually making practical improvements. For now, I need to figure out what should be moved to the upstairs bedroom to make it habitable. I have a bed, lamp, and nice closet but desperately need a bedside stand and dresser. Longer term, I'd love to figure out either a fix to the steepness of the stairs or a separate upstairs bathroom situation. It's all possible, but starts to run into real money. At some point, I'll run the cost-benefit analysis as to whether it's better to buy/build the home I want and let renters deal with the quirks of the old farmhouse.

Sunday, May 26, 2013

Cutting the Cable?

In this installment, I'll discuss the gradual progression that's been driving my brand of AV enthusiast away from the cable TV monopolies and onto the Internet. I'll also focus on Aereo's local broadcast DVR service in both a mini-review and in the larger landscape.

The more you tighten your grip, the more star systems will slip through your fingers.
- Princess Leia

This quote often springs to my (admittedly twisted) mind when discussing the lengths to which media distributors and cable monopolies such as Comcast, Time Warner, etc. will go to maintain control in a world of cheap recordable media and broadband Internet service. On the Internet, everything is just (potentially encrypted, unrecognizable) data. The media genie has long escaped the bottle. Somehow the RIAA and music distributors (MB/song) got the message, but the MPAA and cable companies (GB/video) are still refusing to give ground. I recall being young and poor and not wanting to pay for intellectual property I couldn't afford anyway; this is still the third world argument for rampant IP piracy. It's not a terrible argument. If duplication/distribution costs are negligible and legal distributors won't operate in your region, why should distributors be able to restrict your access to information and entertainment? Is infotainment for the producers or the audience? It takes both!

I now have the money to legally access the media I desire and the value system to resent overly restrictive controls and obsolete business models propped up by bad laws. Here is my manifesto.
  1. I will pay for the content I want if the terms are at all reasonable. I have bought (sometimes rebought) much of my digital music on iTunes and Amazon. I pay for Netflix and Amazon streaming.
  2. I am reluctant to pay a one-time fee to "own" or stream a piece of DRM restricted content.
  3. If the content owner will not sell into a region, I do not believe piracy in that region is wrong. Information wants to be free.
  4. I will not pay dozens of monthly fees to get the content I want. There is absolutely a place for content aggregators and streaming services.
  5. Whenever one distributor/studio "takes their toys and goes home" by establishing a separate service for their content, I will boycott that distributor. You chose to be difficult; there are other fish in the sea.
  6. I will not step back from the functionality of a TiVo to skip commercials or replay content. I will not pay money for locked-down content.
Wouldn't it be great if the industry could just work out a reasonable plan for both DRM-free video downloads and cross-platform video streaming? Kudos to Apple for absolutely forcing this on the music distributors back in the day and actually saving their revenue streams in a changing world.

Then there are the broadcast networks and cable companies still trying to eke out an existence from advertising, cable transmission fees, monopoly positions, and legal bullying. Since there's apparently no reasoning with all the broadcast networks at once unless you're a cable monopoly, we now have companies like Aereo. In case you haven't followed the drama, Aereo is trying to work within the law to provide each subscriber with their own tiny TV antenna and DVR streaming service in their local broadcast area. Cloud-based DVR streaming already has legal precedent behind it. Aereo's service is especially useful for those viewers on the outskirts of the metro broadcast area (like me) or with physical obstacles in the signal path (like me) or with intractable landlords. Aereo is specifically not for people who wish to receive extended or pay cable channels; it's for cable cutters for whom a physical antenna is difficult or impossible. You're paying Aereo a monthly fee to host your antenna and DVR. There's some interesting technology behind this, but the legality seems fairly straightforward. The lawsuits are flying and Aereo has generally been prevailing. Fox and CBS have threatened to pull their broadcast stations if Aereo ultimately prevails.

If Aereo is able to grow quickly enough to establish a significant subscriber base, they may be able to negotiate with the networks just as the cable monopolies do now. This is certainly my hope. This would also allow Aereo to reduce their resource requirements by storing a single (replicated for redundancy and performance) stream for each provider with whom they have an agreement. It's almost a given that broadcast TV will go away in the next decade, freeing up the spectrum for other uses. The FCC will need to change the regulatory landscape to provide some degree of free access to public news and alerts on TVoIP.

If I haven't mentioned much about the Aereo service itself, it's because it pretty much works as advertised. I'm on the $12/mo plan that provides two simultaneous channels and 60GB of DVR space. I'd love to know more of the technical details behind how they make their system work. There's apparently some serious transcoding going on at recording/viewing time. They're almost certainly not doing data deduplication for video storage; this would add a legal gray area, and they're absolutely trying to stay legal. I've tried viewing both on a laptop screen and across a room on a TV. Close viewing shows very obvious macroblocking artifacts, visible interlacing on some content, and horizontal line artifacts where 1080i content is being naively downscaled to 720p. At TV viewing distances, most issues become "good enough" for this non-videophile. Lack of macroblock dithering/blending in large areas of similar color is still visible. It may be possible to eliminate much of this by properly calibrating display brightness levels. I have not observed frame skipping or lengthy buffering problems.

Aereo is presenting a somewhat specialized solution. If you're tied to cable Internet, bundled basic cable may be a better deal. If you want cable/satellite-only channels, you'll need cable or satellite. I'm no longer willing to put a locked-down, slow, and awful cable box on each of my displays. The cable company has tightened their grip with encrypted digital content and threatened removal of unencrypted broadcast content (ClearQAM) formerly mandated by the FCC. The cable companies are trying to drag us back to the bad old days of the phone company ("Ma Bell") when everyone had to lease each and every phone from the monopoly provider. Grasping behavior like that has me slipping through their fingers.

Sunday, May 19, 2013

Overthinking Your Network

[In this installment, I discuss the overwhelming tyranny of choice faced by the home networking power user who is starting to consider enterprise approaches (VLANs, routing protocols, multi-WAN, redundancy, and failure bypass options) for what is fundamentally a home network.]

Most home Internet users have a DSL or cable modem and a Wi-Fi firewall/router. Power users may run third-party router firmware or a dedicated PC firewall/router alongside dynamic DNS registration and port forwarding for a few services. Many power users want to be able to securely connect to services on their home network from the outside. This may be the age of cloud computing, yet it's also the age of increased home automation. There are turnkey home automation systems that use cloud servers for control via mobile apps and browsers, there are DIY automation systems, and there probably won't be standards for home control, monitoring, and security via the Internet for years to come. Secure outside connection to one's home network is pretty much the hallmark of the power user.

I'm currently on a two year Comcast Business Class contract. This gives me five public static IP addresses, not including the Comcast gateway box itself. Mo' IPs, mo' problems. Having these separate external IPs gives me dreams of putting my outgoing home traffic on one, my VoIP PBX on another, a wireless guest network on a third, incoming services on a fourth, and admin/automation on a fifth. It's not like I need all these addresses, but I sure like the idea of them...

Then there's the issue of VLANs. All my network switches are "smart(ish)" and support VLANs:

  • 8 port GigE PoE switch
  • 8 port GigE smart switch
  • 24 port GigE smart switch #1
  • 24 port GigE smart switch #2
VoIP phone jacks will obviously be patched to the PoE switch. These ports will default to the "home networking" VLAN and can recognize phones to put them on a separate VoIP VLAN. I want secure Wi-Fi access to my home networking VLAN and separate unprotected guest Wi-Fi with captive portal. Guests would have to VPN in to the secure home network if necessary. I want some Internet services to be provided from server(s) in a DMZ, which could end up as one or more VLANs.
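
If any of this routing lands on a Linux box, the VLAN plumbing itself is simple. A sketch using iproute2, with a hypothetical VLAN ID 20 for VoIP:

    # Create a tagged interface for VLAN 20 on eth0 and address it
    ip link add link eth0 name eth0.20 type vlan id 20
    ip addr add 192.168.20.1/24 dev eth0.20
    ip link set eth0.20 up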

None of this really gets me to a network architecture yet, but I'm starting to converge on a set of rules or guiding principles:

  1. Expose as few boxes as possible on the public Internet and secure the hell out of those boxes. I'll probably end up with a single firewall/router handling all my public IPs. Providing multiple external points of intrusion to internal networks seems like a bad idea.
  2. Secure administrative access to all your equipment. Imagine the following scenario: an attacker gains access to a DMZ server, scans every possible IP on the local network, identifies switch hardware by MAC address, and connects to the admin interface of said hardware via the default user/password (or reconfigures a local interface onto the same subnet as the default admin IP and connects). Security by VLAN: gone.
  3. Use 1:1 NAT and/or PAT and private addressing for everything behind the Internet point of entry (see the iptables sketch after this list). This seems more flexible than a transparent/bridging firewall and could accommodate an external transition to IPv6 while continuing to support legacy IPv4 devices internally.
  4. Prefer VLAN separation to a single DMZ network to prevent one server acting as a point of intrusion from which to launch attacks on other servers. Some switch vendors have features that can provide this isolation without the hassle of VLANs; sadly, I'm not paying for Cisco.
  5. Don't NAT between internal networks. This destroys the audit trail and security of knowing exactly which internal client is connecting to your internal services.
  6. Keep internal and external equipment separate. Let's say your external firewall/router fails. Internal networking and VoIP calls should continue to function. Let's say an internal network switch fails. Guest Wi-Fi access to the Internet should continue to function.
  7. Carefully identify single points of failure and their importance. Have a plan for manual network reconfiguration in degraded mode and/or spare equipment or virtual machine instances to swap in.
  8. Prefer enterprise equipment at reliability choke points. A consumer Wi-Fi router might not be your best choice for your core or sole external firewall/router. Don't let your wired network go down when your Wi-Fi overheats and power cycles.
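
To illustrate rule 3, here's roughly what 1:1 NAT looks like on a Linux firewall using iptables. The addresses are placeholders (203.0.113.10 is a documentation IP standing in for one of my statics):

    # Map one public static IP 1:1 to an internal server
    iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 192.168.1.10
    iptables -t nat -A POSTROUTING -s 192.168.1.10 -j SNAT --to-source 203.0.113.10
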
A home network usually doesn't have the same luxuries as an enterprise network. We're not generally going to keep spare equipment lying around, and we don't have rapid response service contracts. Still, there are approaches to ease the pain. Consider buying two smaller switches over a single large one; how many ports do you really need at one time? Consider building virtual machine instances of critical services so you can spin them up and patch networking in when hardware fails.

As always, I welcome criticism and suggestions from real experts. I'm kind of trying to have a dialog with myself about how to build out my own network, but a dialog with others is even better!

Sunday, April 28, 2013

Home Networking for Enthusiasts

As summer approaches, many people are thinking about outside projects and planning for holidays. As an IT geek, I'm planning a more professional and capable home networking setup.

I live in a 160+ year old farmhouse. Electrical wiring upgrades have occurred as needed since the house was first fitted for electricity by my grandparents in the 1940s. When ownership passed to me in 2007, I couldn't even get insurance without upgrading the old fuse boxes (separate upstairs and downstairs apartments) to modern breaker boxes. I'd think they'd be more concerned about old wiring. Thankfully, it's all post-1940.

My MPOE (telco, cable Internet, and power) is in the southwest corner of the basement. The basement itself has a poured concrete floor, mostly reinforced walls (via concrete block) and both granite and old clay brick features. The "ceiling" is mostly less than 6' with fixtures that extend below that level. This is an old "working" basement rather than a finished one. There is a drain at the south wall and up to a half inch of water on the floor during a wet spring. This spring has been quite dry, thus no water on the floor.

High airflow computer power supplies hate humidity. I've stupidly burned through a couple discovering this. Low airflow and fanless systems seem to accumulate enough dry heat to survive. Since the basement is both my MPOE and an otherwise naturally cooled, out of the way space, this seems like the place to put servers and networking equipment. The humidity calls for a basement dehumidifier. Even without a server rack, the mold and deteriorating brick situation likely calls for one. I can set humidity at a reasonable level and run the collected water into my existing drain.

Whether your "server closet" is a basement, closet, attic or something else, consistent (lowish) temperatures and low humidity are a necessity. I'm somewhat jealous of my Northern California friends; even during the winter wet season, a section of an attached garage probably works just fine as long as heat can be exhausted upwards or out.

I have something of a fetish for rackmount equipment. For many, a simple 12U wall rack would work just fine. For serious geeks, nothing less than 42U will do. I'm somewhere in between, let's say 25U. I have an arch space under my chimney that is centrally located, not far from my MPOE, and otherwise unused. We'll see whether I can use that or need to run exhaust heat to an exterior window. It might actually be easier to run exhaust heat up the chimney along with the furnace exhaust.

I started out thinking that my Asus RT-N66U might be capable enough to act as my main firewall/router. Now I'm thinking I need a more elaborate setup:
Cable Internet Gateway => External Firewall/Router => DMZ => Internal Firewall/Router
Equipment is not a problem; I have a ton of old, unused equipment perfectly suited for routing and filtering packets. Cases are a bit of a problem. I strongly prefer rackmount to a bunch of generic PC and consumer appliance cases. I want my rack to be beautiful; call it what it is: a pointless fetish. I demand satisfaction. ;-P

Rackmount cases are seriously expensive (quality 1U => $200 USD), presumably because they're specialized, enterprise equipment. I'm currently looking into repurposing some old 1U Cobalt RaQ3 cases to hold mini-ITX boards. Christian Vogel has kindly shared information about the front panel pinout of these cases on his blog. This is very cool, as I've been wanting to get into Arduino development and interfacing for a while anyway.

As always, feedback here or on G+ is welcome. Let's see some pictures of your setups. How are you managing environmental factors for your home equipment? What are your tips for those inclined to be their own networking and server gurus at home? As a side note, I've been gradually moving critical services to the AWS cloud, yet I still need decent networking, security, storage, media servers and playback, and home automation at home. I suspect others are in the same boat. Let's hear about it!

Sunday, April 7, 2013

ZFS Storage Conversion Postmortem

Until recently, my home data storage "solution" consisted of a single PC running Arch Linux and 3x largish SATA drives running JFS in a JBOD. The Linux box also runs my MythTV backend, and the whole setup evolved a bit organically. I'm willing to lose the TV recordings if something fails. CD and DVD rips, photos, and documents are replicated between the Linux server, Windows desktop, and Mac portable in a very manual and haphazard way. I've been lucky not to lose anything important.

With my (personal) office equipment now moved back home from my former telework space, it's time to get my data better organized and replicated. I'm documenting my specific hardware and software configuration here, mostly as a log for future me. There may be generally useful information hidden among the specifics; if so, I apologize in advance for making you sift for it.

Newly available equipment:

  • 9x 1TB WD Black drives purchased between late-'08 and early-'10 for an aborted storage build
  • 2x 3TB WD Red drives purchased a couple months ago
  • Synology DS1511+ 5x bay NAS chassis
  • Dell PowerEdge T610 server that had been used for work-related virtualization

The WD Blacks got some use in the NAS and as temporary backup drives; some just sat in a drawer for a couple years. :-/

The Dell T610 is total overkill for most of what I do now.

  • 2x Intel Xeon E5649 processors at 2.53GHz, yielding 12 cores
  • 48GB (6x 8GB) ECC DDR3 RDIMMs
  • 2x Broadcom NetXtreme II GigE NICs
  • 2x Intel ET GigE NICs
  • SAS 6/iR (LSI 1068e) storage controller
  • 2x 250GB Samsung drives in a hardware RAID1 system mirror
  • 6x drive blanks; Dell likes to sell high-margin drives with their caddies
So I wanted to turn the Dell into a backup storage and development virtualization box that is used mostly on an as-needed basis. I went into this plan without much of a sense of what system power usage would be, just that it would be high compared to my other systems.

Dell T610 disrobed w/ caddies and drives
The Dell was initially running Debian 6. My actual work went something like this:

  1. Take the faceplate off the Dell to discover 6x drive blanks. Um, yay.
  2. Discover that drives will function without caddies as long as I don't jostle anything.
  3. Try 3TB WD Red in the Dell only to discover the controller maxes out at 2TB.
  4. Decide to add 6x 1TB WD Blacks to the Dell, since that should work.
  5. 3x of those WD Blacks were running a RAID5 in my Synology NAS.
  6. Place order for 8x cheap drive caddies before things go much further.
  7. Start rsync backup of Synology NAS to my old MythTV server.
  8. Start serious burn-in test of 3x unused WD Blacks in the Dell.
  9. Dissolve RAID and yank 3x WD Blacks from the NAS after rsync completes.
  10. Hey, those caddies got here fast!
  11. Install 2x 3TB WD Reds in the NAS as a pure Linux RAID1.
  12. Start rsync restore from the MythTV server to the Synology NAS.
  13. Get 6x WD Blacks into Dell caddies and installed.
  14. Start less serious burn-in of the 3x WD Blacks that had been in the NAS.
  15. Install Debian Wheezy onto the Dell system drive mirror /dev/sdg.
  16. Broadcom bnx2 requires non-free firmware; switch to Intel NIC for install.
  17. Figure out how to make grub write the boot record to sdg not sda.
  18. Install modules and tools from ZFS on Linux; that was easy!
  19. Struggle to zero out the WD Blacks w/ mdadm, dd, and parted.
  20. Create a RAID-Z1 pool with 5x WD Blacks and one warm spare (see the sketch after this list).
  21. Install cpufrequtils to reduce system power usage.
  22. Begin testing the hell out of the ZFS pool.
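
For the record, step 20 boils down to a couple of commands. A sketch with illustrative device names (in practice, /dev/disk/by-id paths are safer); note that zpool's spare vdev technically gives you a hot spare:

    # Five drives in RAID-Z1 plus a spare, then a filesystem for backups
    zpool create tank raidz1 sda sdb sdc sdd sde spare sdf
    zfs create tank/backup
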
Idle power usage with 8x drives is 173W. I still want to abuse ZFS a bit more by running through simulated failure scenarios. I've used ZFS on Solaris and FreeBSD, but never on Linux before this. So far so good. There's still work to do setting up automated backups between the Dell and the NAS and generally getting my data more organized. At least there's now a framework in which that can happen.

Wednesday, April 3, 2013

Home Storage for Enthusiasts

The cloud is here, but do you really trust all your CD, DVD, and Blu-ray rips and personal photos and videos to the cloud? This blog post is about home storage options, mostly focusing on Linux-based appliances and small servers. Windows Home Server (WHS) is also a rather cool technology, although its future and forward migration path is by no means certain. Being a Linux guy, I'll focus on Linux.

For those who don't want to fiddle, I highly recommend a home NAS appliance. I have a Synology DS1511+, although there are other great options as well. Be prepared to spend some money for the sake of convenience. This isn't as cheap and dirty as adding drives and controllers to an old PC. +Piaw Na reminds us to consider what happens when your NAS appliance fails. Many NAS vendors use proprietary storage mechanisms that allow mixing disks of various sizes. Consider a NAS configuration that supports stock Linux software RAID and ext3/4 filesystems. You or your local Linux guru can then access your data if your NAS appliance fails.

Cheap and dirty is kind of where I'd like to focus. This means x86 hardware, Linux, near-line (or prosumer) SATA drives, and software-based data redundancy. Proper RAID disk controllers have battery backup for recovery after power failure and are prohibitively expensive for home use. Software RAID5 may not achieve trustworthy performance. The first rule of cheap and dirty SATA drives is that manufacturers will cheat to obtain better benchmark performance. Drives will cache writes, violate commands to sync ordered writes to disk, and fail very ungracefully on system or power failure. Your drives are optimizing for the common case where nothing ever fails. That's well and good until something fails...

Here are some tips for building that Linux-based file server. Thanks to +Yan-Fa Li for his additional pointers and reminders.

  1. Consider systems that support ECC (error correcting) memory. Some consumer AMD processors and boards used to support this. Intel Xeon systems generally support ECC, as do Core i3-32XX processors on certain motherboards. Data and filesystem metadata that is corrupted in memory never makes it onto your disks correctly.
  2. Disable write caching on your disk drives.
    hdparm -W0 /dev/sdX
    You'll need to automate this to run at boot time (see the sketch after this list).
  3. Exercise your drives before adding them to your storage pool.
    badblocks -v -w -s -c 1024 /dev/sdX
    smartctl -t long /dev/sdX

    Drives seem to follow a reliability bathtub curve. New and old drives seem more prone to failure. Check your drives before relying upon them.
  4. Consider your recovery requirements. What data can you afford to lose? What data can you reconstruct (re-rip optical media, re-download, etc.)? What data can suffer some temporary (hours, days) unavailability? What data must always be available?
  5. Enable ERC/TLER error recovery timeouts where possible when using multi-drive arrays. Consider near-line storage quality drives or better (compared to consumer drives) when building your storage arrays. The current WD Red series drives are practically aimed at the prosumer and small business mass storage markets.
  6. Remember that RAID0 isn't RAID; it's not redundant. RAID0 is only for transient or reconstructable data.
  7. RAID1 and variants (10, 1E) are a great choice when you can afford the drives and loss of capacity. Performance and reliability can be quite good. You're throwing drives at the problem and reaping the rewards.
  8. Software RAID5 is scarier than you might think. Data and reconstruction information is scattered across all drives in the array. How much do you trust your drives, controllers, and system/power stability to keep this all in sync? Putting most filesystems atop an unreliable RAID5 is a recipe for disaster. Battery-backed hardware RAID5/6 has its place. I'm reasonably convinced that software RAID5/6 doesn't. Beware the write hole.
  9. ZFS is cool on Solaris and FreeBSD. It's now even cooler with ZFS on Linux. ZFS RAID-Z can be a reasonable and reliable software replacement for hardware RAID5/6. You're not going to see blazing speeds, but you're getting end-to-end checksumming. If you want blazing speeds, get an SSD. +Yan-Fa Li mentioned that he gets 300MiB/s from his 6 drive ZFS setup, enough to saturate gigabit Ethernet. Maybe leave the competition to the folks over on [H]ard|OCP and consider your specific use cases.
  10. Btrfs is the next great Linux filesystem that somehow never arrives. If ZFS was licensed for inclusion in the Linux kernel proper, btrfs might just fade away. I've used both, and btrfs doesn't even seem to aspire to be as good as ZFS already is. Sorry!
  11. Runtime redundancy is no substitute for backups. What happens if an entire storage system is lost or fried? Consider maintaining a limited backup storage system on site and copying treasured data to the cloud. Companies historically store offsite backups in case of disaster. Cloud storage can provide offsite backup insurance for the rest of us.
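
As promised in tip 2, one simple way to automate hdparm at boot is a shell loop in a boot script (the script location varies by distro; the device glob is illustrative):

    # e.g. in /etc/rc.local: disable write caching on all data drives
    for dev in /dev/sd[a-f]; do
        hdparm -W0 "$dev"
    done
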
These tips are something of a work in progress as I build my home storage array and backup. Follow the discussion on Google+.

Saturday, March 30, 2013

Teleworking and Small Office Internet

Today marks exactly one year since I left my job with Azul Systems. Azul was always good to me; former and current, they're a great bunch. One of the perks Azul allowed me was the ability to work (cross-continentally) remotely for four of my eight years with them.

As a teleworker, I soon discovered that a home office is not the ideal workplace for everyone. Even without spouse/kids, I'd find myself breaking up my day with various non-work errands and chores. This can lead to a pattern of "making up work time on nights and weekends" that may or may not actually happen. The disciplined approach of working a mostly contiguous day can become a slippery slope to cabin fever. In 2010, a trusted friend approached me with the idea of sharing costs on a telework space. I leapt (inside joke) at the chance.

This is Comcast country, so we split the bill on business class service with static IPs for remote access to our office network. I installed a small Intel Atom-based firewall/router running pfSense to handle network chores. Over time, I added static routes and dnsmasq config to allow persistent access to our respective companies' networks via Linux-based VPN routers running under virtualization.
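
The dnsmasq side of that is just a line per internal domain, plus a static route toward the VPN VM. A sketch with hypothetical names and addresses:

    # Forward a company's internal DNS zone to its VPN router VM
    echo 'server=/corp-a.example/192.168.10.2' >> /etc/dnsmasq.d/vpn.conf
    # Static route so office clients reach company A's network via the VPN VM
    ip route add 10.10.0.0/16 via 192.168.10.2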

Alas, things change and paying for an office no longer makes sense. Being under contract with Comcast, I decided to move the business class service from office to home. The tech did the install yesterday, and I got my router working with the Comcast gateway today. So right now I have both residential and business networking.

MacBook Pro => wireless router => Comcast residential modem => Internet

MacBook Pro => wireless router => Comcast business class gateway => Internet

I'm still weighing whether I'll cancel my residential service entirely or keep it just for TV. Bundled pricing being what it is, Comcast seems to want me to cancel. Maybe (against hope) I can talk to a Comcast CSR who appreciates my dilemma.

As for my own networking setup, wireless routers are getting really powerful, and pfSense is overkill for most home networks. This means out with the pfSense box and in with the ASUS RT-N66U wireless router running Tomato Shibby firmware. The N66U is what I would label prosumer quality, while the Tomato firmware makes it suitable for small office setups. Having used various stock and open firmwares over the years, this really is a thing of beauty when running on a full-featured router.

Sunday, March 24, 2013

Optimizing EDC for Spring

As winter becomes spring in the northern hemisphere, a person's mind turns to... optimizing one's set of everyday carry (EDC) items. My jacket pockets have become heavy with items not worthy of a shorts, trousers, or shirt pocket in warmer weather. Depending on one's profession, this list of items will vary. I'm a homeowner and IT geek in a past life, so my EDC layout reflects that. For some, EDC implies self-defense; that seems overly specific to me. I'm not fearful of attacks, but just want to be generally prepared for the tasks I may face on any given day. That said, I think nearly everyone can benefit from a good knife.

I used to carry a cheap jackknife until 9/11 permanently ejected that knife from my keychain. Abandoning the knife was a critical step to becoming pickier about knives as tools. I now have many quality folding knives, but I keep coming back to Spyderco knives for simplicity and quality for everyday use. I've pretty much settled on the Sage 2 (above in picture) and Ladybug 3 Hawkbill Salt (below in picture) for my EDC layout. The hawkbill is package opening perfection that can go on a keychain. The Sage 2 is a great, simple slicer that rivals knives over $300. Get informed and a quality blade will serve you well.


I also recommend a good LED flashlight bright enough to illuminate a walking path on a dark night. A more common use for me ends up being illuminating the labeling on computer jumpers and connectors. My 40+ year old eyes aren't what they used to be, and more light really helps. I've been happy enough with a Fenix E05 on my keychain. In retrospect, I'd choose a light that can go on a keychain ring and still stand on its base to illuminate a small room or space.

I used to carry a wallet full of cards, paper money, and cruft. Some of the cards have been transferred to CardStar or other apps. Thankfully, my new debit card no longer has embossed numbers; it fits so much better now. I somewhat envy those in countries and cities that have made the transition to better e-payment; no such luck here yet. I've been experimenting with a card wallet and separate paper money clip for the last year plus. Alas, I think I'm heading back to a unified, yet thin, nylon wallet. I'm not a vegan, but leather is for murderers and old-school yuppies. ;-)

Living in a semi-rural area, I have both a car and truck. The car has an electronic keyfob with a hidden, internal physical key. The truck is older and has a separate key and fob on a ring. Future and some current vehicles sort this out with app/Bluetooth-proximity entry and remote start. I want! Badly enough that I may look into third-party remote unlock and starting systems. It's borderline stupid that one can't unlock and start one's car from one's phone in this modern age!

I should be carrying a USB stick, but haven't quite found what I want. Most of us probably have several old, small, semi-useless USB sticks. I want a single USB3 multi-stick (switchable boot partition) for modern use. Any recommendations? Eventually, smartphones will render temporary use of USB sticks obsolete.

Last, but not least, we have the smartphone. I started out with an original iPhone and upgraded every other release: iPhone, iPhone 3GS, iPhone 4S. Now I'm itching for a Nexus 4 or HTC One. The Nexus 7 was a cheap gateway device for me: it weaned me off the Apple ecosystem and onto the Android ecosystem and Google services. Despite the recent Google Reader flak, Google is now giving me the services and features I want more than Apple is. Recent iTunes changes also seem like a disincentive to stay with Apple. Samsung nailed it in their anti-Apple commercials from a few months ago: Apple has become the platform of choice for kids and grandparents! Don't get me wrong, I've been back with Apple since 2003 and still love my late-2008 MacBook Pro. Apple abandoned the enthusiasts, not the other way around!

I'm always soliciting input on the little gadgets that others out there find indispensable. What are the critical items or philosophies behind your EDC layout?

Sunday, March 17, 2013

Super Slim HTPC Followup

Now that the new HTPC has been "in production" in the bedroom for a couple of days, I have a few notes and comments for those considering a similar build.

SilverStone PT12B case

  • This case is functional, minimal and very attractive for HTPC use.
  • Ease of system assembly is the best of any case... ever! That's really a compliment for Mini-ITX.
  • The blue power LED is very bright and blinks when the system is in standby. This is enough to be very distracting in a dark room. I will absolutely end up disconnecting or putting tape over the LED.
  • The case is not overly sturdy, especially in the top center. I wanted to put my display (24" monitor) stand on top of the case, but don't trust it to support the weight. Putting an old standalone DVD player atop the case and the monitor stand on that works fine; the DVD player distributes the weight to the corners of the PT12B. Some larger cases include a metal cross-brace and an additional rear-center leg to accommodate weight. Honestly, I'm happy enough not to deal with an annoying cross-brace.
  • There is no provision for a horizontal expansion card. I don't see this as a big loss for HTPC in a slim chassis. Mini PCIe and USB provide decent enough expansion capabilities.
  • Cooling holes over the CPU are a nice touch for passively cooled (Atom) or very low-profile HSF systems. I could maybe see putting my old 35W Core i3-2120T in this case with a low-profile HSF.
  • It's not clear how efficiently cooled this case really is, though it does specifically accommodate the Intel HTS1155LP cooler. I suspect airflow could be optimized to better cool the motherboard, RAM, and SSD in addition to the CPU. That said, I'm not having problems at heavy load.

Intel HTS1155LP cooler

  • This is better quality than Intel's cheap OEM HSF units.
  • Thermal paste is separate and not pre-applied. I used Arctic Silver 5 instead. Seems like a plus not to have to scrape pre-applied thermal paste off the heat transfer pad.
  • The plastic under-motherboard support and screw attachment seem like a plus compared to Intel's push pins with twist release. That said, how is one to know how tightly to torque the screws?
  • The blower is quieter than expected. This system is installed at the foot of my bed, and I can't hear it above ambient noise while playing video. It also didn't seem loud during games testing. I almost want it to be louder (higher RPM) under significant load.
  • Exhaust air is slightly warm. Compared to my desktop gaming PC, I consider this a plus: at least I know heat is actually being exhausted from the case.

Intel DQ77KB (Ivy Bridge) motherboard

  • Intel also offers the cheaper and older DH61AG board, which would be perfectly fine for most HTPC builds.
  • DQ77KB provides SATA3, plentiful USB3, dual Ethernet, Mini-TOSLINK (via green audio out port), and DDR3 1600 RAM support.
  • PCIe x4 is wasted in a super slim case. How useful is PCIe x4 on this board anyway? You'd want x16 for video; I suppose x4 is useful for extra networking or storage controllers. Honestly, this seems like a leftover: we have four PCIe lanes left over, so here they are!
  • Full- and short-length Mini PCIe slots provide good internal expansion for mSATA and Wi-Fi. The screws in the Mini PCIe standoffs can come torqued down a bit too tightly. Is this the new badly designed frustration for small PC builders? I'm tempted to buy a separate set of standoffs with screws to replace the half-faulty included ones!
  • The included mSATA port is only 3Gb/s, while my mSATA SSD supports 6Gb/s. For shame! The board also has two regular 6Gb/s and two 3Gb/s SATA ports. So my secondary drive gets 6Gb/s while the mSATA primary is stuck at 3Gb/s! (A quick way to check the negotiated link speeds is sketched just after this list.)
  • I'm basically whining about a perfectly good low-profile motherboard. My complaints are either unnecessary optimization or trying to make this board into something it was never meant to be.
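
As promised above, here's a rough way to confirm which link speed each SATA port actually negotiated. This is a Linux-side sketch (run from a live USB, for example, since this box runs Windows); the sysfs paths are standard, but I'd treat attribute availability as kernel-dependent:

    # Report the negotiated SATA link speed for each ATA link via sysfs.
    # Assumes a Linux kernel recent enough to expose /sys/class/ata_link.
    import glob
    import os

    for link in sorted(glob.glob("/sys/class/ata_link/link*")):
        try:
            with open(os.path.join(link, "sata_spd")) as f:
                spd = f.read().strip()  # e.g. "3.0 Gbps" or "6.0 Gbps"
        except IOError:
            continue  # attribute missing on this link; skip it
        print("%s: %s" % (os.path.basename(link), spd or "<not connected>"))

If the mSATA drive reports "3.0 Gbps" here, you're seeing the board's limitation rather than the drive's.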

Intel Core i3-3225 3.3GHz dual-core CPU

  • Intel HD 4000 graphics are basically a 720p (not 1080p+) solution for gaming.
  • This is a very capable CPU. In retrospect, I shouldn't have tried for gaming and could have saved myself some money.
  • It would be more tempting to downsize if these Socket 1155 Core i3 processors weren't all similarly rated at 55W TDP. I feel like the Sandy/Ivy Bridge T (low-power) processors are priced a bit dear. The thing is, all of Intel's current processors idle very efficiently these days.

Antec SNP90 slim notebook power adapter

  • This power brick works great at full CPU and graphics load and is quite small compared to most power bricks.
  • Highly recommended so far!

Windows 8

  • The initial install went very smoothly.
  • Yeah, it's Windows 8 with all the desktop annoyances one would expect.
  • Surprisingly, I'm actually glad I went with 8 for an HTPC build. The native Windows 8 Netflix app is a nice advantage for my setup.
  • XBMC with cmyth PVR support works as well as on any other platform (Linux) I've tried.
  • As much as I'm a Linux and not Windows guy, Windows 8 makes a very usable HTPC appliance.

Conclusion

Having spent time mucking about with GNU/Linux PCs and Android/Linux devices to get a decent HTPC experience, this is the first time I feel like I've achieved it. This system wasn't cheap, although it could have been much cheaper and still gotten the job done. Someday ARM-based streaming devices will probably provide an equivalent or better experience, but we seem to be a few years away from that still. I'm very happy with this build, but would probably be nearly as happy with a downspecced DH61AG, cheaper Sandy Bridge, and 4GB RAM build. ARM-based Android media devices may be several times cheaper, but they don't get me where I need to go yet.

Supplication

I'd desperately like to know what hardware configurations others are running to meet their media streaming needs. I'm tempted to maintain a guide to currently acceptable HTPC builds and configurations. For HTPC, it seems that savvy geeks would prefer to run the cheapest, friendliest (wife acceptance factor, etc.) systems that will do the job. For some that's a (hacked) Apple TV; for others it's TiVo plus a separate (expert mode) HTPC, Android devices, other media boxes, a friendly HTPC, and the list goes on. We're in a frustrating interim period between conventional broadcast/cable TV and pure Internet IPTV, and many of us want to keep on top of the advancements. Thanks in advance for your help. See you all on my blog, your blogs, the message boards, and Google+!

Friday, March 15, 2013

Super Slim Intel HTPC Build

After messing around with massive-cased x86 home theater PCs and Android media devices that aren't quite there yet, the goal was to assemble a slim, x86-based HTPC that is overkill for everything but gaming. This build is specifically designed around the low-profile Mini-ITX form factor as adopted by Intel and SilverStone.
  • Case: SilverStone PT12B super slim Mini-ITX
  • Motherboard: Intel DQ77KB low profile Mini-ITX
  • CPU: Intel Core i3-3225 Ivy Bridge 3.3GHz dual-core
  • Cooling: Intel HTS1155LP CPU cooler
  • Memory: Kingston KHX1600C9S3P1K2/8G DDR3 1600 SO-DIMM
  • Primary Storage: Mushkin MKNSSDAT120GB-DX mSATA
  • Secondary Storage: OCZ Agility 3 60GB SSD
  • Power Supply: Antec SNP90 slim notebook power adapter
Let's start with the SilverStone PT12B case, because that's where I started with this build. One look at this case and I knew this was the form factor I wanted.

I'm an old Mini-ITX guy, going back to the ancient VIA boards. I first became aware of low-profile Mini-ITX with Intel's Johnstown Atom board. Johnstown in an M350 case was tiny server perfection. I'm so glad Intel has taken this form factor to the next level with Socket 1155 support.

The SilverStone case pictured above is really small and cleanly designed. The case front, from left to right, is: air intake, power switch, reset switch, power LED, HDD LED, slim DVD bay. There's no front USB, audio, FireWire, or labeling. Clean is how I like it, and that's reflected inside the case as well. Power comes from an external 19V DC brick that plugs into the back of the motherboard; there's no internal power supply, only the brick and DC-to-DC conversion on the motherboard itself. The left side of the case is specifically designed for Intel's HTS1155LP heat pipe, heat sink, and blower assembly. This seems to cool my 55W TDP Core i3 well enough; I don't think I'd try a >65W TDP CPU in this form factor. The Antec 90W external power brick is tiny and works great.

See how tiny and clean the motherboard with CPU and RAM look in this already very small case.


I can hardly imagine an easier Mini-ITX build or, indeed, any PC build. The Intel CPU cooling solution has a back plate that needs to go under the motherboard. The Intel-supplied blower fan mounting screws don't fit the SilverStone holes; luckily, SilverStone supplies smaller screws that work just fine. You will definitely want a tiny Phillips-head screwdriver. Below are pictures of the installed cooler and fully cabled system. This is without the mSATA drive, as its mounting screws come pre-installed and require (yet again) a tiny Phillips screwdriver I had to retrieve from elsewhere.

Installing Windows 8 via external USB DVD drive went flawlessly. Linux is my true love, but this system will be Windows for maximum compatibility with Flash and Silverlight video. I also run a MythTV and video storage backend on Linux, so I'm expecting this box to spend a lot of time in XBMC.


This is my first experience with Windows 8. The install was very quick and smooth. Setting up my secondary SSD, however, was infuriating because the management apps are hidden. Searching for "disk" helped not at all. One must simply press Windows-R and enter "diskmgmt.msc". Easy! ;-P Microsoft has both dumbed things down and made things less intuitive with the separate Start and Desktop modes and hot corners. Do I miss the Start menu even though I used to hate it? Hell yes! The sad part is, I feel it would have been almost trivial for Microsoft to produce a touch-friendly but not desktop-averse shell. Hot corners are an "expert" feature that has been forced on everyone as the default.
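
For fellow sufferers, here's a tongue-in-cheek workaround: skip the Start screen entirely and launch the hidden consoles directly. A small Python sketch, assuming a stock Windows 8 install (the snap-in names below are the standard MMC consoles):

    # Launch Windows management consoles directly instead of hunting
    # through the Start screen. mmc.exe lives in System32 and is on PATH.
    import subprocess

    CONSOLES = {
        "disk": "diskmgmt.msc",      # Disk Management
        "devices": "devmgmt.msc",    # Device Manager
        "services": "services.msc",  # Services
    }

    subprocess.call(["mmc", CONSOLES["disk"]])

Of course, Windows-R plus the .msc name does the same thing; the point is that none of this should require memorized incantations.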

This is also my first experience actually using Intel HD graphics for anything at all intensive. My CPU features the more capable HD 4000. Newer games like Borderlands 2 are borderline playable at lower-quality 720p. Older games like L4D2 are quite playable at 720p. Basically, high-end Intel graphics are a 720p, low-to-medium-quality gaming solution. This machine will spend most of its life streaming video at 1080p (or less), and for that it should work splendidly. It's an expensive solution for that job; a lower-spec CPU and smaller SSD would work fine for pure HTPC use.