Linksys WRT1200AC with OpenWRT/LEDE

I use OpenWRT/LEDE running on two Linksys WRT1200AC wireless routers as the network infrastructure of the house.

One WRT1200AC acts as the Internet gateway. It is configured to break the network into three separate subnets, two of which can route to each other. The third subnet is a “restricted” network that has Internet access, but is not allowed access to the other subnets. It also provides WiFi access for the west half of the house.
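The restricted-subnet behavior could be expressed in OpenWRT's /etc/config/firewall roughly as follows. This is an illustrative sketch, not my actual configuration; the zone and network names are placeholders:

```
# Hypothetical /etc/config/firewall fragment. The "restricted" zone is
# allowed to forward to the WAN, but no forwarding rule exists toward
# the other zones, so it cannot reach the main or lab subnets.

config zone
        option name     'restricted'
        option network  'restricted'
        option input    'REJECT'
        option output   'ACCEPT'
        option forward  'REJECT'

config forwarding
        option src      'restricted'
        option dest     'wan'
```

Because OpenWRT only forwards traffic between zones with an explicit forwarding section, omitting restricted-to-LAN forwarding is what enforces the isolation.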

The second WRT1200AC acts as a WiFi extender for the east half of the house, plus wired access for a few devices that require a wired connection.

    Future Plans:
  • Create a second “guest” WiFi SSID on the “restricted” network to prevent guest devices from having access to the devices on the main and lab subnets
  • Configure both WRT1200AC devices to use VRRP as a fault-tolerant access point in case of a router malfunction or (more likely) outage due to router upgrade or other maintenance

Consul Health Checks

I use Consul as a method of monitoring the health of the various devices and services.

The consul-alerts program posts a message to a monitoring channel in my family’s Slack workspace within two minutes of the failure or restoration of any node or service. This alerting mechanism allows me to warn family members that a service they are trying to use is not working properly and, when possible, log in to triage and fix whatever has failed.
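For reference, a Consul service registration with an HTTP health check looks roughly like the following. The service name, port, and endpoint here are illustrative, not my actual definitions:

```
{
  "service": {
    "name": "owncloud",
    "port": 443,
    "check": {
      "http": "https://localhost/status.php",
      "interval": "30s",
      "timeout": "5s"
    }
  }
}
```

Consul runs the check on the given interval and flags the service as critical on repeated failures, which is what consul-alerts watches for.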

    Future Plans:
  • Configure collectd and Kibana to record historical data for system load and other health metrics
  • Configure the OpenWRT-based router to use the local Consul instance’s DNS server so that Consul-registered nodes and services are more easily reachable from nodes that are not running a Consul agent
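The DNS plan above could likely be handled with dnsmasq’s per-domain server option in OpenWRT’s /etc/config/dhcp; a sketch, where the Consul agent’s address is a placeholder:

```
config dnsmasq
        # Forward lookups under the .consul domain to the local Consul
        # agent's DNS interface (hypothetical address, default port 8600)
        list server '/consul/192.168.1.10#8600'
```

If DNS rebind protection is enabled on the router, the consul domain may also need to be whitelisted for the answers to get through.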


ownCloud File Sharing

I use ownCloud to automatically upload photos from my family’s phones as well as share documents across family members’ accounts.

    Future Plans:
  • Upgrade to Nextcloud
  • Add collaborative support for LibreOffice documents
  • Run ownCloud/Nextcloud on a separate VM rather than on my main Fedora server

Jenkins CI/CD

I have Jenkins build jobs configured for various software projects. These jobs automatically merge Git branches to master, and then build and test the merged code within a Docker container. If all tests pass, the jobs deploy the generated RPM packages for i386, x86_64, and arm64 hardware architectures to my Fedora Yum/DNF repository and then push the merged changes to the upstream Git master for that project.

    Future Plans:
  • Enable the pipeline functionality provided by Jenkins 2.x to split the build, package, and test phases into separate jobs.
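A declarative Jenkinsfile along these lines could split the phases into stages. This is a sketch under assumptions: the build image, commands, and repository host are placeholders, not my actual jobs:

```
// Hypothetical Jenkinsfile sketch for the merge/build/test/deploy flow.
pipeline {
    agent { docker { image 'fedora:27' } }
    stages {
        stage('Build')   { steps { sh 'make' } }
        stage('Test')    { steps { sh 'make check' } }
        stage('Package') { steps { sh 'make rpm' } }
        stage('Deploy')  {
            // Only publish packages from the master branch
            when { branch 'master' }
            steps { sh 'rsync -a rpms/ repo.example.com:/srv/dnf/' }
        }
    }
}
```

A failure in any stage stops the pipeline, so packages are only deployed when build and tests succeed, matching the current job behavior.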

Hubot Instances

I maintain two Hubot instances: one to interact with friends through a Slack workspace configured by a college roommate, and one to interact with me on my family’s Slack workspace so I can run health queries and perform some level of resource management while I’m away from the house.

I’ve dabbled in a bit of Node.js and CoffeeScript to enhance these Hubots with a few abilities, such as:

  • Wish my friends a happy birthday when prompted
  • Display the Dilbert™ comic for a specific day or, if no day is specified, the current day
  • Query the list of Consul services on my LAN
  • Provide information about the health of a specified Consul service


libvirt/KVM Virtual Machines

I use libvirt and KVM to manage a set of virtual machines on my main Fedora Linux server. These virtual machines include:

  • Jenkins server
  • Jenkins slave for all Fedora versions I use in deployed systems (currently Fedora 27 and the Fedora 28 Beta)
  • Bugzilla instance to track bugs and work items for various projects
  • Hubot server, running two Hubot instances, one for a Slack workspace with friends and one for a Slack workspace for my family
  • Minecraft server to blow off steam and have fun with my kids, nieces, and nephews

Vagrant Test Environment

I use Vagrant with the libvirt/KVM plugin to create and provision test instances of each system in my infrastructure. This test environment allows me to keep pace with new Fedora beta and GA releases and verify my system configuration scripts and software projects in the evolving Fedora environment.
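A libvirt-backed test instance can be declared in a Vagrantfile roughly like this; the box name, resources, and provisioning script are placeholders rather than my actual setup:

```
# Hypothetical Vagrantfile sketch using the vagrant-libvirt provider.
Vagrant.configure("2") do |config|
  config.vm.define "f27test" do |node|
    node.vm.box = "fedora/27-cloud-base"
    node.vm.provider :libvirt do |lv|
      lv.memory = 1024   # MB of RAM for the test instance
      lv.cpus = 1
    end
    # Reuse the same provisioning script the real systems get
    node.vm.provision "shell", path: "provision.sh"
  end
end
```

Swapping the box line is enough to re-run the same provisioning against a new Fedora beta, which is the point of the test environment.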

    Future Plans:
  • Add CentOS, Debian, and Ubuntu configuration to more closely mirror my work systems and my Debian-based Raspbian and OSMC nodes

Ansible/Puppet Configuration

I use Puppet for system configuration and am transitioning to Ansible, mainly because I want to gain experience with this configuration tool for work. As I become familiar with Ansible, I’m impressed by configuration abilities that seem advanced compared to the way I had been using Puppet.
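For flavor, a small Ansible play of the kind that replaces a Puppet manifest might look like this; the group, package, and service names are placeholders:

```
# Hypothetical playbook sketch: ensure the Consul agent is installed
# and running on a group of hosts.
- hosts: consul_nodes
  become: yes
  tasks:
    - name: Install the Consul agent
      package:
        name: consul
        state: present

    - name: Enable and start the Consul service
      service:
        name: consul
        state: started
        enabled: yes
```

Like a Puppet manifest, the play is declarative and idempotent, so it can be re-run safely against already-configured nodes.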

    Future Plans:
  • Complete transition to Ansible and deprecate use of Puppet

OSMC Media Centers

I maintain three Raspberry Pi-based OSMC nodes.

  1. One node acts as the main media center in our living room, serving movies and music to entertain the family.
  2. One node acts as a WiFi-accessible tablet that connects to a small radio. It allows us to listen to our music collection during family meals and while doing homework or spending time together. This node also doubles as a movie player on extended trips, when we connect it to an external hard drive for access to the family’s movie and music collection.
  3. One node acts as a second media center in the basement, theoretically serving movies and music to a small television. This node does not perform as well, however, as it is an original Raspberry Pi connected through a slow USB WiFi adapter.

CoolNAS - in hibernation

This project is in hibernation mode. It enables remote backups by synchronizing BTRFS snapshots across a wide-area network. The concept can be extended to any next-generation file system (such as BTRFS or ZFS) and provides redundancy while minimizing the amount of data transferred across the WAN.
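The core idea can be sketched with BTRFS’s incremental send/receive commands; the paths, hostname, and snapshot names below are hypothetical:

```
# Take a read-only snapshot, then send only the blocks that changed
# since the previous snapshot across the WAN.
btrfs subvolume snapshot -r /tank/data /tank/snaps/@snap-new
btrfs send -p /tank/snaps/@snap-prev /tank/snaps/@snap-new \
    | ssh backup.example.com btrfs receive /backup/snaps
```

The -p (parent) flag is what keeps the transfer small: only the delta between the two snapshots crosses the network, while the remote end still ends up with a complete, browsable snapshot.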