Refactoring our House Network

Our home network settled a few years ago into a fairly standard setup built around macOS Server and network equipment from various companies. As I added smart home functionality, the network grew organically and became increasingly difficult to manage. I solved that partly by chance and partly by moving to UniFi.

Then Apple decided to drop macOS Server. macOS Server was little more than a nice interface over common open source tools, but replacing it with those underlying tools never really worked for me. The prime example was MacPorts’ ISC DHCP server, which always stopped working on Monday mornings at 09:00:

“Tell me why!” “I don’t like Mondays.”

I never did find out why.

About the same time I realised that sooner or later I would be handing over the house – with its smart stuff – to someone else, so it needed to be stable, documented, and simple to use and maintain.

Two-Tier Network with Redundancy

The first stage was the creation of a two-tier network, so that smart home functionality was separate from the residents’ personal computing environments. The house stuff has different reliability requirements, not least the requirement to keep working: not only 24×7, but also after the residents have moved out.

The two-tier network was – ultimately – easy to achieve with VLANs and UniFi equipment.

Redundancy has been achieved by adding two Raspberry Pi (RPi) 4B+ servers to the core VLAN. These provide the following functionality to the network:

  • LDAP
  • DHCP
  • DNS
  • FreeRADIUS
  • NTP

The LDAP servers replicate to each other and provide the backend storage for DHCP, DNS and FreeRADIUS. They also store user credentials for network logins. This works so well that it is usually not obvious when one of the machines goes down.
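
That said, I like to be able to see at a glance whether both machines are actually answering. The following is only a minimal sketch, not part of the setup described here: the two addresses are placeholders, and only LDAP and DNS are probed, because a bare TCP connect says nothing useful about the UDP services (DHCP, NTP, FreeRADIUS), which would need protocol-aware probes.

    import socket

    # Placeholder addresses for the two core RPis -- adjust to your network.
    CORE_SERVERS = ["192.168.10.2", "192.168.10.3"]

    # TCP-reachable services only; DHCP, NTP and FreeRADIUS speak UDP.
    TCP_SERVICES = {"ldap": 389, "dns": 53}

    def check(host: str, port: int, timeout: float = 2.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for host in CORE_SERVERS:
            for name, port in TCP_SERVICES.items():
                status = "up" if check(host, port) else "DOWN"
                print(f"{host} {name:<4} {status}")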

Single Image

Keeping the operating system images of the two core RPis in sync is already fiddly, even though it currently works well enough. On top of that, OS updates have to be carried out twice, which is tedious and creates the potential for inconsistencies.

Therefore, I would like all the RPi servers in the network to share the same OS image, network booted from the central NAS.

The RPi servers run different applications, so those applications should be implemented as layers on top of the shared core OS image.
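
One possible way to specialise a shared image – and this is only a sketch of the idea, nothing here is decided or implemented – is to let each RPi identify itself at boot and enable just the services for its role. The serial number genuinely comes from /proc/cpuinfo on an RPi; the role map and service names below are invented for the example.

    import subprocess

    # Hypothetical mapping from RPi serial number to role-specific services.
    ROLE_SERVICES = {
        "10000000abcd1234": ["slapd", "isc-dhcp-server", "bind9"],  # core server
        "10000000dcba4321": ["openhab"],                            # smart home node
    }

    def rpi_serial() -> str:
        """Read the board serial number from /proc/cpuinfo (RPi-specific)."""
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("Serial"):
                    return line.split(":")[1].strip()
        raise RuntimeError("no serial number found; not a Raspberry Pi?")

    def enable_role_services() -> None:
        """Enable and start only the services this board is responsible for."""
        for service in ROLE_SERVICES.get(rpi_serial(), []):
            subprocess.run(["systemctl", "enable", "--now", service], check=True)

    if __name__ == "__main__":
        enable_role_services()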

Single Maintenance

Once you have a single, central image you can maintain and update it once, instead of once for each RPi.

No SD

An obvious advantage of network booting is that an SD card is no longer needed. For me this is a secondary goal, but desirable all the same.

Easy Replacement

An RPi is relatively cheap, but it can break or otherwise need replacing. It would be nice if replacement could be achieved in minutes rather than days, especially for smart home functionality, where part of the house may not work until the swap is done. Replacement can be reduced to the following steps:

  • update new RPi’s EEPROM;
  • add the new RPi’s information (inter alia its MAC address) to the LDAP database, as sketched after this list;
  • physically swap the new RPi in for the old one;
  • boot the new RPi.
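
The LDAP step is the only one that involves more than plugging things in. Purely as an illustration – the DN, credentials and addresses below are invented, not my real configuration – a registration using the ldap3 library and the standard ISC DHCP LDAP schema (dhcpHost, dhcpHWAddress, dhcpStatements) might look roughly like this:

    from ldap3 import ALL, Connection, Server

    # All names and addresses below are placeholders for this example.
    LDAP_URI = "ldap://core-1.example.lan"
    ADMIN_DN = "cn=admin,dc=example,dc=lan"
    ADMIN_PW = "secret"
    BASE_DN = "cn=dhcp,dc=example,dc=lan"

    def register_host(name: str, mac: str, ip: str) -> None:
        """Add a dhcpHost entry (ISC DHCP LDAP schema) for a replacement RPi."""
        server = Server(LDAP_URI, get_info=ALL)
        conn = Connection(server, user=ADMIN_DN, password=ADMIN_PW, auto_bind=True)
        conn.add(
            f"cn={name},{BASE_DN}",
            ["top", "dhcpHost"],
            {
                "cn": name,
                "dhcpHWAddress": f"ethernet {mac}",
                "dhcpStatements": f"fixed-address {ip}",
            },
        )
        if conn.result["description"] != "success":
            raise RuntimeError(conn.result)
        conn.unbind()

    if __name__ == "__main__":
        register_host("rpi-replacement", "dc:a6:32:12:34:56", "192.168.10.23")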

This is still not quite as simple as changing a light bulb, but it is a significant step towards commoditisation.

This ease of replacement has financial implications (replacing one RPi rather than a large central server) and allows us to keep a reserve machine on the off-chance that one might go down.

Environment

This section summarises the core technologies used. The network consists almost exclusively of UniFi equipment.

A major plus of UniFi for me has been the UniFi Network Application that runs on the Dream Machine Pro.

My operating system of choice is the latest Ubuntu LTS release; the reasons are historical and I have never really analysed them, but I continue to be happy with the choice.

There is a central NAS, which is intended to store all network data. It is a Synology DS1618+, which has 6 disks organised as three RAID 1 storage pools, each containing a single volume. This provides:

  • TFTP
  • NFS
  • iSCSI
  • Docker instances
  • network backup
  • user cloud storage
  • Webmin
  • DokuWiki (network documentation)

I also use Webmin on all my servers.

I have never taken to Kerberos, though I accept that there is not really any alternative. I implemented it once in the pre-macOS Server stage of my network development and found it horrible to manage. The fact that it was built into macOS Server was a significant reason for moving to it. In the meantime I get by with SSH keys and the knowledge that a user’s password is the same everywhere in the network.

Smart Home

Smart home functionality is currently provided by openHAB 2.5. This also needs fundamental refactoring, although the house now runs reasonably reliably after the network refactoring.

I want to move to a Karaf Cellar cluster of RPi servers running openHAB 3.x, with EnOcean coverage for the whole house. This allows the integration of various smart home technologies into a – more or less – homogeneous whole. These technologies include:

  • go-eCharger
  • Homematic
  • Philips Hue
  • Miele
  • MQTT (see the sketch after this list)
  • Nanoleaf
  • SolarEdge
  • EnOcean
  • Apple HomeKit
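
Several of these devices can be tied together over MQTT. Purely as an illustration – the broker address and topic are made up – the sketch below publishes a retained sensor reading using the paho-mqtt library; an openHAB Generic MQTT Thing channel subscribed to the same topic would pick it up.

    import paho.mqtt.publish as publish

    # Broker address and topic are invented for this example.
    BROKER = "mqtt.example.lan"
    TOPIC = "house/cellar/temperature"

    def publish_reading(value: float) -> None:
        """Publish a retained sensor reading for openHAB (or anything else) to read."""
        publish.single(TOPIC, payload=f"{value:.1f}", retain=True, hostname=BROKER)

    if __name__ == "__main__":
        publish_reading(19.5)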

At some point in the none-too-distant future we will be replacing the windows in our house and will integrate them into the smart home too.

To Do

Further functions planned for the network include:

  • centralised logging (probably Fluentd and something else, presumably in Docker on the Synology NAS);
  • service monitoring (SNMP and one of OpenNMS/Zabbix/LibreNMS/Prometheus/…) so that I know when something stops working; a minimal SNMP poll is sketched after this list.
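
Whichever monitoring package I end up with, the underlying SNMP poll is straightforward. The example below is only a sketch: the target address and community string are placeholders, and it assumes the classic pysnmp 4.x high-level API.

    from pysnmp.hlapi import (
        CommunityData, ContextData, ObjectIdentity, ObjectType,
        SnmpEngine, UdpTransportTarget, getCmd,
    )

    # Target address and community string are placeholders.
    TARGET = "192.168.10.1"
    COMMUNITY = "public"

    def sys_uptime(host: str) -> str:
        """Poll SNMPv2 sysUpTime.0 from a device and return it as a string."""
        error_indication, error_status, _, var_binds = next(
            getCmd(
                SnmpEngine(),
                CommunityData(COMMUNITY, mpModel=1),   # SNMP v2c
                UdpTransportTarget((TARGET if host is None else host, 161), timeout=2, retries=1),
                ContextData(),
                ObjectType(ObjectIdentity("SNMPv2-MIB", "sysUpTime", 0)),
            )
        )
        if error_indication or error_status:
            raise RuntimeError(error_indication or error_status.prettyPrint())
        return var_binds[0].prettyPrint()

    if __name__ == "__main__":
        print(sys_uptime(TARGET))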

Workload

Running a house network that supports smart home functionality reliably and sustainably is no insignificant undertaking. At some point you cross the border between a hobby and support for a real-life tool. You are also highly dependent on the tolerance of the other people that you share your house with.

In terms of complexity and effort it is roughly on a par with a small SME, which might actually employ someone – at least part of the time – to support its network. Down the line, standards for house networks might evolve that let you outsource that work to someone else. In the meantime you have to deal with it yourself. Whatever you do, don’t forget to document it.
