Bringing Back HDHomeRun Signal: A Modern Web Replacement for the Discontinued Android App

If you’re an HDHomeRun user who relied on the official Signal Android app for antenna alignment and signal monitoring, you’ve probably noticed it’s been discontinued. After getting frustrated with the lack of a good replacement, I decided to build my own modern web-based solution that not only recreates the original functionality but actually improves on it.

What is HDHomeRun Signal Web?

HDHomeRun Signal Web is a containerized web application that provides real-time signal monitoring, channel tuning, and device management for HDHomeRun devices. It runs on any platform with Docker and automatically discovers your HDHomeRun devices on the network.

Why Build This?

The original Android app was invaluable for antenna alignment and signal troubleshooting. When it disappeared, there wasn’t a good cross-platform alternative that provided the same level of detail and real-time monitoring. I wanted something that:

  • Works on any device with a web browser
  • Can run 24/7 on low-power hardware
  • Provides real-time signal metrics
  • Supports modern features like ATSC 3.0

Key Features

Device Management

The app automatically discovers HDHomeRun devices on your network and lets you switch between devices and tuners through an intuitive interface. No manual IP configuration needed.

Real-Time Signal Monitoring

Get live updates of:

  • Signal Strength in dBm (raw power level)
  • SNR Quality in dB (signal-to-noise ratio)
  • Symbol Quality (error correction quality)

These metrics update in real-time as you adjust your antenna or scan channels.
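As an illustration of where those metrics come from, HDHomeRun tuners report a single key=value status line; the sketch below parses a hypothetical sample (the numbers are made up) to pull out the three values the app displays:

```shell
# Hypothetical sample of a tuner status line (key=value pairs);
# ss = signal strength, snq = SNR quality, seq = symbol quality.
status='ch=8vsb:563000000 lock=8vsb ss=85 snq=78 seq=100 bps=38807712 pps=0'

# Split on spaces and print each metric of interest.
for key in ss snq seq; do
  val=$(echo "$status" | tr ' ' '\n' | awk -F= -v k="$key" '$1 == k { print $2 }')
  echo "$key=$val"
done
```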

Antenna Tuning Mode (New!)

This is where the app really shines. The antenna tuning mode lets you monitor all tuners simultaneously with real-time graphs showing signal strength and SNR quality over the last 60 seconds. Each tuner displays a color-coded badge:

  • Green (100%): Perfect signal lock – your antenna is properly aligned
  • Red (<100%): Signal present but needs improvement
  • Gray (0%): No signal detected

This makes antenna alignment much faster and more precise than the old app ever was.
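The badge logic described above can be sketched in a few lines of shell, assuming the percentage shown is driven by the tuner's symbol quality (seq) value:

```shell
# Map a symbol-quality percentage to a badge color
# (assumption: the badge tracks the seq value).
badge_for_seq() {
  if [ "$1" -eq 100 ]; then
    echo green   # perfect signal lock
  elif [ "$1" -eq 0 ]; then
    echo gray    # no signal detected
  else
    echo red     # signal present but needs improvement
  fi
}

badge_for_seq 100  # green
badge_for_seq 42   # red
badge_for_seq 0    # gray
```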

ATSC 3.0 Support

For NextGen TV broadcasts, the app displays PLP (Physical Layer Pipe) and L1 signaling information, giving you deeper insight into modern broadcast signals.

Progressive Web App

Install it on your mobile device for a native app-like experience. It works offline once installed and provides the convenience of the original Android app.

Perfect for Raspberry Pi

While this runs anywhere Docker is supported, a Raspberry Pi is the ideal platform:

  • Low power consumption for 24/7 operation
  • Small form factor to place near your antenna
  • More than enough power for signal monitoring
  • Cost-effective dedicated hardware

The Docker image supports both AMD64 and ARM64 architectures with prebuilt images on Docker Hub, so deployment is instant on any platform.

Getting Started

Deployment couldn’t be simpler. Just create a docker-compose.yml file with this content:

yaml

version: '3.8'
services:
  hdhomerun-signal:
    image: petelombardo/hdhomerun-signal-web
    network_mode: host
    restart: unless-stopped
    environment:
      - NODE_ENV=production
      - PORT=3000
    volumes:
      - /etc/localtime:/etc/localtime:ro

Then run:

bash

docker-compose up -d

That’s it! Access the web interface at http://your-server-ip:3000 and the app will automatically discover your HDHomeRun devices.

No cloning, no building, no configuration files. The prebuilt images on Docker Hub handle everything.

Technical Stack

  • Frontend: React with Material-UI for a clean, modern interface
  • Backend: Node.js with Express
  • Real-time Communication: WebSockets via Socket.io
  • HDHomeRun Integration: Uses the official hdhomerun_config command-line tool
  • Deployment: Multi-arch Docker images (AMD64/ARM64) on Docker Hub
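For anyone who wants to poke at the same data by hand, the backend's approach can be reproduced with the official hdhomerun_config CLI (a sketch; FFFFFFFF is the tool's wildcard device ID, and tuner0 is just one example tuner):

```shell
# Discover HDHomeRun devices on the local network
hdhomerun_config discover

# Read the live status line (signal strength, SNR, symbol quality)
# from tuner 0 of any discovered device
hdhomerun_config FFFFFFFF get /tuner0/status
```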

Use Cases

Antenna Alignment

The simultaneous multi-tuner monitoring mode makes antenna alignment significantly easier. You can watch signal quality across all tuners in real-time as you adjust your antenna position.

Signal Troubleshooting

Quickly identify weak channels or reception issues by tuning through channels and monitoring the signal metrics.

Multi-Device Monitoring

If you have multiple HDHomeRun devices, you can easily switch between them to check signal quality across your setup.

What’s Next?

I’m continuing to improve the app based on real-world usage. Some ideas in the pipeline:

  • Channel scanning and favorite lists
  • Historical signal logging
  • Mobile-optimized tuning controls
  • Email/push notifications for signal issues

Try It Out

The project is available on Docker Hub as petelombardo/hdhomerun-signal-web.

If you’re an HDHomeRun user who misses the Signal app or just wants better tools for antenna optimization, give it a try. The entire deployment takes less than a minute!

Feedback and contributions are always welcome.

Vibe Coding – The future of software development?

Before I even knew what “vibe coding” meant, I had already vibe coded my first app. In fact, anyone who has used AI to code on their behalf has done the same. Vibe coding is essentially coding by describing what you want rather than writing syntax. So what’s the low-down on the state of things and where is the technology headed? Read on for my take.

Just a couple of short years ago, I asked an LLM to help me edit an advanced script that I had written. The LLM broke my code, and I decided to add my desired feature myself. That was then. This is now. Today, LLMs are much more powerful and capable of understanding far more complex code, thanks to larger context lengths and larger, smarter models. It is now realistic for anyone to ask an LLM to write code for them, regardless of their programming experience, and to have a working prototype (at a minimum) delivered soon thereafter. So how does this change programming for software developers?

If LLMs are writing the code, then what role do developers play, if any, in this new paradigm? The “developer” role is going to shift from being the code-writing liaison for people (product managers) to being the people-liaison to the code writers (LLMs). Basically, developers will become feature managers more than coders. Their responsibilities will largely be to prompt the LLMs to build features; to oversee the planning and coordination among disparate code blocks (i.e., to act as feature co-architects), ensuring sensible, efficient designs; to see that secure coding principles are followed; and finally to conduct human usability, user experience, and feature testing (quality assurance).

Even with all of the recent developments in AI/LLMs, there are still some blind spots. For example, LLMs can “see” problems, but they still need human intervention to reproduce many of them. LLMs can also add unnecessary complexity to code. Often, they violate Occam’s razor – the principle that simpler solutions are preferable – generating overly complex code when a straightforward approach would suffice.

This is an exciting time for the software development field, and we are going to see rapid development of innovative new apps. Developers will shift from thinking about features in terms of “work” to thinking about them in terms of “impact”. Since LLMs are doing the grunt work, focusing on the impact of new features is going to be the new top priority. Companies need to partner LLMs with the creative minds in their organizations, not just developers, to best utilize these new coding capabilities. And for anyone worried about job displacement – as long as there are people using these apps, there will be people designing and testing them. The jobs will shift, but the explosion in productivity on the development side will open new opportunities for testers and feature managers. There will still be jobs!

RestartOS

One of my personal projects for many years, restartOS, has seen some good active development lately, mostly for ARM-based systems (think Raspberry Pi, Orange Pi, etc.). The premise is that I take a Debian-based Linux distribution and finesse it so that it runs completely in memory. It is very similar to the way a live-boot ISO works, but with some key differences. The most important is that it’s not a pre-installation environment – it is a full OS intended to be used for whatever creative purposes you can find for it.

Because it allows you to flag some files as persistent, you can customize it as, say, a file server, mount external storage as read/write, and then every time your system reboots it will start up your file server for you. This is one of many use cases. It can run containers (using Podman), which opens the door to running any number of services on it.

At the end of the day, though, the OS itself is non-persistent, meaning any changes made while it is running are lost on reboot. This makes the OS more secure, since it is much harder for a virus to get installed on it. It also makes the OS more reliable, since a reboot resets the OS back to the same configuration it had when it first booted. And it makes it fast.

There are packages that can be installed (like the UI, Chromium (Chrome’s open source counterpart), Podman, GlusterFS, Filezilla, etc.). The base OS is less than 300MB, and Podman adds less than 20MB to the base. With just those two packages, you get a container server that can boot from a microSD card – and since there are no regular writes to the card, it can last for decades.

https://www.restartos.com

Tinnitus

Oh man, what a month. It started when I put on a hockey helmet that was too small; when I snapped the cage closed behind my ears, the snap was so loud that it knocked out my hearing in my right ear for a few seconds. When it came back, everything seemed fine. But later that night, in a quiet room, I noticed my ear ringing – tinnitus. At first it didn’t faze me; I figured it would go away after a few days, and that I just needed to stay away from loud sounds until it did. But after a month, the ringing was even louder, and it had spread to both ears, not just one. So I went to an audiologist and was told that, basically, the loud noise probably triggered a freak-out moment for my brain – that’s why my hearing went out momentarily, and when it came back, my hearing’s baseline was basically wrong. Instead of hearing normally, my brain was processing all of the garbage signals my ears were sending when there was no real noise in the room. The audiologist told me to play music in the house and not to work in an absolutely quiet environment. That’s what I have been doing for two days now, and it really has helped. I still hear some ringing at a very high pitch, but at a very low volume. Hopefully it continues to diminish until my brain forgets about this whole thing and I can move on too.

AGI Dangers

Artificial General Intelligence, or AGI, has been the holy grail of AI for many years. With the ground-breaking advent of Large Language Models (LLMs), we took leaps forward toward that goal. Now with chain-of-thought prompting, reasoning and self-feedback are among the last hurdles that I would expect before we achieve AGI. So, as a bit of a thought experiment, I considered what life will be like once AGI arrives, and how it will change the way we function and what new vulnerabilities it may introduce.

Imagine an AGI that is connected everywhere you go. It’s on your phone, it’s on your Amazon Echo, it’s ever-present. You type an email and think that the 7th is Tuesday, but it’s really Wednesday and the AGI picks up on it and fixes it for you. You ask it to remind your wife to pick up the kids and it knows how to reach her and what time the kids need to be picked up already. You get home and look at your doorbell camera and say, “Open the garage” and it does a voice and face analysis and then fulfills your order. Life is good.

Many people believe that LLM technology will underpin AGI, so it seems reasonable that many pervasive vulnerabilities of LLMs will carry through to AGI. With most LLMs, there are ways around their programmed restrictions. For example, asking an LLM to teach you how to make your own black powder might invoke a firewalled response, of sorts. But asking an LLM to pretend it is your grandmother reading you a book about how to make black powder will often bypass those programmed restrictions. This opens the door to similar types of attacks on AGI.

Since AGI will likely be pervasive in our lives, it stands to reason that it will have access to all of our IoT devices as well. Imagine an attacker showing up at your door and reasoning with the AGI that they have a perishable delivery that needs to be immediately put in the refrigerator, only to have the AGI unlock your house to a complete stranger. This is the scenario that I worry about most with AGI. We need to be very careful how we grant access to something capable of making decisions on our behalf.

VMs are dead. Long live containers.

In the early 2000s, we were introduced to virtual machines, which promised the ability to layer multiple complex environments on top of the same hardware without interfering with one another. We graduated from single-purpose servers to multi-purpose servers, greatly improving efficiency and allowing data centers to outgrow their physical footprint in terms of traditional usefulness. Fast-forward to the present, and container technology – which has also been around for decades – has matured to the point that most services we would run on VMs can now run from containers.

What’s the big deal? At the end of the day, virtual machines emulate hardware and containers do not, which gives containers an edge over VMs in performance. But it goes deeper than that. VMs are generally big and bulky; containers are generally small and modular. Container technology allows for faster backup and recovery times and tighter standardization (through Compose YAML files), all in addition to the aforementioned performance boost.

This is why I run as many services as I can within containers, and only the few that are not yet fully supported are run from VMs. But the clock is ticking for those few services.

Boeing’s 737 Max Problem

To understand Boeing’s challenge, imagine you build a 2,000 sq. ft. house. You spec out an air conditioner and heater for it, run all of the ducts, and everything works perfectly as designed. Now you want to add a room. So you add a 500 sq. ft. room, split a nearby duct, and feed the new room from it. But your HVAC system is not designed for 2,500 sq. ft., and the ducts do not run from your HVAC unit directly to the new room. The consequence is diminished HVAC effectiveness in both the existing areas and the new one. To fix the problem, you develop fancy software to control electronic louvers that route air to where it’s needed, preventing hot and cold spots. But now you’ve introduced something else that, if it breaks, can jeopardize the effectiveness of your entire HVAC system. At the end of the day, you have compromised the initial design in a way that can only truly be corrected with a completely new design. Anything less just adds complexity (more things that can break).

Back to the 737 issues – no matter what Boeing does to fix the 737 Max, and even if they succeed in their quest, at the end of the day they’ve added complexity, which necessarily makes the modified 737 Max design inferior to a new design/model. The introduction of, and critical dependence on, the MCAS software is a byproduct of modifying a pre-established design. Put simply, MCAS is yet another potential point of failure for the entire aircraft. We want to minimize potential points of failure, not add to them. MCAS would never be a critical feature of a clean-sheet design, and this is why I believe that Boeing will eventually need to discontinue production of the 737 Max and produce an entirely new plane.

Container Wars – Podman vs. Docker

This article is intended for developers familiar with Docker and container management who are curious about trying Podman.

I recently began a quest to test Podman to explore its features and assess its feasibility as a Docker replacement.

The first thing I had to do was add the docker.io registry to Podman so that I could use it as a drop-in replacement for managing my docker-compose.yml files.

sed -i -e 's/^.*unqualified-search-registries .*$/unqualified-search-registries = ["docker.io"]/' /etc/containers/registries.conf

Now, I wanted to use podman-compose, but quickly discovered that this application is still undergoing many bug fixes. Installing the apt version of podman-compose, for example, gave me version 1.0.3, but I actually needed version 1.0.6+ to get past a bug that prevented one of my host-network containers from starting. As the apt version wasn’t suitable, I opted for a pip3 installation, which offered version 1.0.6+ (the newest version, with fewer bugs).

pip3 install podman-compose

But when I ran it, I received the following.

error: externally-managed-environment

So I tried this command – with success.

pip3 install podman-compose --break-system-packages

Great! Now I was finally moving along. Next, I wanted to run pihole (a DNS server) in a container, but when starting it, I received an error.

Error: cannot listen on the UDP port: listen udp4 :53: bind: address already in use
exit code: 126

Back to digging to figure out how to fix this. Apparently, Podman uses a DNS resolver called aardvark, and it’s configured in a file at /usr/share/containers/containers.conf. It’s possible to change the DNS port there, but, as I learned, the change does not take effect until every pod/container is shut down. I made the following change…

sed -i -e 's/#dns_bind_port.*$/dns_bind_port=54/' /usr/share/containers/containers.conf

Now, after stopping all of my containers and starting them again, I was almost there. But I noticed something peculiar: the start order matters. If I started pihole first, then any pods started after it would fail because they could not resolve the names of the other containers. The trick was simply to start pihole last!
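In practice the workaround is just an ordered startup script, sketched below (the compose file paths are placeholders for my actual stacks):

```shell
#!/bin/sh
# Bring up everything that needs container-name resolution first...
podman-compose -f /opt/stacks/app1/docker-compose.yml up -d
podman-compose -f /opt/stacks/app2/docker-compose.yml up -d

# ...and start pihole (which binds the DNS port) last.
podman-compose -f /opt/stacks/pihole/docker-compose.yml up -d
```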

And that’s it! I have over a dozen containers running now and they seem surprisingly more peppy than when I ran them in Docker. But that may just be my brain trying to justify all of the hours that I spent figuring out how to make this transition work.

Overall, transitioning to Podman presented challenges, but I gained valuable insights and found it surprisingly performant. While Docker remains familiar, Podman’s security focus and rootless operation are intriguing, especially for long-term use.

GlusterFS Optimized for VMs (Ultra-Low-Cost)

This is a 4-node GlusterFS cluster set up as a replica 3 arbiter 1 volume. I use 512MB shards, which reduce fragmentation of VM disks without hurting performance. My disks are all backed by SSDs. Every node has up to 2 bricks on 2 different disks. Every node is an Android TV H96 MAX X3 box running Armbian with disks attached via the USB 3.0 port. I can reboot any node without data loss.

Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: amlogic1:/mnt/gluster1/brick2
Brick2: amlogic2:/mnt/gluster1/brick2
Brick3: amlogic4:/mnt/arbiter/arb2s1-2 (arbiter)
Brick4: amlogic3:/mnt/gluster1/brick2
Brick5: amlogic4:/mnt/gluster1/brick2
Brick6: amlogic2:/mnt/arbiter/arb2s3-4 (arbiter)
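For reference, a layout like the one above could be created with something along these lines (the volume name "vmstore" is my placeholder here; in a replica 3 arbiter 1 volume, every third brick in the list becomes an arbiter):

```shell
# Create the 2 x (2 + 1) volume; brick paths/hosts match the listing above.
gluster volume create vmstore replica 3 arbiter 1 \
  amlogic1:/mnt/gluster1/brick2 \
  amlogic2:/mnt/gluster1/brick2 \
  amlogic4:/mnt/arbiter/arb2s1-2 \
  amlogic3:/mnt/gluster1/brick2 \
  amlogic4:/mnt/gluster1/brick2 \
  amlogic2:/mnt/arbiter/arb2s3-4

gluster volume start vmstore
```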


GlusterFS Volume Options
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
performance.cache-refresh-timeout: 10
performance.cache-size: 2GB
storage.fips-mode-rchecksum: on
performance.strict-o-direct: on
features.scrub-freq: daily
features.scrub-throttle: lazy
features.scrub: Inactive
features.bitrot: off
storage.batch-fsync-delay-usec: 0
performance.nl-cache-positive-entry: off
performance.parallel-readdir: off
performance.cache-max-file-size: 512MB
cluster.server-quorum-type: server
performance.readdir-ahead: on
network.ping-timeout: 10
features.shard-block-size: 512MB
client.event-threads: 5
server.event-threads: 3
cluster.data-self-heal-algorithm: full
cluster.shd-max-threads: 16
cluster.shd-wait-qlength: 8192
server.allow-insecure: on
features.shard: on
cluster.quorum-type: auto
network.remote-dio: on
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
cluster.locking-scheme: granular
performance.low-prio-threads: 20
cluster.choose-local: off
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 10
network.inode-lru-limit: 32768
cluster.self-heal-window-size: 8
cluster.granular-entry-heal: enable
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on


And a custom sysctl.d config:
#/etc/sysctl.d/999-gluster-tuning.conf
vm.admin_reserve_kbytes = 8192
vm.compact_unevictable_allowed = 1
vm.compaction_proactiveness = 20
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 5
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 5
vm.dirty_writeback_centisecs = 500
vm.dirtytime_expire_seconds = 3600
vm.extfrag_threshold = 500
vm.hugetlb_shm_group = 0
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 32 32 32 0
vm.max_map_count = 65530
vm.memory_failure_early_kill = 0
vm.memory_failure_recovery = 1
vm.min_free_kbytes = 36200
vm.mmap_min_addr = 65536
vm.mmap_rnd_bits = 18
vm.mmap_rnd_compat_bits = 11
vm.nr_hugepages = 0
vm.nr_overcommit_hugepages = 0
vm.oom_dump_tasks = 1
vm.oom_kill_allocating_task = 0
vm.overcommit_kbytes = 0
vm.overcommit_memory = 1
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.page_lock_unfairness = 5
vm.panic_on_oom = 1
vm.percpu_pagelist_high_fraction = 0
vm.stat_interval = 1
vm.swappiness = 10
vm.user_reserve_kbytes = 128364
vm.vfs_cache_pressure = 50
vm.watermark_boost_factor = 15000
vm.watermark_scale_factor = 10
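Once the file is in place, the settings can be applied without a reboot:

```shell
# Reload every /etc/sysctl.d/*.conf file (including the one above)
sudo sysctl --system

# Or load just this one file
sudo sysctl -p /etc/sysctl.d/999-gluster-tuning.conf
```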

Technology – Cost, Risk, and Reward

As a trusted decision maker in the technology space, I am constantly evaluating three things: cost, risk, and reward. When choosing the right technology to solve a problem, failing to evaluate all three is, at best, wasteful – and at worst, exceedingly risky.



Take, for example, the tried-and-true case of telephone service. What is the cost, risk, and reward profile of a standard copper-delivered (POTS) telephone service for a company? The cost is relatively high, the risk is relatively low, and the reward is relatively low – it does what it’s supposed to do, and no more. But if we consider VoIP, now we have a lower cost, a higher risk (Internet issues may affect quality or availability), and a higher reward (flexible vendor choice, mobility options, etc.). Risk typically increases as cost decreases across solutions: if cost is the primary driver of a solution, risk will likely be higher; when stability is the driving factor, expect to pay a premium.

Whenever choosing technology for your company, always remember the three main decision points – cost, risk, reward – and choose wisely.