Boeing’s 737 Max Problem

To understand Boeing’s challenge, imagine you build a 2,000 sq. ft. house. You spec out an air conditioner and heater for it and run all of the ducts, and everything works exactly as designed. Now you want to add a room. So you add a 500 sq. ft. room and split a nearby duct to feed it. But your HVAC system is not designed for 2,500 sq. ft., and the ducts do not run from your HVAC unit directly to the new room. The consequence is diminished HVAC effectiveness in the existing areas as well as the new one. To fix the problem, you develop fancy software to control electronic louvers that route air to where it’s needed so that you can prevent hot and cold spots. Now you’ve introduced something else that, if it breaks, can jeopardize the effectiveness of your entire HVAC system. At the end of the day, you have compromised the initial design in a way that can only truly be corrected with a completely new design. Anything short of that just adds complexity (more things that can break).

Back to the 737 issues – no matter what Boeing does to fix the 737 Max, and even if the company succeeds in its quest, at the end of the day it has added complexity, which makes the modified 737 Max design inherently inferior to a new design/model. The introduction of, and critical dependence on, the MCAS software is a byproduct of modifying a pre-established design. Put simply, MCAS is yet another potential point of failure for the entire aircraft. We want to minimize potential points of failure, not add to them. MCAS would never be a critical feature of a new design, and this is why I believe that Boeing will eventually need to discontinue production of the 737 Max and produce an entirely new plane.

Container Wars – Podman vs. Docker

This article is intended for developers familiar with Docker and container management who are curious about trying Podman.

I recently began a quest to test Podman to explore its features and assess its feasibility as a Docker replacement.

The first thing I had to do was add a default search registry to Podman so that I could use it as a drop-in replacement to manage my docker-compose.yml files.

sed -i /etc/containers/registries.conf -e 's/^.*unqualified-search-registries .*$/unqualified-search-registries = ["docker.io"]/'

Now, I wanted to use podman-compose, but quickly discovered that this application is still undergoing many bug fixes. Installing the apt version of podman-compose, for example, gave me version 1.0.3, but I needed version 1.0.6 or later to get past a bug that prevented one of my host-network containers from starting. As the apt version wasn’t suitable, I opted for a pip3 installation, which offered the newest version with fewer bugs.

pip3 install podman-compose

But when I ran it, I received the following.

error: externally-managed-environment.

So I tried this command – with success.

pip3 install podman-compose --break-system-packages

Great! Now I was finally moving along. Next, I wanted to run Pi-hole (a DNS server) in a container, but when starting it, I received an error.

Error: cannot listen on the UDP port: listen udp4 :53: bind: address already in use
exit code: 126
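When you hit an error like this, it helps to see what already owns the port before digging into Podman itself. A quick check (the output will vary per system, and process names only show when run as root):

```shell
# Find out what is already bound to UDP port 53
ss -lunp | grep ':53 ' || echo "nothing on udp/53"
```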

Back to digging to figure out how to fix this. Apparently, Podman uses a DNS resolver called aardvark, and it’s configured in a file at /usr/share/containers/containers.conf. It’s possible to change the DNS port, but, as I learned, the change does not take effect until every pod/container is shut down. I made the following change…

sed -i /usr/share/containers/containers.conf -e 's/#dns_bind_port.*$/dns_bind_port=54/'

Now, after stopping all of my containers and starting them all again, I was almost there. I noticed something peculiar. The start order matters. If I started pihole first, then any pods started after it would fail due to the inability for them to resolve names of the other containers. The trick was simply to start pihole last!
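Rather than relying on memory for the start order, the trick above can be wrapped in a small script. This is only a sketch: it assumes one compose project per subdirectory under a base directory, with Pi-hole’s project in a directory named pihole (both names are my assumptions, not from a standard layout).

```shell
# Start every compose project except pihole, then start pihole last.
# Assumes one docker-compose.yml per subdirectory of the base directory.
start_all() {
  base="${1:-$HOME/containers}"    # base directory (assumed layout)
  compose="${2:-podman-compose}"   # pass "echo" as 2nd arg for a dry run
  for d in "$base"/*/; do
    case "$d" in */pihole/) continue ;; esac
    (cd "$d" && "$compose" up -d)
  done
  (cd "$base/pihole" && "$compose" up -d)   # pihole last
}
```

Run `start_all` for a real start, or `start_all ~/containers echo` to just print what it would do.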

And that’s it! I have over a dozen containers running now, and they seem peppier than they did under Docker. But that may just be my brain trying to justify all of the hours I spent figuring out how to make this transition work.

Overall, transitioning to Podman presented challenges, but I gained valuable insights and found it surprisingly performant. While Docker remains familiar, Podman’s security focus and rootless operation are intriguing, especially for long-term use.

GlusterFS Optimized for VMs (Ultra-Low-Cost)

This is a 4-node GlusterFS cluster set up as a replica 3 arbiter 1 volume. I use 512MB shards, which reduces fragmentation of VM disks without hurting performance. My disks are all backed by SSDs. Every node has up to 2 bricks on 2 different disks. Every node is an Android TV H96 MAX X3 box running Armbian with disks attached via the USB 3.0 port. I am able to reboot any node without data loss.
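For reference, a volume with this layout can be created with a command along these lines (the volume name gv0 is my placeholder; the brick paths match the listing below, with every third brick being the arbiter):

```shell
gluster volume create gv0 replica 3 arbiter 1 \
  amlogic1:/mnt/gluster1/brick2 amlogic2:/mnt/gluster1/brick2 amlogic4:/mnt/arbiter/arb2s1-2 \
  amlogic3:/mnt/gluster1/brick2 amlogic4:/mnt/gluster1/brick2 amlogic2:/mnt/arbiter/arb2s3-4
```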

Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Brick1: amlogic1:/mnt/gluster1/brick2
Brick2: amlogic2:/mnt/gluster1/brick2
Brick3: amlogic4:/mnt/arbiter/arb2s1-2 (arbiter)
Brick4: amlogic3:/mnt/gluster1/brick2
Brick5: amlogic4:/mnt/gluster1/brick2
Brick6: amlogic2:/mnt/arbiter/arb2s3-4 (arbiter)

GlusterFS Volume Options
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
performance.cache-refresh-timeout: 10
performance.cache-size: 2GB
storage.fips-mode-rchecksum: on
performance.strict-o-direct: on
features.scrub-freq: daily
features.scrub-throttle: lazy
features.scrub: Inactive
features.bitrot: off
storage.batch-fsync-delay-usec: 0
performance.parallel-readdir: off
performance.cache-max-file-size: 512MB
cluster.server-quorum-type: server
performance.readdir-ahead: on
features.shard-block-size: 512MB
client.event-threads: 5
server.event-threads: 3
cluster.shd-max-threads: 16
cluster.shd-wait-qlength: 8192
server.allow-insecure: on
features.shard: on
cluster.quorum-type: auto
network.remote-dio: on
cluster.eager-lock: enable
performance.quick-read: off
cluster.locking-scheme: granular
performance.low-prio-threads: 20
cluster.choose-local: off
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
network.inode-lru-limit: 32768
cluster.self-heal-window-size: 8
cluster.granular-entry-heal: enable
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on

And a custom sysctl.d config:
vm.admin_reserve_kbytes = 8192
vm.compact_unevictable_allowed = 1
vm.compaction_proactiveness = 20
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 5
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 5
vm.dirty_writeback_centisecs = 500
vm.dirtytime_expire_seconds = 3600
vm.extfrag_threshold = 500
vm.hugetlb_shm_group = 0
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 32 32 32 0
vm.max_map_count = 65530
vm.memory_failure_early_kill = 0
vm.memory_failure_recovery = 1
vm.min_free_kbytes = 36200
vm.mmap_min_addr = 65536
vm.mmap_rnd_bits = 18
vm.mmap_rnd_compat_bits = 11
vm.nr_hugepages = 0
vm.nr_overcommit_hugepages = 0
vm.oom_dump_tasks = 1
vm.oom_kill_allocating_task = 0
vm.overcommit_kbytes = 0
vm.overcommit_memory = 1
vm.overcommit_ratio = 50
vm.page_lock_unfairness = 5
vm.panic_on_oom = 1
vm.percpu_pagelist_high_fraction = 0
vm.stat_interval = 1
vm.swappiness = 10
vm.user_reserve_kbytes = 128364
vm.vfs_cache_pressure = 50
vm.watermark_boost_factor = 15000
vm.watermark_scale_factor = 10
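To load these settings, the file goes under /etc/sysctl.d and gets reloaded. The filenames here are my own choices, not from the original setup:

```shell
# Install the sysctl fragment (source filename assumed) and apply it
sudo cp gluster-vm.conf /etc/sysctl.d/99-gluster-vm.conf
sudo sysctl --system
```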

Technology – Cost, Risk, and Reward

As a trusted decision maker in the technology space, I am constantly evaluating three things: cost, risk, and reward. When choosing the right technology to solve a problem, failing to evaluate all three areas is, at best, wasteful – and at worst, exceedingly risky.

Take the tried-and-true example of telephone service. What is the cost, risk, and reward profile of a standard copper-delivered (POTS) telephone service to a company? Well, the cost is relatively high, the risk is relatively low, and the reward is relatively low – it does what it’s supposed to do, and nothing more. But if we consider VoIP, now we have a lower cost, a higher risk (Internet issues may influence quality or availability), and a higher reward (flexible vendor choice, mobility options, etc.). Risk typically increases as cost decreases across solutions. If cost is the primary driver of a solution, then risk will likely be higher. When stability is the driving factor for a project or application, expect to pay a premium.

Whenever choosing technology for your company, always remember the three main decision points – cost, risk, reward – and choose wisely.

Technology Matters

I recently had a candid conversation with one of our marketing executives, and I was shocked to learn her perspective on IT’s purpose within the company. The conversation was about web site performance. I was explaining how many of the company’s objectives depend on IT, including her objective to improve the web site’s performance. She had been working directly with our internal development team for over a year to get a project rolling to improve the site’s performance, with no traction and no success. Immediately after learning about her objective, I offered a solution: since our developers were unable to prioritize a project for her, I suggested that we put a layer of caching in front of the site to meet her performance objectives without having our developers modify a single line of code. I explained to her how IT is entrenched in every aspect of the company, charged with making everyone more efficient and finding creative solutions to the company’s technical challenges. She had no idea that we could help her. She confessed that she thought IT was just the people who set up her new monitors and replace her laptops every few years. Very sad.

The saddest thing is that this marketing executive’s perspective is not an exception – it’s a common belief among other executives as well. By not knowing the role of IT, she excluded us from the conversation and paid for it with a year of lost clicks, lost leads, and lost revenue.

If a company were Amtrak, then the IT department would be Grand Central Station – the hub in which all paths cross. If a company were a car, then IT would be the oil and the gas (or batteries). You get the idea. IT is the force that keeps the company productive, on track, online, and competitive. Innovation starts here. If you think IT just sets up your monitors and then goes back to their caves, you are naive, and it is very likely that you will struggle to succeed.

Advice – learn the importance of IT and leverage your IT team to help you succeed! At a minimum, keep them informed. Any modern company without IT representation at the table is destined for failure.

Creating a GlusterFS Cluster for VMs

The best GlusterFS layout that I have found to work with VMs is a distributed replicated cluster with shards enabled. I use laptops with 2 drives in each one. Let’s say we have 4 laptops with 2 drives each; we would do something like what I have listed below.

First, let’s create a partition on each disk. Use fdisk /dev/sda and fdisk /dev/sdb to create the partitions. Each disk then needs to be formatted as XFS, and we need mount points for both:

mkfs.xfs -i size=512 /dev/sda1
mkfs.xfs -i size=512 /dev/sdb1
mkdir /mnt/disk1
mkdir /mnt/disk2

Now we can use blkid to list the UUIDs of sda1 and sdb1 so that we can add them to fstab. My fstab looks something like this (your UUIDs will be different). The allocsize option pre-allocates files 64MB at a time to limit fragmentation and improve performance. The noatime option prevents the access-time attribute from being updated every time a file is touched – also for performance. The nofail option prevents the system from failing to boot in the event of a disk failure.

UUID=3edc7ec8-303a-42c6-9937-16ef37068c72 /mnt/disk2 xfs defaults,allocsize=64m,noatime,nofail 0 1
UUID=b8906693-27ba-466b-9c39-8066aa765d2e /mnt/disk1 xfs defaults,allocsize=64m,noatime,nofail 0 1

Now I did something funky with my fstab because I wanted to mount my bricks under the volname so that I could have different volumes on the same disks. So I added these lines to my fstab (my volname is “prod”).

/mnt/disk1/prod/brick1 /mnt/gluster/prod/brick1 none bind 0 0
/mnt/disk2/prod/brick2 /mnt/gluster/prod/brick2 none bind 0 0

Before the bind mounts can work, mount the disks and create the directories on both sides:

mount /mnt/disk1
mount /mnt/disk2
mkdir -p /mnt/disk1/prod/brick1
mkdir -p /mnt/disk2/prod/brick2
mkdir -p /mnt/gluster/prod/brick1
mkdir -p /mnt/gluster/prod/brick2

Now we can mount everything else.

mount -a

Make sure that everything is mounted properly.

df -h /mnt/gluster/prod/brick1

Make sure that you see /dev/sda1 next to it. If not, just reboot and fstab will mount everything appropriately.
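The check can be scripted as well. This sketch just verifies that each brick path is really a mount point (mountpoint comes with util-linux, so it should already be installed):

```shell
# Warn about any brick path that is not actually a mount point
check_bricks() {
  for m in "$@"; do
    if mountpoint -q "$m"; then
      echo "OK: $m"
    else
      echo "WARNING: $m is not a mount point"
    fi
  done
}
check_bricks /mnt/gluster/prod/brick1 /mnt/gluster/prod/brick2
```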

Now let’s create the gluster cluster.

gluster volume create prod replica 2 gluster1:/mnt/gluster/prod/brick1 gluster2:/mnt/gluster/prod/brick1 gluster3:/mnt/gluster/prod/brick1 gluster4:/mnt/gluster/prod/brick1 gluster1:/mnt/gluster/prod/brick2 gluster2:/mnt/gluster/prod/brick2 gluster3:/mnt/gluster/prod/brick2 gluster4:/mnt/gluster/prod/brick2

By specifying the order as we did, we ensure that server gluster1 and server gluster2 are paired up with each other, and gluster3 and gluster4 are paired up with each other. If we did both bricks on gluster1 successively, then we would be unable to sustain a failure of the gluster1 node. So we alternate servers. This also improves performance.

Now, the only thing left is to tune some parameters meant for VMs. For each parameter below, we will use a command like the following:

gluster volume set prod storage.fips-mode-rchecksum on

Here are my options:

performance.readdir-ahead: on
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
cluster.granular-entry-heal: on
features.shard-block-size: 64MB
client.event-threads: 4
server.event-threads: 4
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
server.allow-insecure: on
features.shard: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: on
cluster.eager-lock: enable
performance.quick-read: off
cluster.locking-scheme: granular
performance.low-prio-threads: 32
cluster.choose-local: off
storage.fips-mode-rchecksum: on
config.transport: tcp
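Rather than typing a gluster volume set for every option above, the list can be applied in a loop. A sketch (it only prints the commands so nothing is changed by accident; pipe the output to sh to actually apply them, and extend the list with the remaining options):

```shell
# Print one "gluster volume set" command per option; pipe to sh to apply.
print_volume_sets() {
  vol="$1"
  while read -r opt val; do
    [ -n "$opt" ] && echo gluster volume set "$vol" "$opt" "$val"
  done
}
print_volume_sets prod <<'EOF'
features.shard on
features.shard-block-size 64MB
performance.quick-read off
EOF
```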

Now we can fire up the gluster cluster with the following command.

gluster volume start prod

Create a UEFI bootable ISO on Debian

First things first… let’s get a quick overview of what’s needed, what goes where, and what to expect.

In order to make an ISO bootable, you need an .img file. Many tutorials call this efiboot.img. It’s basically a FAT-formatted file that contains a specific folder structure and a specially named executable that UEFI will run. The folder structure should have just one file in it, located at /efi/boot/bootx64.efi. The bootx64.efi file is the boot code, so it can come from any boot loader. I like GRUB, so that’s what my instructions below use.
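As a concrete sketch of that layout (the directory name and the way the EFI binary is produced are my assumptions; grub-mkstandalone is one way to generate bootx64.efi):

```shell
# Staging directory mirroring what goes inside efiboot.img
mkdir -p /tmp/efiboot/efi/boot
# In a real build, put the GRUB EFI binary here, e.g.:
#   grub-mkstandalone -O x86_64-efi -o /tmp/efiboot/efi/boot/bootx64.efi
touch /tmp/efiboot/efi/boot/bootx64.efi   # placeholder for illustration
```

The staged tree is then copied into a FAT image (mkfs.vfat plus mcopy from mtools is one common route).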

Now you may wonder – if you use GRUB as the bootx64.efi, then where the heck does it get grub.cfg from? It’s pretty plain and simple – it’s in /boot/grub/grub.cfg just as it would be in a regular filesystem. Except /boot/grub/grub.cfg lives outside the .img file – it lives alongside it on the ISO, really.

Ok, now that that’s out of the way, let’s get started. You need some packages first.

sudo apt update
sudo apt install grub-efi-amd64 grub-efi-amd64-bin grub-imageboot grub-legacy grub-pc-bin grub-pc

Now let’s create a rescue image, which will do the heavy lifting for us. Then we will mount it to a temp folder.

grub-mkrescue -o grub.iso
mkdir /tmp/grubcd
mount -o loop grub.iso /tmp/grubcd

Now let’s create the folder where we want our ISO to live. Then we will copy the contents of grub.iso to that folder.

mkdir myiso
cp -avR /tmp/grubcd/* myiso/

Next, let’s create a grub.cfg file so that you can tell it where and what to load. You will want to edit this and not leave it blank.

touch myiso/boot/grub/grub.cfg
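For example, a minimal grub.cfg might look like this (the kernel file names and boot parameters below are placeholders to adjust for your own files):

```
set timeout=5

menuentry "My Custom Linux" {
    linux /vmlinuz boot=live
    initrd /initrd
}
```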

Lastly, copy your vmlinuz and initrd files to myiso, along with any other files you need/want. Then you can create your iso with something like xorriso.

cd myiso
xorriso -volid "CDROM" -as mkisofs -isohybrid-mbr isohdpfx.bin -b isolinux.bin -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e grub.img -isohybrid-gpt-basdat -o ../mycd.iso ./

Arch on Chromebook


pacman -Sy
pacman -S chromium xorg-server connman enlightenment rxvt-unicode autocutsel; # Note: Choose the noto fonts when prompted
systemctl enable connman

Touchpad support:

# Example xorg.conf.d snippet that assigns the touchpad driver
# to all touchpads. See xorg.conf.d(5) for more information on
# InputClass.
# DO NOT EDIT THIS FILE, your distribution will likely overwrite
# it when updating. Copy (and rename) this file into
# /etc/X11/xorg.conf.d first.
# Additional options may be added in the form of
#   Option "OptionName" "value"
Section "InputClass"
        Identifier "touchpad catchall"
        Driver "synaptics"
        MatchIsTouchpad "on"
# This option is recommend on all Linux systems using evdev, but cannot be
# enabled by default. See the following link for details:
#       MatchDevicePath "/dev/input/event*"
EndSection

Section "InputClass"
        Identifier "touchpad ignore duplicates"
        MatchIsTouchpad "on"
        MatchOS "Linux"
        MatchDevicePath "/dev/input/mouse*"
        Option "Ignore" "on"
EndSection

# This option enables the bottom right corner to be a right button on clickpads
# and the right and middle top areas to be right / middle buttons on clickpads
# with a top button area.
# This option is only interpreted by clickpads.
Section "InputClass"
        Identifier "Default clickpad buttons"
        MatchDriver "synaptics"
        Option "TapButton1" "1"
        Option "TapButton2" "3"
        Option "TapButton3" "2"
        Option "SoftButtonAreas" "50% 0 82% 0 0 0 0 0"
        Option "SecondarySoftButtonAreas" "58% 0 0 15% 42% 58% 0 15%"
EndSection

# This option disables software buttons on Apple touchpads.
# This option is only interpreted by clickpads.
Section "InputClass"
        Identifier "Disable clickpad buttons on Apple touchpads"
        MatchProduct "Apple|bcm5974"
        MatchDriver "synaptics"
        Option "SoftButtonAreas" "0 0 0 0 0 0 0 0"
EndSection

xterm / rxvt customizations

! Perl extension config
URxvt.perl-ext-common: default,selection-to-clipboard
URxvt.perl-ext: tabbed
! Any scripts placed here will override global ones with the same name

!-- Xft settings -- !
!Xft.dpi:        96
!Xft.antialias:  true
!Xft.rgba:       rgb
!Xft.hinting:    true
!Xft.hintstyle:  hintfull

! Tabbed extension configuration
URxvt.tabbed.tabbar-fg: 8
URxvt.tabbed.tabbar-bg: 0
URxvt.tabbed.tab-fg: 15
URxvt.tabbed.tab-bg: 8
URxvt.tabbed.new-button: false

URxvt*depth: 32
URxvt*background: rgba:0000/0000/0200/c800
URxvt*font: xft:NotoSansMono-Medium:size=18:antialias=true


Allow your user to launch Xorg via sudo without a password by adding these lines to /etc/sudoers:

%sudo ALL=(ALL)NOPASSWD:/usr/bin/Xorg
%sudo ALL=(ALL)NOPASSWD:/usr/local/bin/Xorg

Uncomment the line in /etc/sudoers to look like this:

%wheel ALL=(ALL) ALL

Add user myuser (change to the name of your user) to wheel.

usermod -aG wheel myuser

Create a file /usr/bin/startx-custom

sudo /usr/bin/Xorg &
sleep 2
export DISPLAY=:0
/usr/bin/autocutsel -fork
if [ "$(pidof Xorg)" ]; then
        /usr/bin/enlightenment_start && killall -s KILL Xorg
else
        echo "No Xorg running... not starting E17"
fi

Make it executable.

chmod a+x /usr/bin/startx-custom

For a high resolution screen like on the Samsung Chromebook Plus, you may want to scale Chromium. To do this, add the following.


If DNS is not working right, adjust /etc/nsswitch.conf to look like this:

hosts: files mymachines myhostname resolve [!UNAVAIL=return] dns

Fail-To-Ban (Lite) – EdgeRouter

Here’s how to create a fail-to-ban type of functionality on an EdgeRouter completely using BASH, without installing any 3rd party packages. We are going to create a single script and add a scheduled job to run it. That’s all there is to it.

Step 1
Run the following

vi /config/scripts/fail-to-ban

Now we need to turn off auto indent before copying and pasting the script below. Type the following:

:set noai

Now paste the following script into the file:



#!/bin/bash
# Thresholds (these values are assumptions; tune to taste)
ATTEMPTS=5       # failed logins before an IP is blocked
INTERVAL=3600    # seconds within which those failures must occur
BLOCKSECS=86400  # seconds a block lasts before it expires
PERMBAN=1000     # IPs whose count exceeds this are never unblocked

NOW=`date '+%s'`

# Set ISIP=1 if $IP looks like a dotted-quad address (exactly three dots)
isip() {
  ISIP=0
  if [ $(echo $IP | sed 's/[^.]//g' | awk '{print length; }' 2> /dev/null) -eq 3 ]; then
    ISIP=1
  fi
}

fail2ban() {
  # echo failing $IP with count $COUNT and lastcount $LASTCOUNT
  EXISTS=`nice -19 iptables -n -L | grep $IP | wc -l`
  IS_LOCAL=`echo $IP | grep -E '^10\.|192\.168|127\.' | wc -l`
  if [ $EXISTS -gt 0 ]; then
    # echo "IP $IP is already blocked"
    BLOCKED_ALREADY="$BLOCKED_ALREADY $IP:$COUNT"
  elif [ $IS_LOCAL -eq 1 ]; then
    # echo "IP is local IP.  Not blocking"
    SKIPPED="$SKIPPED $IP:$COUNT"
  elif [ ! "$IP" == "" ]; then
    # echo "Blocking IP $IP after $COUNT abuses."
    iptables -I INPUT 1 -j DROP -s $IP
    echo "`date`:$IP:$NEWCOUNT:$COUNT:BLOCKED" >> /tmp/banned.log
    BLOCKED_NOW="$BLOCKED_NOW $IP:$COUNT"
  fi
}

# Update an IP's entry with a new count and timestamp
updateList() {
  NOW=`date '+%s'`
  sed -i /tmp/ip-list.log -e "s/:"$IP":"$LASTCOUNT".*$/:"$IP":"$COUNT":"$NOW"/"
}

# Refresh an IP's timestamp without changing its count
updateTime() {
  NOW=`date '+%s'`
  sed -i /tmp/ip-list.log -e "s/:"$IP":"$LASTCOUNT".*$/:"$IP":"$LASTCOUNT":"$NOW"/"
}

showList() {
  DESCRIPTION=$1
  LIST=$2
  if [ ! "$LIST" == "" ]; then
    echo "$DESCRIPTION"
    for i in `echo "$LIST"`; do
      BIP=$(echo $i | sed -e 's/:.*$//')
      BCOUNT=$(echo $i | sed -e 's/^.*://')
      if [ ! "$BIP" == "" ]; then
        echo $BIP $BCOUNT
      fi
    done
  fi
}

# Unblock IPs whose block has expired (unless permanently banned)
checkExpired() {
  BLOCKED=$(nice -19 iptables -L INPUT -n | grep "^DROP" | sed -e 's/^.*--  //' | sed -e 's/ .*$//')
  for i in `grep -e "$BLOCKED" /tmp/ip-list.log`; do
    IP=`echo $i | cut -d':' -f2`
    isip $IP
    COUNT=`echo $i | cut -d':' -f3`
    LASTACTION=`echo $i | cut -d':' -f4`
    if [ $((NOW-LASTACTION)) -gt $BLOCKSECS ] && [ ! "$IP" == "" ] && [ $ISIP -eq 1 ] && [ $COUNT -lt $PERMBAN ]; then
      LINE=`nice -19 iptables -L -n --line-numbers | grep "$IP" | cut -d' ' -f1`
      if [ ! "$LINE" == "" ]; then
        echo "Removing block on $IP"
        echo "$(date):$IP:UNBLOCKED" >> /tmp/banned.log
        EXPIRED_BLOCK="$EXPIRED_BLOCK $IP"
        iptables -D INPUT $LINE
      fi
    fi
  done
}

if [ ! -f /tmp/ip-list.log ]; then
  touch /tmp/ip-list.log
fi

# Keep only today's entries (temp file name is an assumption; the original was truncated)
echo -n "" > /tmp/ip-list.new
for i in `grep "^$(date +%Y%m%d):" /tmp/ip-list.log`; do
  if [ ! "$i" == "" ]; then
    echo $i >> /tmp/ip-list.new
  fi
done
mv /tmp/ip-list.new /tmp/ip-list.log

# Do some checking to see if the logs actually changed
if [ -f /tmp/this-run ]; then
  mv /tmp/this-run /tmp/last-run
else
  touch /tmp/last-run
fi
ls -1 --full-time /var/log/auth.log > /tmp/this-run
CHANGE=$(diff /tmp/last-run /tmp/this-run | wc -l)
if [ $CHANGE -eq 0 ]; then
  echo "No change since last run"
  exit 0
fi

IPLIST=`nice -19 grep failure /var/log/auth.log | grep rhost | sed -e 's/^.*rhost=//' | sed -e 's/ .*$//' | sort | uniq -c | sed -e 's/^ *//' | sed -e 's/ /:/' | grep -E ":[0-9.]*$" | sed -e "s/^\(.*\)$/$(date +%Y%m%d):\1/"`

for i in `echo "$IPLIST"`; do
  #echo $i
  COUNT=`echo $i | cut -d':' -f2`
  IP=`echo $i | cut -d':' -f3`
  DATE=`echo $i | cut -d':' -f1`
  isip $IP
  LASTCOUNT=`cat /tmp/ip-list.log | grep ":$IP:" | cut -d':' -f3`
  LASTSTAMP=`cat /tmp/ip-list.log | grep ":$IP:" | cut -d':' -f4 | sed -e 's/\n//g'`
  if [ "$COUNT" == "" ]; then COUNT=0; fi
  if [ "$LASTCOUNT" == "" ]; then LASTCOUNT=0; fi
  if [ "$LASTSTAMP" == "" ]; then LASTSTAMP=$NOW; fi
  NEWCOUNT=$((COUNT-LASTCOUNT))
  ELAPSED=$((NOW-LASTSTAMP))
  if [ $LASTCOUNT -eq 0 ] && [ $ISIP -eq 1 ]; then
    echo "$DATE:$IP:$COUNT:$NOW" >> /tmp/ip-list.log
    # echo "Adding $IP to the IP tracking log with count $COUNT"
  fi
  if [ $NEWCOUNT -ge $ATTEMPTS ] && [ $ISIP -eq 1 ] && ( [ $ELAPSED -le $INTERVAL ] || [ $COUNT -gt $PERMBAN ] ); then
    if [ $LASTCOUNT -ne 0 ]; then
      updateList
    fi
    fail2ban
  elif [ $NEWCOUNT -ge $ATTEMPTS ] && [ $ISIP -eq 1 ]; then
    echo "Updating the timestamp for IP $IP; +$NEWCOUNT since last update"
    updateTime
  fi
done

checkExpired

showList "Blocked | Attempts" "$BLOCKED_ALREADY"
showList "Newly Blocked | Attempts" "$BLOCKED_NOW"
showList "Skipped | Attempts" "$SKIPPED"
showList "Expired" "$EXPIRED_BLOCK"

Now you need to exit and save your changes.

Now let’s make the script executable:

chmod a+x /config/scripts/fail-to-ban

And lastly, we need to add a job to run this periodically and also disable hostname lookups in SSH to make the script’s IP validation work properly and remove a possible DoS vector.

set system task-scheduler task failtoban executable path /config/scripts/fail-to-ban
set system task-scheduler task failtoban interval 1m
set service ssh disable-host-validation

Done! Now your EdgeRouter should be protected against brute force login attacks.