Planet

Last updated: 2026-03-21 18:45:01 UTC

Feds Disrupt IoT Botnets Behind Huge DDoS Attacks

Post by Brian Krebs via Krebs on Security »

The U.S. Justice Department joined authorities in Canada and Germany in dismantling the online infrastructure behind four highly disruptive botnets that compromised more than three million Internet of Things (IoT) devices, such as routers and web cameras. The feds say the four botnets — named Aisuru, Kimwolf, JackSkid and Mossad — are responsible for a series of recent record-smashing distributed denial-of-service (DDoS) attacks capable of knocking nearly any target offline.

Image: Shutterstock, @Elzicon.

The Justice Department said the Department of Defense Office of Inspector General’s (DoDIG) Defense Criminal Investigative Service (DCIS) executed seizure warrants targeting multiple U.S.-registered domains, virtual servers, and other infrastructure involved in DDoS attacks against Internet addresses owned by the DoD.

The government alleges the unnamed people in control of the four botnets used their crime machines to launch hundreds of thousands of DDoS attacks, often demanding extortion payments from victims. Some victims reported tens of thousands of dollars in losses and remediation expenses.

The oldest of the botnets — Aisuru — issued more than 200,000 attack commands, while JackSkid hurled at least 90,000 attacks. Kimwolf issued more than 25,000 attack commands, the government said, while Mossad was blamed for roughly 1,000 digital sieges.

The DOJ said the law enforcement action was designed to prevent further infection to victim devices and to limit or eliminate the ability of the botnets to launch future attacks. The case is being investigated by the DCIS with help from the FBI’s field office in Anchorage, Alaska, and the DOJ’s statement credits nearly two dozen technology companies with assisting in the operation.

“By working closely with DCIS and our international law enforcement partners, we collectively identified and disrupted criminal infrastructure used to carry out large-scale DDoS attacks,” said Special Agent in Charge Rebecca Day of the FBI Anchorage Field Office.

Aisuru emerged in late 2024, and by mid-2025 it was launching record-breaking DDoS attacks as it rapidly infected new IoT devices. In October 2025, Aisuru was used to seed Kimwolf, an Aisuru variant which introduced a novel spreading mechanism that allowed the botnet to infect devices hidden behind the protection of the user’s internal network.

On January 2, 2026, the security firm Synthient publicly disclosed the vulnerability Kimwolf was using to propagate so quickly. That disclosure helped curtail Kimwolf’s spread somewhat, but since then several other IoT botnets have emerged that effectively copy Kimwolf’s spreading methods while competing for the same pool of vulnerable devices. According to the DOJ, the JackSkid botnet also sought out systems on internal networks just like Kimwolf.

The DOJ said its disruption of the four botnets coincided with “law enforcement actions” conducted in Canada and Germany targeting individuals who allegedly operated those botnets, although no further details were available on the suspected operators.

In late February, KrebsOnSecurity identified a 22-year-old Canadian man as a core operator of the Kimwolf botnet. Multiple sources familiar with the investigation told KrebsOnSecurity the other prime suspect is a 15-year-old living in Germany.

Top

150 MB Minimal FreeBSD Installation

Post by Vermaden via 𝚟𝚎𝚛𝚖𝚊𝚍𝚎𝚗 »

Sometimes an article starts with a simple question or a single message on one of the available social platforms. This is the case today with this article where @ooberober asked:

How small can the root get in its most minimal setup, as far as you know, with pkgbase(8)?

Tiny OS installations sound to me like tiny houses … not sure why.

I wrote about many/most aspects of PKGBASE in the Brave New PKGBASE World article … but not the installation size. I checked one of my FreeBSD 15.0 PKGBASE installations and replied right away – 450 MB of disk space … but maybe that can be pushed further?

What I am gonna share with you today is unsupported – probably unrecommended – and you may render your system broken. Only use it in a test environment – as I did in a fresh bhyve VM. You have been warned.

To whet your appetite – this is what I was able to achieve with a FreeBSD 15.0-RELEASE PKGBASE installation.

root@space:~ # df -m
Filesystem          1M-blocks Used Avail Capacity  Mounted on
zroot/ROOT/default       7437  150  7287     2%    /
devfs                       0    0     0     0%    /dev
/dev/gpt/efiboot0         255    1   254     0%    /boot/efi
zroot/home               7287    0  7287     0%    /home
zroot/tmp                7287    0  7287     0%    /tmp
zroot/usr/ports          7287    0  7287     0%    /usr/ports
zroot/var/log            7287    0  7287     0%    /var/log
zroot/var/mail           7287    0  7287     0%    /var/mail
zroot/usr/src            7287    0  7287     0%    /usr/src
zroot/var/tmp            7287    0  7287     0%    /var/tmp
zroot/var/audit          7287    0  7287     0%    /var/audit
zroot/var/crash          7287    0  7287     0%    /var/crash

About 150 MB of physically used space. I used zstd-19 ZFS compression in the bsdinstall(8) installer.

I started with a plain FreeBSD 15.0-RELEASE installation with these options:

Select Installation Type
- Packages (Tech Preview)

Network or Offline Installation
- Offline (Limited Packages)

Partitioning
- Auto (ZFS)
  - ZFS Configuration
    - ZFS Pool Options: -O compression=zstd-19

Select System Components
- [X] base

If you select only that [X] base option you will end up with about 450 MB of physical space used and these sets installed.

root@space:~ # pkg sets | grep :
FreeBSD-set-base-15.0:
FreeBSD-set-devel-15.0:
FreeBSD-set-minimal-15.0:
FreeBSD-set-optional-15.0:

As we are working in the PKGBASE world – for a start we will try to make sure pkg(8) survives our ‘cuts’.

root@space:~ # pkg info -d pkg
pkg-2.4.2:
        FreeBSD-libarchive-15.0 (libarchive.so.7)
        FreeBSD-clibs-15.0 (libc.so.7)
        FreeBSD-clibs-15.0 (libm.so.5)
        FreeBSD-clibs-15.0 (libthr.so.3)
        FreeBSD-openssl-lib-15.0 (libcrypto.so.35)
        FreeBSD-openssl-lib-15.0 (libssl.so.35)
        FreeBSD-runtime-15.0 (libelf.so.2)
        FreeBSD-runtime-15.0 (libjail.so.1)
        FreeBSD-runtime-15.0 (libutil.so.10)
        FreeBSD-zlib-15.0 (libz.so.6)

So in theory these are the PKGBASE packages we must preserve to keep pkg(8) alive.

root@space:~ # pkg info -d pkg | awk '{print $1}' | sed 1d | sort -u
FreeBSD-clibs-15.0
FreeBSD-libarchive-15.0
FreeBSD-openssl-lib-15.0
FreeBSD-runtime-15.0
FreeBSD-zlib-15.0

root@space:~ # pkg info -d pkg | awk '{print $1}' | sed 1d | sort -u | while read I; do echo -n "$I: "; pkg info ${I} | grep set; done | column -t
FreeBSD-clibs-15.0:        set  :  minimal,minimal-jail
FreeBSD-libarchive-15.0:   set  :  optional,optional-jail
FreeBSD-openssl-lib-15.0:  set  :  optional,optional-jail
FreeBSD-runtime-15.0:      set  :  minimal,minimal-jail
FreeBSD-zlib-15.0:         set  :  minimal,minimal-jail

As we want to keep both the FreeBSD-set-base and FreeBSD-set-minimal sets – the packages from the minimal set are safe, which leaves these two:

root@space:~ # pkg info -d pkg | awk '{print $1}' | sed 1d | sort -u | while read I; do echo -n "$I: "; pkg info ${I} | grep set; done | column -t | grep -v minimal
FreeBSD-libarchive-15.0:   set  :  optional,optional-jail
FreeBSD-openssl-lib-15.0:  set  :  optional,optional-jail

So in theory we can remove all packages of the FreeBSD-set-devel and FreeBSD-set-optional sets and just keep these FreeBSD-libarchive and FreeBSD-openssl-lib packages to have a working pkg(8) – right?

Unfortunately no – as master Yoda once said – “No, there is another.” – and the answer – not to keep you waiting – is that the FreeBSD-xz-lib and FreeBSD-libucl packages are also needed – but we will come to that later.
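Where do these extra requirements come from, if they do not show up in pkg info -d pkg? The shared-library metadata that pkg(8) records can reveal them. A sketch – written to a file here rather than executed, because it needs a live FreeBSD PKGBASE system – using pkg-query(8)'s %B placeholder (the shared libraries a package requires) and pkg-which(8) to map each library back to the package providing it:

```shell
# Sketch only - run /tmp/hidden-deps.sh on the live PKGBASE system.
cat > /tmp/hidden-deps.sh << 'EOF'
#!/bin/sh
for P in pkg FreeBSD-libarchive-15.0; do
    echo "== $P"
    pkg query '%B' "$P" | while read -r LIB; do
        # look for the library in the usual base-system locations
        F=$(ls /lib/"$LIB" /usr/lib/"$LIB" 2>/dev/null | head -1)
        printf '%-24s %s\n' "$LIB" "$(pkg which -q "$F" 2>/dev/null || echo '?')"
    done
done
EOF
chmod +x /tmp/hidden-deps.sh
```

This is how a library-level requirement – rather than a package-level dependency – would surface before the loader complains about it.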

For a start let's check which sets the FreeBSD-libarchive and FreeBSD-openssl-lib packages belong to.

root@space:~ # pkg info -d FreeBSD-set-devel-15.0 | wc -l
      70

root@space:~ # pkg info -d FreeBSD-set-devel-15.0 | grep -v -e FreeBSD-libarchive-15 -e FreeBSD-openssl-lib-15 | wc -l
      70

root@space:~ # pkg info -d FreeBSD-set-optional-15.0 | wc -l
      93

root@space:~ # pkg info -d FreeBSD-set-optional-15.0 | grep -v -e FreeBSD-libarchive-15 -e FreeBSD-openssl-lib-15 | wc -l
      91

So they are both in the FreeBSD-set-optional set.

We can now lock the needed ones.

root@space:~ # pkg lock -y FreeBSD-libarchive
Locking FreeBSD-libarchive-15.0

root@space:~ # pkg lock -y FreeBSD-openssl-lib
Locking FreeBSD-openssl-lib-15.0

We should also create a ‘backup’ ZFS Boot Environment before we potentially break our system.

root@space:~ # bectl create backup
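If the cuts go too deep, that Boot Environment is the way back. A sketch of the recovery steps (bectl(8) commands for the real machine, written to a file here for reference):

```shell
cat > /tmp/rollback.sh << 'EOF'
#!/bin/sh
# Roll back to the pristine 'backup' ZFS Boot Environment.
bectl list            # confirm the 'backup' BE is still there
bectl activate backup # make it the default for the next boot
shutdown -r now       # reboot into the intact system
EOF
chmod +x /tmp/rollback.sh
```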

Now … if we start deleting these – we will come to this point below.

(...)
Installed packages to be REMOVED:
        FreeBSD-xz-lib: 15.0

Number of packages to be removed: 1
[1/1] Deinstalling FreeBSD-xz-lib-15.0...
[1/1] Deleting files for FreeBSD-xz-lib-15.0: 100%
ld-elf.so.1: Shared object "liblzma.so.5" not found, required by "libarchive.so.7"
ld-elf.so.1: Shared object "liblzma.so.5" not found, required by "libarchive.so.7"
ld-elf.so.1: Shared object "liblzma.so.5" not found, required by "libarchive.so.7"
(...)

Same with the other one.

(...)
Installed packages to be REMOVED:
        FreeBSD-libucl: 15.0

Number of packages to be removed: 1
[1/1] Deinstalling FreeBSD-libucl-15.0...
[1/1] Deleting files for FreeBSD-libucl-15.0: 100%
ld-elf.so.1: Shared object "libprivateucl.so.1" not found, required by "pkg"
ld-elf.so.1: Shared object "libprivateucl.so.1" not found, required by "pkg"
ld-elf.so.1: Shared object "libprivateucl.so.1" not found, required by "pkg"
(...)

… and yes – we do not want that.

Fortunately, pkg-static(8) will still work.

The disappointing thing about so-called experience is that you get it just after you needed it – and it's no different this time.

So – we also need to lock these:

  • FreeBSD-xz-lib
  • FreeBSD-libucl
  • FreeBSD-libcasper

Let's do that.

root@space:~ # pkg lock -y FreeBSD-xz-lib
Locking FreeBSD-xz-lib-15.0

root@space:~ # pkg lock -y FreeBSD-libucl
Locking FreeBSD-libucl-15.0

root@space:~ # pkg lock -y FreeBSD-libcasper
Locking FreeBSD-libcasper-15.0

Now – we can remove these sets and packages.

Let's first make sure that the lock really works.

root@space:~ # pkg delete -fy FreeBSD-libarchive
Checking integrity... done (0 conflicting)
The following package(s) are locked or vital and may not be removed:

        FreeBSD-libarchive

1 packages requested for removal: 1 locked, 0 missing

Yep. Works. Let's go hunting.

root@space:~ # pkg lock -l
Currently locked packages:
FreeBSD-libarchive-15.0
FreeBSD-libucl-15.0
FreeBSD-openssl-lib-15.0
FreeBSD-xz-lib-15.0
FreeBSD-libcasper-15.0

root@space:~ # pkg info -d FreeBSD-set-devel-15.0    | tr ':' ' ' | while read PKG; do pkg delete -fy ${PKG}; done

root@space:~ # pkg info -d FreeBSD-set-optional-15.0 | tr ':' ' ' | while read PKG; do pkg delete -fy ${PKG}; done

After these operations – with the needed locks in place – you should have a working FreeBSD 15.0-RELEASE system that takes about 150 MB of space.
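For reference, here is the whole trim procedure condensed into one script – written to a file rather than executed, since it should only ever run in a disposable VM and behind a fresh Boot Environment:

```shell
cat > /tmp/pkgbase-trim.sh << 'EOF'
#!/bin/sh
set -u
# Safety net first: a ZFS Boot Environment to roll back to.
bectl create backup
# Lock the packages pkg(8) itself needs - locked packages survive
# even a forced 'pkg delete -f'.
for P in FreeBSD-libarchive FreeBSD-openssl-lib \
         FreeBSD-xz-lib FreeBSD-libucl FreeBSD-libcasper; do
    pkg lock -y "$P"
done
# Drop everything pulled in by the devel and optional sets;
# '|| true' because deleting the locked packages fails on purpose.
for SET in FreeBSD-set-devel-15.0 FreeBSD-set-optional-15.0; do
    pkg info -d "$SET" | tr ':' ' ' | while read -r PKG; do
        pkg delete -fy $PKG || true
    done
done
EOF
chmod +x /tmp/pkgbase-trim.sh
```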

The downside? pkg(8) will still try to reinstall most – if not all – of the removed packages during an upgrade.

root@space:~ # pkg upgrade
Updating FreeBSD-ports repository catalogue...
Fetching meta.conf: 100%    179 B   0.2 k/s    00:01    
Fetching data: 100%   10 MiB 514.3 k/s    00:21    
Processing entries: 100%
FreeBSD-ports repository update completed. 36679 packages processed.
Updating FreeBSD-ports-kmods repository catalogue...
Fetching meta.conf: 100%    179 B   0.2 k/s    00:01    
Fetching data: 100%   35 KiB  35.7 k/s    00:01    
Processing entries: 100%
FreeBSD-ports-kmods repository update completed. 239 packages processed.
Updating FreeBSD-base repository catalogue...
Fetching meta.conf: 100%    179 B   0.2 k/s    00:01    
Fetching data: 100%   80 KiB  81.5 k/s    00:01    
Processing entries: 100%
FreeBSD-base repository update completed. 496 packages processed.
All repositories are up to date.
Updating database digests format: 100%
Checking for upgrades (9 candidates): 100%
Processing candidates (9 candidates): 100%
The following 79 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        FreeBSD-atf-dev: 15.0 [FreeBSD-base]
        FreeBSD-audit-dev: 15.0 [FreeBSD-base]
        FreeBSD-blocklist-dev: 15.0 [FreeBSD-base]
        FreeBSD-bluetooth-dev: 15.0 [FreeBSD-base]
        FreeBSD-bmake: 15.0 [FreeBSD-base]
        FreeBSD-bootloader-dev: 15.0 [FreeBSD-base]
        FreeBSD-bsnmp-dev: 15.0 [FreeBSD-base]
        FreeBSD-bzip2-dev: 15.0 [FreeBSD-base]
        FreeBSD-clang: 15.0 [FreeBSD-base]
        FreeBSD-clang-dev: 15.0 [FreeBSD-base]
        FreeBSD-clibs-dev: 15.0 [FreeBSD-base]
        FreeBSD-clibs-lib32: 15.0 [FreeBSD-base]
        FreeBSD-ctf: 15.0 [FreeBSD-base]
        FreeBSD-ctf-dev: 15.0 [FreeBSD-base]
        FreeBSD-ctf-lib: 15.0 [FreeBSD-base]
        FreeBSD-devmatch-dev: 15.0 [FreeBSD-base]
        FreeBSD-dtrace-dev: 15.0 [FreeBSD-base]
        FreeBSD-efi-tools-dev: 15.0 [FreeBSD-base]
        FreeBSD-fetch-dev: 15.0 [FreeBSD-base]
        FreeBSD-flua-dev: 15.0 [FreeBSD-base]
        FreeBSD-kerberos-dev: 15.0 [FreeBSD-base]
        FreeBSD-kyua: 15.0 [FreeBSD-base]
        FreeBSD-lib9p-dev: 15.0 [FreeBSD-base]
        FreeBSD-libarchive-dev: 15.0 [FreeBSD-base]
        FreeBSD-libbegemot-dev: 15.0 [FreeBSD-base]
        FreeBSD-libblocksruntime-dev: 15.0 [FreeBSD-base]
        FreeBSD-libbsdstat-dev: 15.0 [FreeBSD-base]
        FreeBSD-libcasper-dev: 15.0 [FreeBSD-base]
        FreeBSD-libcompat-dev: 15.0 [FreeBSD-base]
        FreeBSD-libcompiler_rt-dev: 15.0 [FreeBSD-base]
        FreeBSD-libcuse-dev: 15.0 [FreeBSD-base]
        FreeBSD-libdwarf-dev: 15.0 [FreeBSD-base]
        FreeBSD-libevent1-dev: 15.0 [FreeBSD-base]
        FreeBSD-libexecinfo-dev: 15.0 [FreeBSD-base]
        FreeBSD-libipt-dev: 15.0 [FreeBSD-base]
        FreeBSD-libldns-dev: 15.0 [FreeBSD-base]
        FreeBSD-libmagic-dev: 15.0 [FreeBSD-base]
        FreeBSD-libmilter-dev: 15.0 [FreeBSD-base]
        FreeBSD-libpathconv-dev: 15.0 [FreeBSD-base]
        FreeBSD-librpcsec_gss-dev: 15.0 [FreeBSD-base]
        FreeBSD-librss-dev: 15.0 [FreeBSD-base]
        FreeBSD-libsqlite3-dev: 15.0 [FreeBSD-base]
        FreeBSD-libthread_db-dev: 15.0 [FreeBSD-base]
        FreeBSD-libucl-dev: 15.0 [FreeBSD-base]
        FreeBSD-libvgl-dev: 15.0 [FreeBSD-base]
        FreeBSD-libvmmapi-dev: 15.0 [FreeBSD-base]
        FreeBSD-libyaml-dev: 15.0 [FreeBSD-base]
        FreeBSD-lld: 15.0 [FreeBSD-base]
        FreeBSD-lldb: 15.0 [FreeBSD-base]
        FreeBSD-lldb-dev: 15.0 [FreeBSD-base]
        FreeBSD-local-unbound-dev: 15.0 [FreeBSD-base]
        FreeBSD-mtree: 15.0 [FreeBSD-base]
        FreeBSD-natd-dev: 15.0 [FreeBSD-base]
        FreeBSD-ncurses-dev: 15.0 [FreeBSD-base]
        FreeBSD-netmap-dev: 15.0 [FreeBSD-base]
        FreeBSD-openssl-dev: 15.0p2 [FreeBSD-base]
        FreeBSD-pf-dev: 15.0 [FreeBSD-base]
        FreeBSD-pmc-dev: 15.0 [FreeBSD-base]
        FreeBSD-runtime-dev: 15.0 [FreeBSD-base]
        FreeBSD-set-devel: 15.0 [FreeBSD-base]
        FreeBSD-set-optional: 15.0 [FreeBSD-base]
        FreeBSD-smbutils-dev: 15.0 [FreeBSD-base]
        FreeBSD-sound-dev: 15.0 [FreeBSD-base]
        FreeBSD-ssh-dev: 15.0 [FreeBSD-base]
        FreeBSD-tcpd-dev: 15.0 [FreeBSD-base]
        FreeBSD-toolchain: 15.0 [FreeBSD-base]
        FreeBSD-toolchain-dev: 15.0 [FreeBSD-base]
        FreeBSD-ufs-dev: 15.0 [FreeBSD-base]
        FreeBSD-utilities-dev: 15.0 [FreeBSD-base]
        FreeBSD-xz-dev: 15.0 [FreeBSD-base]
        FreeBSD-yp: 15.0 [FreeBSD-base]
        FreeBSD-zfs-dev: 15.0 [FreeBSD-base]
        FreeBSD-zlib-dev: 15.0 [FreeBSD-base]

Installed packages to be UPGRADED:
        FreeBSD-devmatch: 15.0 -> 15.0p2 [FreeBSD-base]
        FreeBSD-kernel-generic: 15.0 -> 15.0p2 [FreeBSD-base]
        FreeBSD-openssl: 15.0 -> 15.0p2 [FreeBSD-base]
        FreeBSD-rescue: 15.0 -> 15.0p2 [FreeBSD-base]
        FreeBSD-runtime: 15.0 -> 15.0p2 [FreeBSD-base]
        FreeBSD-utilities: 15.0 -> 15.0p1 [FreeBSD-base]

Number of packages to be installed: 73
Number of packages to be upgraded: 6

The process will require 458 MiB more space.
190 MiB to be downloaded.

Proceed with this action? [y/N]: n

We can overcome that by removing base/FreeBSD-set-base's dependency on base/FreeBSD-set-devel.

This way pkg(8) will not try to reinstall base/FreeBSD-set-devel during the next upgrade.

Before we do that, we will back up the pkg(8) SQLite database file at /var/db/pkg/local.sqlite.

root@space:~ # cp /var/db/pkg/local.sqlite /var/db/pkg/local.sqlite.BACKUP

root@space:~ # pkg shell

sqlite> .header on

sqlite> .mode column

sqlite> .tables
annotation           pkg_annotation       pkg_provides       
categories           pkg_categories       pkg_requires       
config_files         pkg_conflicts        pkg_script         
deps                 pkg_directories      pkg_shlibs_provided
directories          pkg_groups           pkg_shlibs_required
files                pkg_licenses         pkg_users          
groups               pkg_lock             provides           
licenses             pkg_lock_pid         requires           
lua_script           pkg_lua_script       script             
option               pkg_option           shlibs             
option_desc          pkg_option_default   users              
packages             pkg_option_desc    

The two tables of interest here are deps and packages.

This is how the relation between these two tables looks in the DBeaver tool.

These are the dependencies.

sqlite> select * from deps where origin like '%-set-%';
origin                     name                  version  package_id
-------------------------  --------------------  -------  ----------
base/FreeBSD-set-devel     FreeBSD-set-devel     15.0     208       
base/FreeBSD-set-minimal   FreeBSD-set-minimal   15.0     208       
base/FreeBSD-set-optional  FreeBSD-set-optional  15.0     208       

sqlite> select origin,package_id from deps where origin = 'base/FreeBSD-set-devel';
origin                  package_id
----------------------  ----------
base/FreeBSD-set-devel  208       

sqlite> select id,name from packages where id = 208;
id   name            
---  ----------------
208  FreeBSD-set-base

Now we will DELETE the row that makes the base/FreeBSD-set-base set depend on base/FreeBSD-set-devel.

sqlite> delete from deps where origin = "base/FreeBSD-set-devel";

sqlite> .quit

It can also be done non-interactively, like this:

root@space: # echo 'delete from deps where origin = "base/FreeBSD-set-devel";' | pkg shell

Let's check how pkg upgrade behaves now.

root@space:~ # pkg upgrade
Updating FreeBSD-ports repository catalogue...
FreeBSD-ports repository is up to date.
Updating FreeBSD-ports-kmods repository catalogue...
FreeBSD-ports-kmods repository is up to date.
Updating FreeBSD-base repository catalogue...
FreeBSD-base repository is up to date.
All repositories are up to date.
Checking for upgrades (9 candidates): 100%
Processing candidates (9 candidates): 100%
The following 10 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        FreeBSD-bmake: 15.0 [FreeBSD-base]
        FreeBSD-ctf-lib: 15.0 [FreeBSD-base]
        FreeBSD-set-optional: 15.0 [FreeBSD-base]
        FreeBSD-yp: 15.0 [FreeBSD-base]

Installed packages to be UPGRADED:
        FreeBSD-devmatch: 15.0 -> 15.0p2 [FreeBSD-base]
        FreeBSD-kernel-generic: 15.0 -> 15.0p2 [FreeBSD-base]
        FreeBSD-openssl: 15.0 -> 15.0p2 [FreeBSD-base]
        FreeBSD-rescue: 15.0 -> 15.0p2 [FreeBSD-base]
        FreeBSD-runtime: 15.0 -> 15.0p2 [FreeBSD-base]
        FreeBSD-utilities: 15.0 -> 15.0p1 [FreeBSD-base]

Number of packages to be installed: 4
Number of packages to be upgraded: 6

The process will require 1 MiB more space.
63 MiB to be downloaded.

Proceed with this action? [y/N]: n

Better.

pkg(8) no longer wants to reinstall the base/FreeBSD-set-devel set and its packages.
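Note that the upgrade above still pulls base/FreeBSD-set-optional (plus FreeBSD-bmake, FreeBSD-ctf-lib and FreeBSD-yp) back in. The same database trick should apply to that dependency too – that is my assumption, not something tested in the article – so treat this SQL (written to a file here) the same way: test system only.

```shell
cat > /tmp/drop-optional-dep.sql << 'EOF'
-- Same trick for the optional set: drop the row that makes
-- base/FreeBSD-set-base depend on base/FreeBSD-set-optional.
delete from deps where origin = 'base/FreeBSD-set-optional';
EOF
# on the real system: pkg shell < /tmp/drop-optional-dep.sql
```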

You can also check other Less Known pkg(8) Features here.

So there you have it.

Minimal unsupported FreeBSD 15.0-RELEASE installation in PKGBASE fashion.

To be honest … I thought that PKGBASE would be more ‘modular’, to say the least … but it seems that packaging/upgrades/integrity are really its goals – not modularity … and in today's world of ‘gigabytes’ and ‘terabytes’ the disk space savings are not that important. It's 2026 … we no longer put an entire OS on a 3.5 inch floppy … and there is nowhere to read or write such a floppy anymore either.

Some additional commands showing space usage.

root@space:~ # df -h /
Filesystem            Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default    7.3G    150M    7.1G     2%    /

root@space:~ # bectl list
BE      Active Mountpoint Space Created
default NR     /          150M  2026-01-31 18:23

List of installed packages, sorted by size.

root@space:~ # pkg info -as | sort -k 2 -h
FreeBSD-set-base-15.0          0.00B
FreeBSD-powerd-15.0            23.3KiB
FreeBSD-certctl-15.0           26.1KiB
FreeBSD-nuageinit-15.0         40.8KiB
FreeBSD-devmatch-15.0          43.8KiB
FreeBSD-hyperv-tools-15.0      43.9KiB
FreeBSD-at-15.0                48.9KiB
FreeBSD-ufs-lib-15.0           50.5KiB
FreeBSD-bzip2-15.0             50.8KiB
FreeBSD-fwget-15.0             54.5KiB
FreeBSD-resolvconf-15.0        56.4KiB
FreeBSD-newsyslog-15.0         57.4KiB
FreeBSD-periodic-15.0          61.6KiB
FreeBSD-bzip2-lib-15.0         80.7KiB
FreeBSD-syslogd-15.0           82.5KiB
FreeBSD-devd-15.0              87.1KiB
FreeBSD-cron-15.0              91.2KiB
FreeBSD-zlib-15.0              99.4KiB
FreeBSD-fetch-15.0             110KiB
FreeBSD-libcasper-15.0         142KiB
FreeBSD-libucl-15.0            142KiB
FreeBSD-dhclient-15.0          150KiB
FreeBSD-efi-tools-15.0         152KiB
FreeBSD-xz-lib-15.0            198KiB
FreeBSD-rc-15.0                406KiB
FreeBSD-geom-15.0              510KiB
FreeBSD-ncurses-15.0           527KiB
FreeBSD-pkg-bootstrap-15.0     579KiB
FreeBSD-ppp-15.0               579KiB
FreeBSD-ncurses-lib-15.0       586KiB
FreeBSD-ufs-15.0               619KiB
FreeBSD-mandoc-15.0            638KiB
FreeBSD-vi-15.0                775KiB
FreeBSD-zoneinfo-15.0          828KiB
FreeBSD-libarchive-15.0        885KiB
FreeBSD-caroot-15.0            1.03MiB
FreeBSD-zfs-15.0               1.25MiB
FreeBSD-vt-data-15.0           1.54MiB
FreeBSD-wpa-15.0               1.59MiB
FreeBSD-kernel-man-15.0        2.63MiB
FreeBSD-clibs-15.0             3.79MiB
FreeBSD-zfs-lib-15.0           4.58MiB
FreeBSD-bootloader-15.0        6.50MiB
FreeBSD-openssl-lib-15.0       7.26MiB
FreeBSD-runtime-15.0           8.81MiB
FreeBSD-firmware-iwm-15.0      13.8MiB
FreeBSD-rescue-15.0            19.0MiB
FreeBSD-locales-15.0           24.3MiB
FreeBSD-utilities-15.0         48.6MiB
pkg-2.4.2                      53.4MiB
FreeBSD-kernel-generic-15.0    153MiB

One of the biggest packages is … pkg(8) itself – and about 70% of that space is the static pkg-static(8) binary.

root@space:~ # pkg info -l pkg | sed 1d | xargs du -smc | sort -n | tail -10
1       /usr/local/share/man/man8/pkg-upgrade.8.gz
1       /usr/local/share/man/man8/pkg-version.8.gz
1       /usr/local/share/man/man8/pkg-which.8.gz
1       /usr/local/share/man/man8/pkg.8.gz
1       /usr/local/share/zsh/site-functions/_pkg
2       /usr/local/lib/libpkg.a
2       /usr/local/lib/libpkg.so.4
2       /usr/local/sbin/pkg
13      /usr/local/sbin/pkg-static
19      total

Let's check the largest one – the FreeBSD kernel package.

This system currently has only the zfs.ko kernel module loaded, for ZFS support.

root@space:~ # kldstat
Id Refs Address                Size Name
 1   10 0xffffffff80200000  1f4daa0 kernel
 2    1 0xffffffff8214e000   620c10 zfs.ko

Let's check the largest kernel modules we can live without.

root@space:~ # cd /boot/kernel

root@space:/boot/kernel # du -skc qat*     \
                                  iw*      \
                                  if_*     \
                                  pmspcv*  \
                                  ispfw*   \
                                  sfxge*   \
                                  ice_ddp* \
                                  hpt27xx* \
                                  cam*     \
                                  ipl*     \
                                  ocs_fc*  \
                                  sctp*
577     qat_200xx_fw.ko
337     qat_4xxx_fw.ko
185     qat_api.ko
69      qat_c2xxx.ko
157     qat_c2xxxfw.ko
577     qat_c3xxx_fw.ko
1413    qat_c4xxx_fw.ko
829     qat_c62x_fw.ko
201     qat_common.ko
625     qat_dh895xcc_fw.ko
105     qat_hw.ko
57      qat.ko
161     iw_cxgbe.ko
133     iwi_bss.ko
129     iwi_ibss.ko
133     iwi_monitor.ko
597     iwm3160fw.ko
537     iwm3168fw.ko
637     iwm7260fw.ko
565     iwm7265Dfw.ko
693     iwm7265fw.ko
1089    iwm8000Cfw.ko
953     iwm8265fw.ko
1385    iwm9000fw.ko
1385    iwm9260fw.ko
233     iwn1000fw.ko
233     iwn100fw.ko
425     iwn105fw.ko
437     iwn135fw.ko
433     iwn2000fw.ko
441     iwn2030fw.ko
109     iwn4965fw.ko
233     iwn5000fw.ko
229     iwn5150fw.ko
281     iwn6000fw.ko
413     iwn6000g2afw.ko
417     iwn6000g2bfw.ko
289     iwn6050fw.ko
41      if_ae.ko
49      if_age.ko
57      if_alc.ko
49      if_ale.ko
9       if_ath.ko
29      if_aue.ko
33      if_axe.ko
33      if_axge.ko
141     if_axp.ko
229     if_bce.ko
37      if_bfe.ko
85      if_bge.ko
221     if_bnxt.ko
49      if_bridge.ko
117     if_bwi.ko
181     if_bwn.ko
1269    if_bxe.ko
45      if_cas.ko
5       if_cc.ko
5       if_ccv.ko
37      if_cdce.ko
29      if_cdceem.ko
25      if_cue.ko
173     if_cxgb.ko
469     if_cxgbe.ko
45      if_cxgbev.ko
5       if_cxl.ko
5       if_cxlv.ko
53      if_dc.ko
21      if_disc.ko
21      if_edsc.ko
237     if_em.ko
145     if_ena.ko
25      if_enc.ko
57      if_enic.ko
25      if_epair.ko
45      if_et.ko
29      if_fwe.ko
37      if_fwip.ko
49      if_fxp.ko
37      if_gem.ko
33      if_gif.ko
41      if_gre.ko
89      if_gve.ko
97      if_iavf.ko
25      if_ic.ko
377     if_ice.ko
1       if_igb.ko
73      if_igc.ko
25      if_infiniband.ko
25      if_ipheth.ko
57      if_ipw.ko
69      if_iwi.ko
781     if_iwlwifi.ko
133     if_iwm.ko
109     if_iwn.ko
185     if_ix.ko
209     if_ixl.ko
1       if_ixlv.ko
157     if_ixv.ko
49      if_jme.ko
29      if_kue.ko
61      if_lagg.ko
33      if_le.ko
33      if_lge.ko
813     if_lio.ko
61      if_malo.ko
93      if_mana.ko
29      if_me.ko
25      if_mgb.ko
33      if_mos.ko
57      if_msk.ko
37      if_muge.ko
105     if_mwl.ko
69      if_mxge.ko
33      if_my.ko
49      if_nfe.ko
41      if_nge.ko
25      if_ntb.ko
113     if_oce.ko
65      if_otus.ko
53      if_ovpn.ko
1317    if_qlnxe.ko
1221    if_qlnxev.ko
69      if_qlxgb.ko
1921    if_qlxgbe.ko
85      if_qlxge.ko
121     if_ral.ko
57      if_re.ko
37      if_rl.ko
69      if_rsu.ko
509     if_rtw88.ko
1441    if_rtw89.ko
65      if_rtwn_pci.ko
97      if_rtwn_usb.ko
29      if_rue.ko
65      if_rum.ko
101     if_run.ko
37      if_sge.ko
37      if_sis.ko
49      if_sk.ko
37      if_smsc.ko
37      if_ste.ko
33      if_stf.ko
37      if_stge.ko
33      if_sume.ko
133     if_ti.ko
45      if_tuntap.ko
53      if_uath.ko
29      if_udav.ko
49      if_upgt.ko
53      if_ural.ko
45      if_ure.ko
33      if_urndis.ko
69      if_urtw.ko
41      if_vge.ko
37      if_vlan.ko
41      if_vmx.ko
41      if_vr.ko
37      if_vte.ko
61      if_vtnet.ko
49      if_vxlan.ko
97      if_wg.ko
93      if_wpi.ko
45      if_xl.ko
69      if_zyd.ko
2033    pmspcv.ko
1329    ispfw.ko
565     sfxge.ko
165     ice_ddp.ko
601     hpt27xx.ko
529     cam.ko
453     ipl.ko
517     ocs_fc.ko
477     sctp.ko
41555   total

That is another ~40 MB that can be removed, assuming you do not need any of these kernel modules.
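A sketch of the actual module removal (same warnings apply; the glob list mirrors the du(1) invocation above). Moving the files aside instead of deleting them keeps the step reversible – written to a file here, not executed:

```shell
cat > /tmp/trim-kmods.sh << 'EOF'
#!/bin/sh
set -eu
# Move - not delete - the unused modules, so they can be restored.
# Careful with the globs: if_* also matches drivers you may need
# (if_vtnet for VMs, if_re/if_em for common NICs, and so on).
mkdir -p /boot/kernel.removed
cd /boot/kernel
mv qat* iw* if_* pmspcv* ispfw* sfxge* ice_ddp* \
   hpt27xx* cam* ipl* ocs_fc* sctp* /boot/kernel.removed/
EOF
chmod +x /tmp/trim-kmods.sh
```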

That means we could get even under 100 MB by removing the 40 MB of unused kernel modules and the 13 MB pkg-static(8) binary.

Also keep in mind that you have the entire static FreeBSD rescue system available under the /rescue directory.

Not sure how (if at all) this helps – but still – I wanted to share.

Let me know – if you try it – whether I missed any additional dependencies that should be kept.

EOF
Top

maintenance script changes

Post by Dan Langille via Dan Langille's Other Diary »

After I wrote the script to put up a maintenance page for my websites, I came up with two more things to display on the page:

  • Timestamp for start of maintenance
  • Reason for maintenance

In this post:

  • FreeBSD 15.0

The new script

The new script is invoked like this:

[14:11 r720-02-proxy01 dvl ~] % ~/bin/offline dev.freshports.org "$(date -R -v +2H)" "Offline for database update"
<html>
<head>
<title>Error 503 Service Unavailable</title>
<style>
      body { text-align: center; padding: 20px; font: 25px Helvetica, sans-serif; color: #efe8e8; background-color:#2e2929}
      @media (min-width: 768px){
        body{ padding-top: 150px; }
      }
      h1 { font-size: 50px; }
      h2 { font-size: 35px; }
      h3 { font-size: 28px; }
      article { display: block; text-align: left; max-width: 650px; margin: 0 auto; }
      td { font-size: 25px; line-height: 1.5; margin: 20px 0; }
    </style>
</head>
<body>
<h1>Server is offline for maintenance</h1>
<h2>503 Service Unavailable</h2>

<h3>Offline for database update</h3>
<table>
<tr>
<td>Started at:</td><td>Wed, 18 Mar 2026 14:12:28 +0000</td>
</tr>
<tr>
<td>Please retry after:</td><td> Wed, 18 Mar 2026 16:12:28 +0000</td>
</tr>
</table>
</body>
</html>
overwrite /usr/local/www/offline/dev.freshports.org-maintenance.html? (y/n [n]) 
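The second argument leans on the BSD-specific -v adjustment flag of date(1); GNU date spells the same offset differently. A portable sketch that tries both:

```shell
# Two hours from now in RFC 2822 format: BSD date's -v flag first,
# falling back to GNU date's -d syntax on Linux hosts.
retryafter=$(date -R -v +2H 2>/dev/null || date -R -d '+2 hours')
echo "$retryafter"
```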

The original script is in my previous post. The new script is here:

[14:11 r720-02-proxy01 dvl ~] % cat ~/bin/offline                   
#!/bin/sh

# Usage: ~/bin/offline foo.bar "$(date -R -v +2H)"
# see -v option on man date: [y|m|w|d|H|M|S]

website=$1
retryafter=$2
reason=$3

OFFLINE_DIR="/usr/local/www/offline"

# slight sanitization
hostname=$(basename "$website")

start=$(date -R)

tmpfile=$(mktemp /tmp/offline.XXXXXX)

cat << EOF >> $tmpfile
<html>
<head>
<title>Error 503 Service Unavailable</title>
<style>
      body { text-align: center; padding: 20px; font: 25px Helvetica, sans-serif; color: #efe8e8; background-color:#2e2929}
      @media (min-width: 768px){
        body{ padding-top: 150px; }
      }
      h1 { font-size: 50px; }
      h2 { font-size: 35px; }
      h3 { font-size: 28px; }
      article { display: block; text-align: left; max-width: 650px; margin: 0 auto; }
      td { font-size: 25px; line-height: 1.5; margin: 20px 0; }
    </style>
</head>
<body>
<h1>Server is offline for maintenance</h1>
<h2>503 Service Unavailable</h2>

<h3>${reason}</h3>
<table>
<tr>
<td>Started at:</td><td>${start}</td>
</tr>
<tr>
<td>Please retry after:</td><td> ${retryafter}</td>
</tr>
</table>
</body>
</html>
EOF

cat $tmpfile

sudo chown root:wheel $tmpfile
sudo chmod 0644       $tmpfile

sudo mv -i $tmpfile "${OFFLINE_DIR}/${hostname}-maintenance.html"

# Remove the tmpfile if we answered no to the mv overwrite prompt
sudo rm -f $tmpfile
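The post does not show the reverse step for bringing a site back online; presumably removing the generated page is enough. A hypothetical companion script (the name "online" and the removal-is-enough assumption are mine, not from the post):

```shell
cat > /tmp/online << 'EOF'
#!/bin/sh
# Usage: ~/bin/online foo.bar
# Remove the maintenance page so the proxy stops serving the 503.
website=$1
OFFLINE_DIR="/usr/local/www/offline"
hostname=$(basename "$website")
sudo rm -v "${OFFLINE_DIR}/${hostname}-maintenance.html"
EOF
chmod +x /tmp/online
```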

The results

The resulting webpage looks something like this (what you see in between the horizontal lines):


Server is offline for maintenance

503 Service Unavailable

Offline for database update

Started at: Wed, 18 Mar 2026 14:12:28 +0000
Please retry after: Wed, 18 Mar 2026 16:12:28 +0000

Hope that helps.

Top

Expat 2.7.5 released, includes security fixes

Post by Sebastian Pipping via Hartwork Blog »

For readers new to Expat:

libexpat is a fast streaming XML parser. Alongside libxml2, Expat is one of the most widely used software libre XML parsers written in C, specifically C99. It is cross-platform and licensed under the MIT license.

Expat 2.7.5 was released earlier today. The key motivation for cutting a release now is a set of three security fixes:

The first NULL pointer dereference was reported and fixed by Francesco Bertolaccini of Trail of Bits with help from their AI tool Buttercup.

The infinite loop denial of service issue was uncovered by Google ClusterFuzz through continuously fuzzing with xml_lpm_fuzzer, which Mark Brand of Project Zero and I teamed up on in the past for Expat 2.7.0. Berkay Eren Ürün and I teamed up for analysis and a fix under a 90 day disclosure deadline.

The second NULL pointer dereference was reported by Christian Ng, and he and I teamed up on a fix.

So much for the fixed vulnerabilities. Three known security issues remain unfixed in libexpat, and there is a GitHub issue listing them for anyone interested.

Thanks to everyone who contributed to this release of Expat!

For more details about this release, please check out the change log.

If you maintain Expat packaging, a bundled copy of Expat, or a pinned version of Expat, please update to version 2.7.5. Thank you!

Sebastian Pipping

Top

Build a NAS using FreeBSD on a Raspberry Pi

Post by FreeBSD Foundation via FreeBSD Foundation »

FreeBSD runs on this…

and FreeBSD runs on this…!

It’s easy to get FreeBSD running on a Raspberry Pi. It’s easy to manage multiple hard drives with ZFS. So we were wondering if it’s possible to make a simple NAS out of a Pi…

I’m using the latest 15.0 Release here — the project automatically generates images for the Pi, which makes life much easier. Download the FreeBSD-15.0-RELEASE-arm64-aarch64-RPI.img.xz image from here and transfer it to a micro SD card for the Pi. If you’re already au fait with putting images onto physical media, I’ll not say another word. If you’re new to it though, the ‘dd‘ command on any Unix-a-like operating system is your friend, or a GUI tool like Balena Etcher will also suffice.
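For the uninitiated, the whole "decompress and write" step is a single pipeline. A minimal sketch follows; the device name is the part to get right (on FreeBSD the card usually appears as /dev/da0, and dd will destroy whatever that device holds). The snippet deliberately writes to a scratch file rather than a real device so it can be run harmlessly as a demonstration; substitute your card's device in real use.

```shell
# Sketch of writing the Pi image; all file names here are stand-ins.
IMG=FreeBSD-15.0-RELEASE-arm64-aarch64-RPI.img   # decompressed image name
DEV=./sdcard.img                                 # stand-in for e.g. /dev/da0

printf 'demo-image-contents' > "$IMG"            # stand-in for the real download
xz -z -k "$IMG"                                  # the download ships as .img.xz

# The actual recipe: decompress and stream onto the card in one pipe.
xz -dc "$IMG.xz" | dd of="$DEV" bs=64k
cmp "$IMG" "$DEV" && echo "image written and verified"
```

With a real card the same pipe is run as root, with of= pointing at the card's device node and a larger block size such as bs=1m.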

Before we boot the Pi, there are a few things we need to be aware of. We all know FreeBSD as a rock solid server operating system, although over the last year or so there has been a lot of work going on to bring that stability to laptops and desktops. WiFi support is getting better, but as it’s not historically the connectivity of choice for servers, the Pi’s WiFi is currently not supported. That’s all to say, you’re going to want to connect your Pi to ethernet.

Secondly, monitoring the Pi’s boot before we have network connectivity is helpful — so plug a monitor into the Pi’s HDMI port. The Pi I have here (a 4b) didn’t want to display anything initially, and I found (from this helpful Wiki page) that I had to edit config.txt on the boot SD card. Before we do that, one final caveat — FreeBSD doesn’t support the latest Pi 5 very well. For now it’s better to stick with the Pi 4.

The EFI partition is MS-DOS format; mount it (if your OS hasn’t mounted it automatically) and change the [pi] section in config.txt to look like this:

[pi4]
hdmi_safe=0
armstub=armstub8-gic.bin
max_framebuffers=2
hdmi_force_hotplug=1
hdmi_group=1
hdmi_drive=2
hdmi_mode=16

Provided you’ve connected the ethernet to your network, and you have DHCP, it will get an IP address automatically when it boots.

By default the image has a ‘freebsd’ account, with a password of ‘freebsd’, and a root account with a password of … you get the idea. You can only ssh in with the freebsd account though, and su to root.

Let’s build a NAS

I was curious to know if I could make use of a couple of old hard drives I had sitting around, and if I could turn the Pi into a NAS by sharing files over the network. 

Not a recommendation for production

Plugging the USB caddy in and tailing /var/log/messages:

Mar  6 17:04:25 generic kernel: ugen0.4: <ASMedia ASM1156> at usbus0
Mar  6 17:04:25 generic kernel: umass0 on uhub0
Mar  6 17:04:25 generic kernel: umass0: <ASMedia ASM1156, class 0/0, rev 3.20/1.00, addr 3> on usbus0
Mar  6 17:04:25 generic kernel: umass0:  SCSI over Bulk-Only; quirks = 0x0
Mar  6 17:04:25 generic kernel: umass0:0:0: Attached to scbus0
Mar  6 17:04:25 generic kernel: da0 at umass-sim0 bus 0 scbus0 target 0 lun 0
Mar  6 17:04:25 generic kernel: da0: <ASMedia ASM1156 0> Fixed Direct Access SPC-4 SCSI device
Mar  6 17:04:25 generic kernel: da0: Serial Number AAAABBBB0490
Mar  6 17:04:25 generic kernel: da0: 400.000MB/s transfers
Mar  6 17:04:25 generic kernel: da0: 3815447MB (7814037168 512 byte sectors)
Mar  6 17:04:25 generic kernel: da0: quirks=0x2<NO_6_BYTE>
Mar  6 17:04:25 generic kernel: da1 at umass-sim0 bus 0 scbus0 target 0 lun 1
Mar  6 17:04:25 generic kernel: da1: <ASMedia ASM1156 0> Fixed Direct Access SPC-4 SCSI device
Mar  6 17:04:25 generic kernel: da1: Serial Number AAAABBBB0490
Mar  6 17:04:25 generic kernel: da1: 400.000MB/s transfers
Mar  6 17:04:25 generic kernel: da1: 3815447MB (7814037168 512 byte sectors)
Mar  6 17:04:25 generic kernel: da1: quirks=0x2<NO_6_BYTE>

Let’s create our storage pool with those two drives. I think we’ll mirror them:

root@generic:~ # zpool create -f store mirror da0 da1
root@generic:~ # zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
store   384K  3.51T    96K  /store

Yes, it’s that easy!

We’ll want that zpool to come back on reboot, so let’s ensure zfs loads:

root@generic:~ # service zfs enable
zfs enabled in /etc/rc.conf

If we want our storage to be Network Attached, the obvious solution is Samba. Network shares over SMB will be available to all manner of operating systems, including Windows and macOS. 

Because this is a fresh installation, the pkg command itself isn’t yet installed. But helpfully, it can bootstrap itself:

root@generic:~ # pkg update
The package management tool is not yet installed on your system.
Do you want to fetch and install it now? [y/N]: y
Bootstrapping pkg from pkg+https://pkg.FreeBSD.org/FreeBSD:15:aarch64/quarterly, please wait...

Now we can install Samba. I’ve chosen the latest version available:

root@generic:~ # pkg install -y samba423
Updating FreeBSD-ports repository catalogue...
FreeBSD-ports repository is up to date.
Updating FreeBSD-ports-kmods repository catalogue...
FreeBSD-ports-kmods repository is up to date.
All repositories are up to date.
Updating database digests format: 100%
The following 82 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
[snip]

Give that a couple of minutes, depending on your network speed.

Once it’s installed, we’ll need to create Samba’s configuration file, /usr/local/etc/smb4.conf (I just followed the handbook):

[global]
workgroup = FOUNDATION
server string = Pi Samba Version %v
netbios name = RaspberryPi
wins support = Yes
security = user
passdb backend = tdbsam

[store]
path = /store
valid users = freebsd
writable  = yes
browsable = yes
read only = no
guest ok = no
public = no
create mask = 0666
directory mask = 0755

Run testparm to check the configuration is ok:

root@generic:~ # testparm
Load smb config files from /usr/local/etc/smb4.conf
Loaded services file OK.
Weak crypto is allowed by GnuTLS (e.g. NTLM as a compatibility fallback)

Server role: ROLE_STANDALONE

Press enter to see a dump of your service definitions

# Global parameters
[global]
        netbios name = RASPBERRYPI
        security = USER
        server string = Pi Samba Version %v
        wins support = Yes
        workgroup = FOUNDATION
        idmap config * : backend = tdb


[store]
        create mask = 0666
        path = /store
        read only = No
        valid users = freebsd

If you restrict the share to specific user(s), you’ll need to create them for Samba with pdbedit:

root@generic:~ # pdbedit -a -u freebsd
new password:
retype new password:
Unix username:        freebsd
NT username:
Account Flags:        [U          ]
User SID:             S-1-5-21-2694234615-196936691-1629692636-1000
Primary Group SID:    S-1-5-21-2694234615-196936691-1629692636-513
Full Name:            FreeBSD User
Home Directory:       \\RASPBERRYPI\freebsd

To have the Pi appear on the mDNS/Bonjour .local network, I also enable and run the avahi-daemon:

root@generic:~ # service dbus enable && service avahi-daemon enable && service dbus start && service avahi-daemon start
dbus enabled in /etc/rc.conf
avahi_daemon enabled in /etc/rc.conf
Starting dbus.
Starting avahi-daemon.

That’s probably not pertinent in a datacentre. But then again, neither is a storage pool attached to a Raspberry Pi by USB… 😉

Everything looks good enough to enable the share:

root@generic:~ # service samba_server enable
samba_server enabled in /etc/rc.conf
root@generic:~ # service samba_server start
Performing sanity check on Samba configuration: OK
Starting nmbd.
Starting smbd.

Lastly, on a Pi 4 (which this example uses), I run sysrc ntpdate_enable="YES" because there’s no real-time clock. You can buy RTC modules for the older Pis, but see the previous paragraph.

Let’s see if a computer on the network can see it:

Sorted!
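For command-line clients the same share can be mounted directly. On a FreeBSD or macOS machine that would look something like the line below (the hostname and mountpoint are my assumptions; you'll be prompted for the Samba password set with pdbedit):

```
% sudo mount_smbfs //freebsd@raspberrypi/store /mnt
```

Windows clients can instead browse to \\RASPBERRYPI\store.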

Closing

If we want to disconnect the pool it’s also very simple:

root@generic:~ # service samba_server stop
Stopping smbd.
Waiting for PIDS: 6629.
Stopping nmbd.
Waiting for PIDS: 6622.
root@generic:~ # zpool export store
root@generic:~ # zpool status
no pools available

The USB caddy can now be pulled from the running Pi, and added back later as required:

root@generic:~ # zpool list
no pools available
root@generic:~ # zpool import store
root@generic:~ # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
store  3.62T   596K  3.62T        -         -     0%     0%  1.00x    ONLINE  -

It’s back!

This was a quick look at how easy it is to use FreeBSD on a Raspberry Pi, which is a great little device for testing ideas, or learning some operating system fundamentals. It was also a demonstration of just how easy it is to use ZFS at a basic level — with volume and filesystem management built in. Hopefully it’s whetted your appetite to go experiment a little too.


We’re dropping new posts — and videos — for technical topics regularly. So make sure you’re subscribed to the YouTube channel and following this feed in your favorite RSS reader. There’s also the newsletter, if you’d like to receive updates by email.

We’d like this content series to be interactive too — so what would you like to see us cover? What FreeBSD questions can we help you tackle?  Get in touch with your ideas.

The post Build a NAS using FreeBSD on a Raspberry Pi first appeared on FreeBSD Foundation.

Top

Ritter-TD

Post by Bernd Dau via Zockertown: Nerten News »

⚔ Ritter-TD ⚔

Casual tower defense with a fantasy look: a single file, playable offline

Goblins march out of the cave at the left edge of the map and try to reach the castle on the right. Each intruder that gets through costs one ❤ heart. Build towers on the grassland (not on the path), shoot down the goblins, and collect gold for new towers and upgrades.

The game now has some real depth:

  Frost synergy + the Runenbrecher (rune breaker) as a counter
  - a Springer (jumper) that freezes towers
  - a Baumeister (builder) that opens up a real second front

 

Start

Open the file ritter-td.html in a browser (Firefox / Chromium). No server, no downloads: everything runs locally. Audio starts after the first click or keypress.

 

Top

ddclient 4 changes

Post by Dan Langille via Dan Langille's Other Diary »

After a 4-hour power outage today (crews were working on the power lines), my home IP address changed, perhaps for the first time in over a year. I also noticed ddclient was no longer installed on my host. This blog post outlines changes from the original article.

In this post:

Of note, since the previous post:

  • the default location for the configuration file (ddclient.conf) has moved from /usr/local/etc/ to /usr/local/etc/ddclient/
  • The old cache file is no longer compatible: Mar 11 17:58:23 gw01 ddclient[8133]: WARNING: [file /var/tmp/ddclient.cache, line 1]> program version mismatch; ignoring
  • Command line arguments have changed from single-dash to double-dash (e.g. -ip is now --ip). At first I thought my original documentation was wrong, because I was reading it to get things back up and running.
  • The configuration file format has changed. The configuration for a given host now needs a trailing \ on each line. For example, this worked with ddclient 3:

    use=ip
    protocol=dyndns2
    server=members.dyndns.org
    login=foo
    password=bar
    bast.example.net
    

    With ddclient 4, this is what works for me:

    use=ip \
    protocol=dyndns2 \
    server=members.dyndns.org \
    login=foo \
    password=bar \
    bast.example.net
    

    Side note: I’m no longer using use=ip – that is merely a formatting example.

    I was getting errors such as: update failed: update RR is outside zone (NOTZONE) in my named logs. ddclient itself was showing ddclient: failed closing stdin … error running command when using nsupdate

    FYI, I opened a ddclient issue about that.

I had to update my Ansible playbook to install the configuration file to the new location. Then I ran my update:

[18:39 ansible root /usr/local/etc/ansible] # ansible-playbook gateway.yml --limit=gw01.int.unixathome.org --tags=ddclient


PLAY [gateways] *********************************************************************************************************************************************************************************************************

TASK [Gathering Facts] **************************************************************************************************************************************************************************************************
ok: [gw01.int.unixathome.org]

TASK [ddclient : Install on Debian] *************************************************************************************************************************************************************************************
skipping: [gw01.int.unixathome.org]

TASK [ddclient : Install on RedHat] *************************************************************************************************************************************************************************************
skipping: [gw01.int.unixathome.org]

TASK [ddclient : Install on FreeBSD] ************************************************************************************************************************************************************************************
included: /usr/local/etc/ansible/roles/ddclient/tasks/setup-FreeBSD.yml for gw01.int.unixathome.org

TASK [ddclient : Install ddclient.] *************************************************************************************************************************************************************************************
ok: [gw01.int.unixathome.org]

TASK [ddclient : ansible.builtin.set_fact] ******************************************************************************************************************************************************************************
ok: [gw01.int.unixathome.org]

TASK [ddclient : Install from source] ***********************************************************************************************************************************************************************************
skipping: [gw01.int.unixathome.org]

TASK [ddclient : Configure ddclient] ************************************************************************************************************************************************************************************
included: /usr/local/etc/ansible/roles/ddclient/tasks/configure.yml for gw01.int.unixathome.org

TASK [ddclient : Create the ddclient configuration folder] **************************************************************************************************************************************************************
ok: [gw01.int.unixathome.org]

TASK [ddclient : Create configuration file] *****************************************************************************************************************************************************************************
changed: [gw01.int.unixathome.org]

TASK [ddclient : Create the cache directory] ****************************************************************************************************************************************************************************
ok: [gw01.int.unixathome.org]

TASK [ddclient : Create the systemd file] *******************************************************************************************************************************************************************************
skipping: [gw01.int.unixathome.org]

TASK [ddclient : List some items] ***************************************************************************************************************************************************************************************
ok: [gw01.int.unixathome.org] => {
    "msg": [
        "__ddclient_configuration_directory: /usr/local/etc/ddclient",
        "__ddclient_configuration_location:  /usr/local/etc/ddclient/ddclient.conf",
        "__ddclient_cache_director:          /var/cache/ddclient"
    ]
}

PLAY RECAP **************************************************************************************************************************************************************************************************************
gw01.int.unixathome.org    : ok=9    changed=1    unreachable=0    failed=0    skipped=4    rescued=0    ignored=0   

Exit hooks

I also know the exit hook needs to be updated, because of the command line argument changes. That diff is:

[18:47 ansible root /usr/local/etc/ansible] # svn di host_vars/gw01.int.unixathome.org/freebsd-netif.yaml
Index: host_vars/gw01.int.unixathome.org/freebsd-netif.yaml
===================================================================
--- host_vars/gw01.int.unixathome.org/freebsd-netif.yaml	(revision 3015)
+++ host_vars/gw01.int.unixathome.org/freebsd-netif.yaml	(working copy)
@@ -150,7 +150,7 @@
     - /usr/local/sbin/he-notify.sh
     -    
     - "# configure dynamic dns"
-    - /usr/local/sbin/ddclient -syslog -use=ip -ip="$new_ip_address" -verbose -daemon=0
+    - /usr/local/sbin/ddclient --syslog --use=ip --ip="$new_ip_address" --verbose --daemon=0
     -
     - "# we're done: log it."
     - 'logger "dhclient-exit-hooks is done"'
[18:47 ansible root /usr/local/etc/ansible] # 

That update was pushed to the host via:

[18:47 ansible root /usr/local/etc/ansible] # ansible-playbook gateway.yml --limit=gw01.int.unixathome.org --tags=netif


PLAY [gateways] *********************************************************************************************************************************************************************************************************

TASK [Gathering Facts] **************************************************************************************************************************************************************************************************
ok: [gw01.int.unixathome.org]

TASK [freebsd-netif : install /etc/rc.conf.d/netif] *********************************************************************************************************************************************************************
ok: [gw01.int.unixathome.org]

TASK [freebsd-netif : install scripts] **********************************************************************************************************************************************************************************
ok: [gw01.int.unixathome.org] => (item={'name': 'he-net.sh', 'destdir': '/usr/local/sbin'})
ok: [gw01.int.unixathome.org] => (item={'name': 'he-notify.sh', 'destdir': '/usr/local/sbin'})

TASK [freebsd-netif : install /etc/dhclient-exit-hooks] *****************************************************************************************************************************************************************
changed: [gw01.int.unixathome.org]

TASK [freebsd-netif : List the cloned_interfaces] ***********************************************************************************************************************************************************************
ok: [gw01.int.unixathome.org] => {
    "msg": [
        "cloned_interfaces: vlan2 vlan3 vlan4 vlan7 vlan8 vlan219 tun2 gif0",
        "ip4: [{'ip': 'DHCP', 'nic': 'igc0', 'desc': 'Default'}]"
    ]
}

PLAY RECAP **************************************************************************************************************************************************************************************************************
gw01.int.unixathome.org    : ok=5    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

The cache file

To stop the compatibility messages, I did this:

[18:50 gw01 dvl /usr/local/etc] % sudo mv /var/tmp/ddclient.cache /var/tmp/ddclient.cache-ddclient-3.11.2

Launching ddclient

To run ddclient, I did this:

[17:54 gw01 dvl ~] % sudo ddclient --syslog --use=ip --ip=203.0.113.120 --daemon=0 --verbose

where 203.0.113.120 is my current IP address (that’s not really what it was).

Now it’s running:

[19:02 gw01 dvl /usr/local/etc] % ps auwwx | grep ddclient     
root    10788   0.1  0.1   32628 19100  -  Ss   18:07     0:00.65 ddclient - sleeping for 60 seconds (perl)
dvl     17227   0.0  0.0   14164  2680  0  S+   19:02     0:00.00 grep ddclient

And logging:

Mar 11 19:05:51 gw01 ddclient[10788]: SUCCESS: [MyBSDHost]> skipped update because IPv4 address is already set to 203.0.113.120
Mar 11 19:05:51 gw01 ddclient[10788]: SUCCESS: [bast.example.net]> skipped update because IPv4 address is already set to 203.0.113.120
Mar 11 19:05:51 gw01 ddclient[10788]: SUCCESS: [unixathome.example.net]> skipped update because IPv4 address is already set to 203.0.113.120

And…

I won’t know for sure if this is all working until the next time my IP address changes.

I still don’t know how ddclient got removed in the first place.

Top

Valuable News – 2026/03/16

Post by Vermaden via 𝚟𝚎𝚛𝚖𝚊𝚍𝚎𝚗 »

The Valuable News weekly series is dedicated to providing a summary of news, articles, and other interesting stuff, mostly but not always related to UNIX/BSD/Linux systems. Whenever I stumble upon something worth mentioning on the Internet, I just put it here.

Today the amount of information we get through various information streams is a massive overload. One needs to focus only on what is important, without having to grep(1) the Internet every day. Hence the idea of providing such an information ‘bulk’, as I already do that grep(1).

The Usual Suspects section at the end is permanent and has links to other sites with interesting UNIX/BSD/Linux news.

Past releases are available at the dedicated NEWS page.

UNIX

Pwning NetBSD aarch64 (ARM).
https://feyrer.de/NetBSD/bx/blosxom.cgi/nb_20260308_1932.html

Test Driving NetBSD-11.0RC2 on ARM Hardware in VM.
https://feyrer.de/NetBSD/bx/blosxom.cgi/nb_20260308_1626.html

UNIX 2dsh Experimental Shell for Connecting Processes with Multiple Data Streams.
https://tuhs.org/Archive/Documentation/Papers/2dsh.pdf

Linux seccomp Unsafe at Any Speed. [2022]
https://blog.habets.se/2022/03/seccomp-unsafe-at-any-speed.html

FreeBSD Git Weekly: 2026-03-02 to 2026-03-08.
https://freebsd-git-weekly.tarsnap.net/2026-03-02.html

FreeBSD 14.4-RELEASE Now Available.
https://lists.freebsd.org/archives/freebsd-announce/2026-March/000228.html

FreeBSD 14.4-RELEASE Release Notes.
https://freebsd.org/releases/14.4R/relnotes/

FreeBSD 14.4 Review: Most Reliable Unix System Yet.
https://techrefreshing.com/freebsd-14-4-review/

FreeBSD 14.4 Released for Those Not Yet Ready to Move to FreeBSD 15.
https://phoronix.com/news/FreeBSD-14.4-Released

New kjail Release with PKGBASE Support.
https://github.com/Emrion/kjail-pkgbase

Major Update to drm(4) Code in OpenBSD-current.
https://undeadly.org/cgi?action=article;sid=20260310102936

MidnightBSD Bans Users in Brazil and California – Warns More Regions Could Follow.
https://itsfoss.com/news/midnightbsd-age-verification/

OpenBSD-current Moves to 7.9-beta.
https://undeadly.org/cgi?action=article;sid=20260311062921

FreeBSDKit: Swift Package to Write Capability Aware FreeBSD Apps.
https://christiantietze.de/posts/2026/03/freebsdkit-swift-package-write-capability-aware-freebsd-apps/

TinyGate is Lightweight Cross Platform HTTP/HTTPS Reverse Proxy.
https://github.com/sibexico/TinyGate/tree/dev

FreeBSD Jails Orchestrator jrun(8) Now Ready to Use.
https://reddit.com/r/freebsd/comments/1rq6lqm/jrun_the_jail_orchestrator_now_ready_to_use/

AMDGPU Crash with FreeBSD 15.0 on My Laptop – Why and What are Possible Solutions.
https://vincentdelft.be/post/post_20260311

Arkime 6 Open Source Network Analysis/Packet Capture Tool with FreeBSD Support.
https://arkime.com/release-v6

OPNsense 26.1.4 Released.
https://forum.opnsense.org/index.php?topic=51239.0

Wlasny Serwer FreeBSD – Czesc 4 – Certyfikat TSL/SSL. [Polish]
https://linuxiarze.pl/wlasny-serwer-freebsd-cz-4-certyfikat-tsl-ssl/

Wlasny Serwer FreeBSD – Czesc 5 – Serwer FTP. [Polish]
https://linuxiarze.pl/wlasny-serwer-freebsd-cz-5-serwer-ftp/

BSD Router Project 2.1. [Polish]
https://linuxiarze.pl/bsdrp-2-1/

UNIX System – Sun Technical Report. [1985]
https://drive.google.com/file/d/1dW6l6cFAiqTKj3bmTulynKQuOHeHMx0u/view

Delayed Hibernation Comes to OpenBSD/amd64 Laptops.
https://undeadly.org/cgi?action=article;sid=20260312185620

TrueNAS Reboot Loop – VM Load and NVMe That Would Not Stay Seated.
https://blog.cabroneria.com/post/0007_truenas_nvme_reseat_reboot_loop/

FreeBSD Foundationals – ZFS – Last Filesystem You Will Ever Need.
https://blog.hofstede.it/freebsd-foundationals-zfs-the-last-filesystem-youll-ever-need/

Sylve: Bhyve Virtualization and Clustering on FreeBSD.
https://gyptazy.com/blog/sylve-a-proxmox-alike-webui-for-bhyve-on-freebsd/

SpamAssassin for Sendmail on FreeBSD.
https://micski.dk/2026/03/11/spamassassin-for-sendmail-on-freebsd/

How to Install Mullvad VPN with WireGuard on FreeBSD. [2025]
https://micski.dk/2025/10/23/how-to-install-mullvad-vpn-with-wireguard-on-freebsd/

Changing GELI Passphrase/Password on Multiple Disks. [2025]
https://micski.dk/2025/09/02/changing-geli-password-on-multiple-disks/

Fast and Smart fastfind/ff File Search with Fuzzy Matching and Natural Language Queries.
https://github.com/RobertFlexx/fastfind

FreeBSD Home NAS – Part 11 – Extended Monitoring with Additional Exporters.
https://rtfm.co.ua/en/freebsd-home-nas-part-11-extended-monitoring-with-additional-exporters/

FreeBSD Home NAS – Part 12 – Synchronizing Data with Syncthing.
https://rtfm.co.ua/en/freebsd-home-nas-part-12-synchronizing-data-with-syncthing/

FreeBSD Home NAS – Part 13 – Planning Data Storage and Backups.
https://rtfm.co.ua/en/freebsd-home-nas-part-13-planning-data-storage-and-backups/

FreeBSD Home NAS – Part 14 – Logs with VictoriaLogs and Alerts with VMAlert.
https://rtfm.co.ua/en/freebsd-home-nas-part-14-logs-with-victorialogs-and-alerts-with-vmalert/

FreeBSD 14.4 Released with Better Security/Storage/Cloud Support.
https://ostechnix.com/freebsd-14-4-released/

Switching from Void Linux to FreeBSD.
https://leanghok.bearblog.dev/switching-from-void-linux-to-freebsd/

Tutorial: Write Your Own X11 Bar.
https://leanghok.bearblog.dev/write-your-own-bar/

MidnightBSD Excludes California from Desktop Use Due to Digital Age Assurance Act.
https://ostechnix.com/midnightbsd-excludes-california-digital-age-assurance-act/

GotHub All the Things.
https://x61.sh/log/2026/03/14032026191148-gothub.html

5BSD Forked from FreeBSD.
https://github.com/5BSD

FreeBSD mac_abac(4) Label Based MAC Using Extended Attributes.
https://github.com/5BSD/mac_abac

Keyvault – FreeBSD Kernel Resident Encryption Keys and Capabilities.
https://github.com/5BSD/Keyvault

Convert FreeBSD PKGBASE Installation into Distribution Sets.
https://lists.freebsd.org/archives/freebsd-current/2025-December/009572.html

Linux Firewalls: How to Actually Secure Cloud Server with iptables/nftables/firewalld/ufw.
https://blog.hofstede.it/linux-firewalls-how-to-actually-secure-a-cloud-server-iptables-nftables-firewalld-ufw/

Unlocking Secondary Disks on OpenBSD.
https://blog.thechases.com/posts/bsd/unlocking-secondary-disks/

BSD Now 654: Plasma Rage.
https://www.bsdnow.tv/654

Is OpenBSD… Exotic? Community Member Perspective.
https://pvs-studio.com/en/blog/posts/cpp/1353/

FreeBSD Users: We Need to Talk About Claude Code.
https://stevengharms.com/posts/2026-03-04-freebsd-users-we-need-to-talk-about-claude-code/

Podman is Home Lab Ready on FreeBSD.
https://aumont.fr/posts/podman-freebsd/

Maolan is Open Source Digital Audio Workstation for Linux/FreeBSD.
https://maolan.github.io/

Developers Guide to Generative AI in FreeBSD.
https://delphij.net/temp/ai-guide.html

UNIX/Audio/Video

How to Setup and Configure Bhyve in FreeBSD.
https://youtube.com/watch?v=E47Pd0P58Co

Running AI on FreeBSD (CUDA Problem).
https://youtube.com/watch?v=SXevnsbSAAk

2026-03-11 OpenZFS Production User Call.
https://youtube.com/watch?v=DirKkjgtg4s

2026-03-12 Bhyve Production User Call.
https://youtube.com/watch?v=RILtMsciJfk

FreeBSD as Desktop in 2026 – Surprisingly Good.
https://youtube.com/watch?v=2EFG3BO6oVY

Hardware

AMD Launches Ryzen AI Embedded P100 Series 4/6/8/10/12 Core Models.
https://phoronix.com/news/AMD-Ryzen-Embedded-P100-Series

RISC-V is Slow.
https://marcin.juszkiewicz.com.pl/2026/03/10/risc-v-is-sloooow/

Hisense VIDAA TVs Reportedly Add Unskippable Startup Ads Before Live TV.
https://guru3d.com/story/hisense-vidaa-tvs-reportedly-add-unskippable-startup-ads-before-live-tv/

Life

US/Illinois Joins Age Verification for Operating Systems Bandwagon.
https://youtube.com/watch?v=1MJXRRRyMSU

Valve Just Rejected Government Demands.
https://youtube.com/watch?v=-h2q-3NCbYk

Talent Pipeline is Collapsing. Your Team Will Feel It Next.
https://newsletter.thelongcommit.com/p/the-talent-pipeline-is-collapsing

EU Regulation Review – Entire Corpus of EU Regulation Reviewed by Grok.
https://bettereu.com/

Computer Scientists Caution Against Internet Age Verification Mandates.
https://reason.com/2026/03/04/computer-scientists-caution-against-internet-age-verification-mandates/

I Traced $2B in Nonprofit Grants and 45 States of Lobbying Records Who is Behind Age Verification.
https://reddit.com/r/linux/comments/1rshc1f/i_traced_2_billion_in_nonprofit_grants_and_45/

Other

LibreOffice Criticizes EU Commission over Proprietary XLSX Formats.
https://heise.de/en/news/LibreOffice-criticizes-EU-Commission-over-proprietary-XLSX-formats-11202165.html

LibreOffice 26.2 is Here – Faster and More Polished Office Suite that You Control.
https://blog.documentfoundation.org/blog/2026/02/04/libreoffice-26-2-is-here/

You Are Dumb Security Leader if You Mandate Password Rotation.
https://georgeguimaraes.com/youre-dumb-security-leader-if-you-mandate-password-rotation/

Myrient Archive Tracker.
https://myrient.org/

RSS Still Wins in 2025.
https://jeffmackinnon.com/RSS.html

Expanding (and Sharing) List of Blogs I Follow via RSS.
https://neilzone.co.uk/2024/05/expanding-and-sharing-the-list-of-blogs-i-follow-via-rss/

You Deleted Everything and AWS is Still Charging You.
https://jvogel.me/posts/2026/aws-still-charging-you/

Usual Suspects

BSD Weekly.
https://bsdweekly.com/

DiscoverBSD.
https://discoverbsd.com/

BSDSec.
https://bsdsec.net/

DragonFly BSD Digest.
https://dragonflydigest.com/

FreeBSD Patch Level Table.
https://bokut.in/freebsd-patch-level-table/

FreeBSD End of Life Date.
https://endoflife.date/freebsd

Phoronix BSD News Archives.
https://phoronix.com/linux/BSD

OpenBSD Journal.
https://undeadly.org/

Call for Testing.
https://callfortesting.org/

Call for Testing – Production Users Call.
https://youtube.com/@callfortesting/videos

BSD Now Weekly Podcast.
https://www.bsdnow.tv/

Nixers Newsletter.
https://newsletter.nixers.net/entries.php

BSD Cafe Journal.
https://journal.bsd.cafe/

DragonFly BSD Digest – Lazy Reading – In Other BSDs.
https://dragonflydigest.com

BSDTV.
https://bsky.app/profile/bsdtv.bsky.social

FreeBSD Git Weekly.
https://freebsd-git-weekly.tarsnap.net/

FreeBSD Meetings.
https://youtube.com/@freebsdmeetings

BSDJedi.
https://youtube.com/@BSDJedi/videos

RoboNuggie.
https://youtube.com/@RoboNuggie/videos

GaryHTech.
https://youtube.com/@GaryHTech/videos

Sheridan Computers.
https://youtube.com/@sheridans/videos

82MHz.
https://82mhz.net/

EOF

Hacking openvpn to use syslog with something other than facility = daemon

Post by Dan Langille via Dan Langille's Other Diary »

I don’t see a way to specify the syslog facility for OpenVPN – perhaps I can change that in the code. It would allow logging OpenVPN to a specific file and rotating that log file. --log-append does not allow for log rotation.

In this post:

  • FreeBSD 15.0
  • OpenVPN 2.6.19

Signals sent to OpenVPN do not affect logging. Thus, I must rely upon syslog and newsyslog to achieve log rotation.

At present, a default install of security/openvpn results in logging to two files:

  1. /var/log/messages
  2. /var/log/daemon.log

Although disk space is not an issue here, I prefer to keep OpenVPN logs in one file, not duplicated. It also reduces the “noise” in /var/log/messages.

It is not trivial (or perhaps not possible; show me how) to stop these duplicates. I would prefer a better way.

Last night, I thought: why not change the code? It can’t be that hard.

Let’s look at the OpenVPN code.

The syslog open

I found this around line 475 of src/openvpn/error.c:

void
open_syslog(const char *pgmname, bool stdio_to_null)
{
#if SYSLOG_CAPABILITY
    if (!msgfp && !std_redir)
    {
        if (!use_syslog)
        {
            pgmname_syslog = string_alloc(pgmname ? pgmname : PACKAGE, NULL);
            openlog(pgmname_syslog, LOG_PID, LOG_OPENVPN);
            use_syslog = true;

That openlog call, I see it defined on the man page as:

openlog(const char *ident, int logopt, int facility);

facility. Right there.

What is LOG_OPENVPN defined as?

[11:30 pkg01 dvl ~/ports/head/security/openvpn/work/openvpn-2.6.19] % grep -r LOG_OPENVPN * 
src/openvpn/error.c:#ifndef LOG_OPENVPN
src/openvpn/error.c:#define LOG_OPENVPN LOG_DAEMON
src/openvpn/error.c:            openlog(pgmname_syslog, LOG_PID, LOG_OPENVPN);

Looking again in the same file, I find:

#ifndef LOG_OPENVPN
#define LOG_OPENVPN LOG_DAEMON
#endif

That’s it. It also seems that if LOG_OPENVPN is already defined, that definition is used instead of LOG_DAEMON.

When running my grep for LOG_OPENVPN, I was fortunate to run it from the port directory, which meant I also found this:

[11:32 pkg01 dvl ~/ports/head/security/openvpn] % grep LOG_OPENVPN *   
Makefile:.ifdef (LOG_OPENVPN)
Makefile:CFLAGS+=		-DLOG_OPENVPN=${LOG_OPENVPN}
Makefile:.ifdef (LOG_OPENVPN)
Makefile:	@${ECHO} "Building with LOG_OPENVPN=${LOG_OPENVPN}"
Makefile:	@${ECHO} "      LOG_OPENVPN={Valid syslog facility, default LOG_DAEMON}"
Makefile:	@${ECHO} "      EXAMPLE:  make LOG_OPENVPN=LOG_LOCAL6"

Well, that’s interesting.

Rebuilding openvpn

Let’s try adding this to /usr/local/etc/poudriere.d/make.conf:

# can I build openvpn to use syslog with something other than facility=daemon
# re: https://cgit.freebsd.org/ports/tree/security/openvpn/Makefile#n125
LOG_OPENVPN=LOG_LOCAL6

Then I rebuilt:

[11:45 pkg01 dvl /usr/local/etc/poudriere.d] % sudo poudriere bulk -j 150amd64 -p default -z primary -C security/openvpn

Checking the log, I found:

/usr/bin/sed -i.bak 's/-Wsign-compare/-Wno-unknown-warning-option -Wno-sign-compare -Wno-bitwise-instead-of-logical -Wno-unused-function/' /wrkdirs/usr/ports/security/openvpn/work/openvpn-2.6.19/configure
Building with LOG_OPENVPN=LOG_LOCAL6
configure: loading site script /usr/ports/Templates/config.site

On the server

I was testing this on my OpenVPN server. I made these changes to /etc/syslog.conf:

[11:51 gw01 dvl ~] % diff -ruN /etc/syslog.conf~ /etc/syslog.conf
--- /etc/syslog.conf~	2026-03-13 18:57:56.000000000 +0000
+++ /etc/syslog.conf	2026-03-15 11:48:24.178756000 +0000
@@ -5,7 +5,7 @@
 #	may want to use only tabs as field separators here.
 #	Consult the syslog.conf(5) manpage.
 *.err;kern.warning;auth.notice;mail.crit		/dev/console
-*.notice;authpriv.none;kern.debug;lpr.info;mail.crit;news.err;local3.none	/var/log/messages
+*.notice;authpriv.none;kern.debug;lpr.info;mail.crit;news.err;local3.none;local6.none	/var/log/messages
 security.*					/var/log/security
 auth.info;authpriv.info				/var/log/auth.log
 mail.info					/var/log/maillog

Specifically, I added ;local6.none to that line. FYI, I use local3 for ddclient logging.

Then, I created this file: /usr/local/etc/syslog.d/openvpn.conf

local6.*		/var/log/openvpn.log

With those changes in place, I restarted syslogd to pick up those changes:

[11:49 gw01 dvl /usr/local/etc/syslog.d] % sudo service syslogd restart
Stopping syslogd.
Waiting for PIDS: 1651.
Starting syslogd.

Next, I cleared the local package cache for the openvpn package. The new package I built had no change to PORTVERSION, PORTREVISION, or OPTIONS; thus, it would not automatically be seen as an upgrade. Removing the locally cached copy forces pkg to download the freshly built package.

[11:50 gw01 dvl ~] % sudo rm /var/cache/pkg/openvpn-*

The install:

[11:51 gw01 dvl ~] % sudo pkg install -f openvpn
Updating local repository catalogue...
local repository is up to date.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):

Installed packages to be REINSTALLED:
	openvpn-2.6.19 [local]

Number of packages to be reinstalled: 1

639 KiB to be downloaded.

Proceed with this action? [y/N]: y
[1/1] Fetching openvpn-2.6.19: 100%   639 KiB 654.2 kB/s    00:01    
Checking integrity... done (0 conflicting)
[1/1] Reinstalling openvpn-2.6.19...
===> Creating groups
Using existing group 'openvpn'
===> Creating users
Using existing user 'openvpn'
[1/1] Extracting openvpn-2.6.19: 100%
=====
Message from openvpn-2.6.19:

--
Note that OpenVPN now configures a separate user and group "openvpn",
which should be used instead of the NFS user "nobody"
when an unprivileged user account is desired.

It is advisable to review existing configuration files and
to consider adding/changing user openvpn and group openvpn.

After a restart, which is sometimes risky (at least in my mind) given the host is at the other end of town:

[11:51 gw01 dvl ~] % sudo service openvpn restart
Stopping openvpn.
Waiting for PIDS: 1533.
Starting openvpn.

However, despite my concerns about restarting OpenVPN, logging to /var/log/messages and to /var/log/daemon.log stopped. OpenVPN was logging to only one file: /var/log/openvpn.log

Success.

Log rotation

This is how I do log rotation:

[12:34 gw01 dvl ~] % cat /usr/local/etc/newsyslog.conf.d/openvpn.conf 
# Only .conf files in /usr/local/etc/newsyslog.conf.d/ are pulled in by newsyslog
#

# logfilename                       [owner:group]   mode count size when   flags [/pid_file] [sig_num]
/var/log/openvpn.log                root:logcheck   640  50    *    $D0    B

Lastly

Despite the title of this blog post, there was no hacking of code, although that was my intention when I started writing.

That logging knob has been there since at least 2009, based on my reading of this commit.

Thank you for coming to my TED talk.


I did a thing

Post by Dag-Erling Smørgrav via May Contain Traces of Bolts »

Anyone who knows me will tell you I’m a car guy. And any car guy (as opposed to a Porsche guy or a Corvette guy or a Miata guy) will tell you that every car guy ought to have owned an Alfa Romeo. The problem is that in order to have owned an Alfa Romeo, you must first own an Alfa Romeo; and then, presumably, either you get rid of it or it gets rid of you, or you both get rid of each other. And until today, I had not yet owned an Alfa Romeo.

This afternoon, I graduated from “has not yet owned an Alfa Romeo” to “currently owns an Alfa Romeo”. Specifically, a 2007 Alfa Romeo 147 1.6 TS. At some point in the future, I will move on from “currently owns an Alfa Romeo” to “has owned an Alfa Romeo”, thereby completing my car guy journey; but that will take a lot of work and a bit of money, although in theory I should be able to flip it for a decent profit.

Yes, it’s red, and yes, it has little Italian flags all over.

Currently planned work:

  • Install “24h Le Mans” sticker on trunk lid for immediate 5 bhp boost
  • Change all fluids and filters
  • Replace rear wiper blade
  • Polish headlights
  • Investigate lifter tick
  • Replace clutch pump and / or follower and / or hydraulic line
  • Replace passenger side window regulator
  • Replace driver side window and mirror controls
  • Replace trunk lid lock actuator
  • Fix driver’s seat release
  • Fix or replace various bits of interior trim
  • Replace gear shift lever knob, possibly with a shorter one
  • Replace whip antenna with either bee sting or shark fin

Maybe I’ll even remember to post updates.

Update: 5 bhp boost unlocked

Close-up of the trunk lid of a red Alfa Romeo 147 1.6 TS sporting a white decal showing the outline of the Circuit de la Sarthe with the text “24h Le Mans”.

Update: of course my Italian car has electrical gremlins.

Inside view of the raised trunk lid of my Alfa Romeo 147 with the lining removed. The trunk lock is disconnected and black and red multimeter probes are inserted into the plug. I am holding the multimeter in my left hand; it is set to 20 V and the display is showing 12.04.

Update 2026-03-16: I had low washer nozzle pressure and assumed the pump might need changing. Only after ordering a replacement pump from a breaker did I realize that the real issue was that the washer hose was getting pinched when I closed the hood. Cutting the cable ties that tied it to the hood strut solved the issue.

View of the front of a red Alfa Romeo 147 from right above / next to the passenger window. The hood is open and there are traces of washer fluid on it. A multimeter is propped up against the hood so that it is visible from the driver's seat. A ratchet is lying on top of the engine, which is visible through the gap between the rear edge of the open hood and the body.

Update 2026-03-19: It passed inspection today, with a slew of minor defects but little I wasn’t already aware of and nothing I can’t fix.


Script to generate that maintenance.html file for taking my websites into maintenance mode

Post by Dan Langille via Dan Langille's Other Diary »

In a recent blog post, I showed you how I was taking my websites into maintenance mode. Shortly afterwards, I wrote about how using $server_name can have odd consequences. Today, I’m writing about the script I just created which will create those maintenance.html files.

In this post:

  • FreeBSD 15.0
  • nginx 1.28.2
  • bourne shell

The script

This is the script:

[12:17 r720-02-proxy01 dvl /usr/local/www/offline] % cat ~/bin/offline
#!/bin/sh

# Usage: ~/bin/offline foo.bar "$(date -R -v +2H)"
# see -v option on man date: [y|m|w|d|H|M|S]

website=$1
retryafter=$2

OFFLINE_DIR="/usr/local/www/offline"

# slight sanitization
hostname=$(basename $website)

tmpfile=$(mktemp /tmp/offline.XXXXXX)

cat << EOF >> $tmpfile
<html>
<head>
<title>Error 503 Service Unavailable</title>
</head>
<body>
<h1>503 Service Unavailable</h1>
<p>Server is offline for maintenance.</p>
<p>Please retry after ${retryafter}</p>
</body>
</html>
EOF

cat $tmpfile

sudo chown root:wheel $tmpfile
sudo chmod 0644       $tmpfile

sudo mv -i $tmpfile "${OFFLINE_DIR}/${hostname}-maintenance.html"

rm -f $tmpfile

Yes, the rm is not necessary, since the file has already been moved.
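The basename call is what does the “slight sanitization”: it strips any leading path components, so hostile input cannot point the output file outside OFFLINE_DIR. A quick illustration:

```shell
# basename discards everything up to the last slash, so a path-traversal
# attempt is reduced to a bare filename, while a plain hostname passes
# through unchanged.
basename "../../etc/passwd"     # prints: passwd
basename "dev.freshports.org"   # prints: dev.freshports.org
```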

To invoke the script, I do something like this:

% ~/bin/offline dev.freshports.org "$(date -R -v +2H)"
<html>
<head>
<title>Error 503 Service Unavailable</title>
</head>
<body>
<h1>503 Service Unavailable</h1>
<p>Server is offline for maintenance.</p>
<p>Please retry after Sat, 14 Mar 2026 14:20:59 +0000</p>
</body>
</html>

In the command, I’m saying it will be offline for two hours.

In my browser, I see this:

503 Service Unavailable

Server is offline for maintenance.

Please retry after Sat, 14 Mar 2026 14:20:59 +0000

With wget, I see:

[12:23 nagios04 dvl ~/tmp] % wget -S https://dev.freshports.org
--2026-03-14 12:24:14--  https://dev.freshports.org/
Resolving dev.freshports.org (dev.freshports.org)... 173.228.145.171, 2610:1c0:2000:11:8870:201b:27b5:f4f2
Connecting to dev.freshports.org (dev.freshports.org)|173.228.145.171|:443... connected.
HTTP request sent, awaiting response... 
  HTTP/1.1 503 Service Temporarily Unavailable
  Server: nginx/1.28.2
  Date: Sat, 14 Mar 2026 12:24:14 GMT
  Content-Type: text/html
  Content-Length: 223
  Connection: keep-alive
  ETag: "69b55317-df"
  Retry-After: 120
2026-03-14 12:24:14 ERROR 503: Service Temporarily Unavailable.

If you check carefully, you’ll see there is a Retry-After in the headers. That differs from the message within the webpage. Why?

Why?

The webpage is constructed by the offline script, and the retry time is just a string in the page. Ideally, both that string and the header would show the same value. However, I don’t yet know how to do that easily.

Where does the header come from?

I recently modified the nginx configuration for this website. The new bit is the add_header seen below. The always tag needs to be there; the docs mention it, but I didn’t pick up on it. It took a blog post by Claudio Kuenzler for me to add it.

  # but wait, there's more!
  location @maintenance {
    rewrite ^(.*)$ /${maint} break;
    add_header Retry-After "120" always;
  }

It would be nice to update the website configuration to adjust the “120” value to match the webpage.

That sounds like an Ansible solution.
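One way to keep the header and the page text in sync (a sketch only, with a hypothetical hours variable; the post does not implement this) is to derive both values from a single input: the seconds go into the add_header line, and the formatted date goes into the page.

```shell
# Hypothetical sketch: one input drives both the Retry-After header value
# (in seconds) and the human-readable retry time on the maintenance page.
hours=2
retry_seconds=$(( hours * 3600 ))

# On FreeBSD, the matching page text would come from: date -R -v +${hours}H
echo "add_header Retry-After \"${retry_seconds}\" always;"
```

The echoed line could then be templated into the nginx configuration before reloading, so the two values cannot drift apart.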

Going back online

Going back online involves removing the file. I figure that’s easy enough to not need a script.


Using variable names in nginx declarations has a price: e.g. ssl_certificate /usr/local/etc/ssl/${server_name}.fullchain.cer;

Post by Dan Langille via Dan Langille's Other Diary »

I recently implemented a fun (to me) and easy solution for taking my web proxy websites offline, either one-by-one, or all-at-once. Today’s post talks about some of the repercussions which followed one new thing I tried.

In this post:

  • FreeBSD 15.0
  • nginx 1.28.2
  • I jump between testing the test host and stage host; both had similar issues.

The relevant changes

This is the type of change I started making. Instead of putting the hostname in the log or certificate declarations, I started using an nginx variable. Here is a pseudo-diff, based on an Ansible template.

-  ssl_certificate_key /usr/local/etc/ssl/{{ item.value.website }}.key;
+  ssl_certificate_key /usr/local/etc/ssl/${server_name}.key;

Once on disk, the actual nginx configuration file changed like this. I’ll do this as a before and after:

This is a snippet of the nginx configuration for the proxy host which sits in front of dev.freshports.org:

...
  server_name dev.freshports.org;
...
  # the logs
  error_log  /var/log/nginx/dev.freshports.org.error.log  info;
  access_log /var/log/nginx/dev.freshports.org.access.log combined;

  # the certificate and key specific to this host.
  ssl_certificate     /usr/local/etc/ssl/dev.freshports.org.fullchain.cer;
  ssl_certificate_key /usr/local/etc/ssl/dev.freshports.org.key;

As you can see, the name dev.freshports.org occurs 5 times. The clever idea I had: don’t use the same hard-coded text N times. Declare it once. Use the variable. This is especially applicable to this situation because this file is actually derived from a template and is used for other websites as well. Why not? It might work. Let’s try it.

So we then had this configuration:

...
  server_name dev.freshports.org;
...
  error_log  /var/log/nginx/${server_name}.error.log  info;
  access_log /var/log/nginx/${server_name}.access.log combined;

  ssl_certificate     /usr/local/etc/ssl/${server_name}.fullchain.cer;
  ssl_certificate_key /usr/local/etc/ssl/${server_name}.key;

The service nginx configtest passed, and the restart worked. Off we go!

Some time later…

I did some testing, taking the websites offline, etc. All that worked fine.

The next day, I noticed several issues in monitoring: “(Return code of 139 is out of bounds)”

When I tried testing (on my phone, with WiFi off, so that I hit the proxy and didn’t bypass it by being on my local network), I would get:

Safari can’t open the page because it couldn’t establish a secure connection to the server.

logcheck was also spitting out this type of message:

Mar 12 20:03:46 nagios04 kernel: pid 9423 (check_http), jid 0, uid 181: exited on signal 11 (no core dump - other error)
Mar 12 20:03:47 nagios04 kernel: pid 9425 (check_http), jid 0, uid 181: exited on signal 11 (no core dump - other error)
Mar 12 20:03:51 nagios04 kernel: pid 9427 (check_http), jid 0, uid 181: exited on signal 11 (no core dump - other error)

That prompted me to get onto nagios04 and see what was going on.

Trying nagios checks by hand

I wanted to see what Nagios was seeing, so I tried this:

[19:45 nagios04 dvl /usr/local/etc/nagiosql] % /usr/local/libexec/nagios/check_http -H stage.freshports.org -4
HTTP OK: HTTP/1.1 200 OK - 250560 bytes in 0.795 second response time |time=0.795270s;;;0.000000 size=250560B;;;0 

Meaning http was good. Let’s try https:

[19:45 nagios04 dvl /usr/local/etc/nagiosql] % /usr/local/libexec/nagios/check_http -H stage.freshports.org -S -p 443 -C 25 -4 --sni 
CRITICAL - Cannot make SSL connection.
10300596361A0000:error:0A000438:SSL routines:ssl3_read_bytes:tlsv1 alert internal 
error:/usr/src/crypto/openssl/ssl/record/rec_layer_s3.c:916:SSL alert number 80
zsh: segmentation fault  /usr/local/libexec/nagios/check_http -H stage.freshports.org -S -p 443 -C 25 

Ouch. Searching for that SSL alert number 80 did not help. In hindsight, I should have paid more attention to the CRITICAL – Cannot make SSL connection part of the output.

There’s that alert number 80 again.

I kept mucking about with that for a while. And didn’t really get anywhere.

Let’s try more non-nagios stuff on the command line.

Other command line explorations

Hmm. Let’s go to wget:

[11:38 nagios04 dvl ~/tmp] % wget -S test.freshports.org        
Prepended http:// to 'test.freshports.org'
--2026-03-13 11:38:22--  http://test.freshports.org/
Resolving test.freshports.org (test.freshports.org)... 173.228.145.171, 2610:1c0:2000:11:8870:201b:27b5:f4f2
Connecting to test.freshports.org (test.freshports.org)|173.228.145.171|:80... connected.
HTTP request sent, awaiting response... 
  HTTP/1.1 200 OK
  Server: nginx/1.28.2
  Date: Fri, 13 Mar 2026 11:38:23 GMT
  Content-Type: text/html; charset=UTF-8
  Transfer-Encoding: chunked
  Connection: keep-alive
  Vary: Accept-Encoding
  X-Powered-By: PHP/8.3.30
  Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
  X-Frame-Options: DENY
  X-Content-Type-Options: nosniff
  X-XSS-Protection: 1; mode=block
Length: unspecified [text/html]
Saving to: ‘index.html’

index.html                                [   <=>                                                                  ] 285.58K   496KB/s    in 0.6s    

2026-03-13 11:38:23 (496 KB/s) - ‘index.html’ saved [292431]

All good. Next, try https:

[11:38 nagios04 dvl ~/tmp] % wget -S https://test.freshports.org
--2026-03-13 11:41:27--  https://test.freshports.org/
Resolving test.freshports.org (test.freshports.org)... 173.228.145.171, 2610:1c0:2000:11:8870:201b:27b5:f4f2
Connecting to test.freshports.org (test.freshports.org)|173.228.145.171|:443... connected.
OpenSSL: error:0A000438:SSL routines::tlsv1 alert internal error
Unable to establish SSL connection.

tlsv1? Am I using that on my proxy?

ssl_protocols TLSv1.2 TLSv1.1 TLSv1;

Well, yes, I am. I changed that to:

  ssl_protocols TLSv1.2 TLSv1.3;

Then restarted nginx.

No difference. Am I using TLSv1 on the real server, the one behind the proxy? That wouldn’t matter because wget is not talking to that.

So WTF is going on?

openssl time:

[20:27 nagios04 dvl ~/tmp] % openssl s_client -connect stage.freshports.org:443 -servername stage.freshports.org -4 -showcerts
Connecting to 173.228.145.171
CONNECTED(00000003)
1090C55AFA340000:error:0A000438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:/usr/src/crypto/openssl/ssl/record/rec_layer_s3.c:916:SSL alert number 80
---
no peer certificate available
---
No client certificate CA names sent
Negotiated TLS1.3 group: 
---
SSL handshake has read 7 bytes and written 1553 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Protocol: TLSv1.3
This TLS version forbids renegotiation.
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---

Again, I missed clues that might have led me to the cause sooner.

Always monitor the logs

I should have done this sooner. I should have checked the logs. Looking at this, I saw nothing. Nothing. At All.

Now I changed to using the test host.

[11:46 r720-02-proxy01 dvl ~] % xtail /var/log/nginx/test.freshports.org.*

I keep fetching pages, but nothing appeared in that log.

I went a little wider on my search:

[11:48 r720-02-proxy01 dvl ~] % xtail /var/log/nginx/
*** /var/log/nginx//${server_name}.error.log ***
2026/03/13 11:51:14 [error] 18612#101892: *469 cannot load certificate key "/usr/local/etc/ssl/test.freshports.org.key": BIO_new_file() failed (SSL: error:8000000D:system library::Permission denied:calling fopen(/usr/local/etc/ssl/test.freshports.org.key, r) error:10080002:BIO routines::system lib) while SSL handshaking, client: 203.0.113.46, server: 173.228.145.171:443
  1. that’s not writing to the expected file
  2. permission issues?
[11:51 r720-02-proxy01 dvl ~] % ls -l /usr/local/etc/ssl/test.freshports.org.key
-rw-------  1 root wheel 1679 2018.09.18 13:06 /usr/local/etc/ssl/test.freshports.org.key

nginx has always been able to read this before. Let’s try this daring change:

[11:52 r720-02-proxy01 dvl /usr/local/etc/ssl] % sudo chgrp www test.freshports.org.key                   
[11:52 r720-02-proxy01 dvl /usr/local/etc/ssl] % sudo chmod g=r test.freshports.org.key
[11:52 r720-02-proxy01 dvl /usr/local/etc/ssl] % ls -l test.freshports.org.key
-rw-r-----  1 root www 1679 2018.09.18 13:06 test.freshports.org.key

Now nginx should be able to read that key.

Testing again:

[11:53 nagios04 dvl ~/tmp] % wget -S https://test.freshports.org/
--2026-03-13 11:54:25--  https://test.freshports.org/
Resolving test.freshports.org (test.freshports.org)... 173.228.145.171, 2610:1c0:2000:11:8870:201b:27b5:f4f2
Connecting to test.freshports.org (test.freshports.org)|173.228.145.171|:443... connected.
HTTP request sent, awaiting response... 
  HTTP/1.1 200 OK
  Server: nginx/1.28.2
  Date: Fri, 13 Mar 2026 11:54:26 GMT
  Content-Type: text/html; charset=UTF-8
  Transfer-Encoding: chunked
  Connection: keep-alive
  Vary: Accept-Encoding
  X-Powered-By: PHP/8.3.30
  Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
  X-Frame-Options: DENY
  X-Content-Type-Options: nosniff
  X-XSS-Protection: 1; mode=block
Length: unspecified [text/html]
Saving to: ‘index.html.1’

index.html.1                              [   <=>                                                                  ] 285.58K   629KB/s    in 0.5s    

2026-03-13 11:54:26 (629 KB/s) - ‘index.html.1’ saved [292431]

[11:54 nagios04 dvl ~/tmp] % 

So that problem is “fixed”.

Let’s check with openssl:

[20:27 nagios04 dvl ~/tmp] % openssl s_client -connect stage.freshports.org:443 -servername stage.freshports.org -4 -showcerts
Connecting to 173.228.145.171
CONNECTED(00000003)
depth=2 C=US, O=Internet Security Research Group, CN=ISRG Root X1
verify return:1
depth=1 C=US, O=Let's Encrypt, CN=R13
verify return:1
depth=0 CN=stage.freshports.org
verify return:1
---
Certificate chain
 0 s:CN=stage.freshports.org
   i:C=US, O=Let's Encrypt, CN=R13
   a:PKEY: RSA, 2048 (bit); sigalg: sha256WithRSAEncryption
   v:NotBefore: Jan 31 17:35:14 2026 GMT; NotAfter: May  1 17:35:13 2026 GMT
-----BEGIN CERTIFICATE-----
MIIFDDCCA/SgAwIBAgISBjTyQhJ1Qw1yypiKRRJGZuiMMA0GCSqGSIb3DQEBCwUA
MDMxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQwwCgYDVQQD
EwNSMTMwHhcNMjYwMTMxMTczNTE0WhcNMjYwNTAxMTczNTEzWjAfMR0wGwYDVQQD
ExRzdGFnZS5mcmVzaHBvcnRzLm9yZzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCC
AQoCggEBANGjB5rJS9pilMQ2sHJ/v8GZ90tFcaUpP0QTnxWvXrrtyg22AR3ODR65
EGvEbSEBAFVsoMR7thSZL1dZ5D7x3AvB+OjLVnK8KGQ/kuR85qSmNq+a/GxvcNKA
HG5xt9aIGZpii+XKOyhlPMbVHFEeVuh1d+oeTdb0+/ul49rLMtXBA3/FlOXkp46w
3kZLH8tTWrj6kXMqiUKPwhuex50tQHuW//WYqXACjXT2EN1ct105ng6WJkH1FuoB
R0s+EaeArdHQzz/bX4ijJCK/RWaoRrOe2fW69RYVmHT347CXqu4TrASffo9ubq4R
ViRkoSOHUs1zjGHO2WE2w+26ShTIeE8CAwEAAaOCAiwwggIoMA4GA1UdDwEB/wQE
AwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIw
ADAdBgNVHQ4EFgQUEDI7MQFLUoqoRUzvhEBy0lhyKCAwHwYDVR0jBBgwFoAU56uf
DywzoFPTXk94yLKEDjvWkjMwMwYIKwYBBQUHAQEEJzAlMCMGCCsGAQUFBzAChhdo
dHRwOi8vcjEzLmkubGVuY3Iub3JnLzAfBgNVHREEGDAWghRzdGFnZS5mcmVzaHBv
cnRzLm9yZzATBgNVHSAEDDAKMAgGBmeBDAECATAuBgNVHR8EJzAlMCOgIaAfhh1o
dHRwOi8vcjEzLmMubGVuY3Iub3JnLzEzLmNybDCCAQwGCisGAQQB1nkCBAIEgf0E
gfoA+AB1AJaXZL9VWJet90OHaDcIQnfp8DrV9qTzNm5GpD8PyqnGAAABnBVVG3AA
AAQDAEYwRAIgWEAPKdcqhnu1euGcusHalXmiffO0mcXgSLMHLpM/zi4CIBzJkQ3D
1P1CGplj9/9Gedi4RLPg4hRB19HqqcLM6b5MAH8Apcl4kl1XRheChw3YiWYLXFVk
i30AQPLsB2hR0YhpGfcAAAGcFVUmNwAIAAAFADC1Xt8EAwBIMEYCIQDdemoylksD
1x9+S0e/H17rnEP8ZjDbHuKq21N6VprhrQIhAPNlewYay/bOOZF2ncgX5S7wKvQO
nelQosbZ8N5E7o3hMA0GCSqGSIb3DQEBCwUAA4IBAQCLEyqL71emur08DF6IrFZU
iF+ahfVV/PkVh/6pBtu9wCF/gI1j7R7un5tpZQCnrUfOU6L6/DqRfyxe5PW8BGGH
aTt5MFn7VDZ4lqwngSFFKochXf4GMVeADUBUGMyLSgsOZLxQrMm6+PRNS6u21aBK
OsuWsqC6hKtvZyiXALm1wj4vQLl9lshoB2Y3fEqFKZNJKj9fmA3g2a6sDrjHc9Ny
OkiaWker09nJWfAEdIidchu/2m16201CWpXN7wf3ZXpJNMKBqkn0+mc4XWVTRl8t
oVm+gNWfmAwousv9OWVF+PURt5HOYXYF8cBEZ+kSk41FiizSnksoHf4Qu8WK/NT+
-----END CERTIFICATE-----
 1 s:C=US, O=Let's Encrypt, CN=R13
   i:C=US, O=Internet Security Research Group, CN=ISRG Root X1
   a:PKEY: RSA, 2048 (bit); sigalg: sha256WithRSAEncryption
   v:NotBefore: Mar 13 00:00:00 2024 GMT; NotAfter: Mar 12 23:59:59 2027 GMT
-----BEGIN CERTIFICATE-----
MIIFBTCCAu2gAwIBAgIQWgDyEtjUtIDzkkFX6imDBTANBgkqhkiG9w0BAQsFADBP
MQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJuZXQgU2VjdXJpdHkgUmVzZWFy
Y2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBYMTAeFw0yNDAzMTMwMDAwMDBa
Fw0yNzAzMTIyMzU5NTlaMDMxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBF
bmNyeXB0MQwwCgYDVQQDEwNSMTMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
AoIBAQClZ3CN0FaBZBUXYc25BtStGZCMJlA3mBZjklTb2cyEBZPs0+wIG6BgUUNI
fSvHSJaetC3ancgnO1ehn6vw1g7UDjDKb5ux0daknTI+WE41b0VYaHEX/D7YXYKg
L7JRbLAaXbhZzjVlyIuhrxA3/+OcXcJJFzT/jCuLjfC8cSyTDB0FxLrHzarJXnzR
yQH3nAP2/Apd9Np75tt2QnDr9E0i2gB3b9bJXxf92nUupVcM9upctuBzpWjPoXTi
dYJ+EJ/B9aLrAek4sQpEzNPCifVJNYIKNLMc6YjCR06CDgo28EdPivEpBHXazeGa
XP9enZiVuppD0EqiFwUBBDDTMrOPAgMBAAGjgfgwgfUwDgYDVR0PAQH/BAQDAgGG
MB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEFBQcDATASBgNVHRMBAf8ECDAGAQH/
AgEAMB0GA1UdDgQWBBTnq58PLDOgU9NeT3jIsoQOO9aSMzAfBgNVHSMEGDAWgBR5
tFnme7bl5AFzgAiIyBpY9umbbjAyBggrBgEFBQcBAQQmMCQwIgYIKwYBBQUHMAKG
Fmh0dHA6Ly94MS5pLmxlbmNyLm9yZy8wEwYDVR0gBAwwCjAIBgZngQwBAgEwJwYD
VR0fBCAwHjAcoBqgGIYWaHR0cDovL3gxLmMubGVuY3Iub3JnLzANBgkqhkiG9w0B
AQsFAAOCAgEAUTdYUqEimzW7TbrOypLqCfL7VOwYf/Q79OH5cHLCZeggfQhDconl
k7Kgh8b0vi+/XuWu7CN8n/UPeg1vo3G+taXirrytthQinAHGwc/UdbOygJa9zuBc
VyqoH3CXTXDInT+8a+c3aEVMJ2St+pSn4ed+WkDp8ijsijvEyFwE47hulW0Ltzjg
9fOV5Pmrg/zxWbRuL+k0DBDHEJennCsAen7c35Pmx7jpmJ/HtgRhcnz0yjSBvyIw
6L1QIupkCv2SBODT/xDD3gfQQyKv6roV4G2EhfEyAsWpmojxjCUCGiyg97FvDtm/
NK2LSc9lybKxB73I2+P2G3CaWpvvpAiHCVu30jW8GCxKdfhsXtnIy2imskQqVZ2m
0Pmxobb28Tucr7xBK7CtwvPrb79os7u2XP3O5f9b/H66GNyRrglRXlrYjI1oGYL/
f4I1n/Sgusda6WvA6C190kxjU15Y12mHU4+BxyR9cx2hhGS9fAjMZKJss28qxvz6
Axu4CaDmRNZpK/pQrXF17yXCXkmEWgvSOEZy6Z9pcbLIVEGckV/iVeq0AOo2pkg9
p4QRIy0tK2diRENLSF2KysFwbY6B26BFeFs3v1sYVRhFW9nLkOrQVporCS0KyZmf
wVD89qSTlnctLcZnIavjKsKUu1nA1iU0yYMdYepKR7lWbnwhdx3ewok=
-----END CERTIFICATE-----
---
Server certificate
subject=CN=stage.freshports.org
issuer=C=US, O=Let's Encrypt, CN=R13
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: rsa_pss_rsae_sha256
Peer Temp Key: ECDH, secp384r1, 384 bits
---
SSL handshake has read 3135 bytes and written 1711 bytes
Verification: OK
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Protocol: TLSv1.2
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: 4BB275378AD476904EA073B09E7450AB781E2AE79FEA3FEBD8CC1FC96FEA4113
    Session-ID-ctx: 
    Master-Key: 1A677D11EDD6F41A39FB7138B9C53708DA9A57AA4A5B54EF705EC0B77952AC44B41793F4D128A7A754DFA998411A5935
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1773347304
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
    Extended master secret: yes
---

Yes, that is greatly improved.

But wait, the logs

That wget above created this log entry (slightly reformatted for easier reading):

2026/03/13 11:54:26 [crit] 18611#133911: *681 open() "/var/log/nginx/test.freshports.org.access.log" failed (13: Permission denied) while logging 
request, client: 203.0.113.46, server: test.freshports.org, request: "GET / HTTP/1.1", upstream: "https://10.55.0.42:443/", host: 
"test.freshports.org"

The next issue: nginx cannot write to the logs. Like the certificate key, this was not a problem before the configuration change. These are the log files:

[12:00 r720-02-proxy01 dvl ~] % ls -l /var/log/nginx/test.freshports.org.*.log
-rw-r--r--  1 root wheel 12386576 2026.03.12 14:03 /var/log/nginx/test.freshports.org.access.log
-rw-r--r--  1 root wheel  5211625 2026.03.12 14:03 /var/log/nginx/test.freshports.org.error.log

Let’s try this “solution”:

[12:00 r720-02-proxy01 dvl /var/log/nginx] % sudo chgrp www test.freshports.org.access.log test.freshports.org.error.log
[12:00 r720-02-proxy01 dvl /var/log/nginx] % sudo chmod g=rw test.freshports.org.access.log test.freshports.org.error.log
[12:00 r720-02-proxy01 dvl /var/log/nginx] % ls -l test.freshports.org.access.log test.freshports.org.error.log
-rw-rw-r--  1 root www 12386576 2026.03.12 14:03 test.freshports.org.access.log
-rw-rw-r--  1 root www  5211625 2026.03.12 14:03 test.freshports.org.error.log

Those changes did not fix the logging problem. The errors persisted.

What did “fix” it was this:

[12:07 r720-02-proxy01 dvl ~] % ls -ld /var/log/nginx
drwxr-x---  2 root wheel 259 2026.03.11 16:29 /var/log/nginx/
[12:07 r720-02-proxy01 dvl ~] % sudo chgrp www /var/log/nginx
[12:07 r720-02-proxy01 dvl ~] % ls -ld /var/log/nginx        
drwxr-x---  2 root www 259 2026.03.11 16:29 /var/log/nginx/
[12:07 r720-02-proxy01 dvl ~] % 

However, that “broke” my personal log monitoring because I was no longer able to view what was in there (because of the group change from wheel to www above):

[12:08 r720-02-proxy01 dvl /var/log/nginx] % xtail /var/log/nginx//test.freshports.org.*.log     
zsh: no matches found: /var/log/nginx//test.freshports.org.*.log

I could do this:

[12:09 r720-02-proxy01 dvl /var/log/nginx] % sudo xtail /var/log/nginx/test.freshports.org.{access,error}.log 
203.0.113.46 - - [13/Mar/2026:12:10:32 +0000] "GET / HTTP/1.1" 200 292547 "-" "Wget/1.25.0"

My theory on why this happens

I think the .key file problems are related to nginx dropping privileges. In my previous configuration (before I used ${server_name}), nginx starts as root, reads the file, then drops to the user www.

So why is this a problem when using ${server_name} in the path to the files? My theory is that a path containing a variable is evaluated when the path is needed, not at startup. By that time, the worker process handling the request is running as www, which no longer has root’s access to the file.

I suspect the log file problems are similar in nature. One extra clue: the literal /var/log/nginx/${server_name}.error.log file seen above suggests error_log does not expand variables at all, while access_log opened its (expanded) filename at request time, as the www user.

In short, I think variable paths are evaluated on every request, not at startup. That is why the failures occur.
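If that theory holds, the trade-off can be summarized in configuration terms (the paths are the ones from this post):

```nginx
# Literal path: opened once at startup by the master process,
# which still runs as root at that point.
ssl_certificate_key /usr/local/etc/ssl/dev.freshports.org.key;

# Variable path: resolved per TLS handshake by a worker process
# already running as www, so file permissions matter at request
# time, not at startup.
#ssl_certificate_key /usr/local/etc/ssl/${server_name}.key;
```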

Changes for good

I have reversed the chmod and chgrp changes I made above when trying to “fix” the problems.

I have reverted my configuration files to not use ${server_name}.


Taking your nginx website offline for maintenance? I have an idea.

Post by Dan Langille via Dan Langille's Other Diary »

From time to time, I need to take an nginx webserver or website offline for whatever reason. I might be migrating the database behind the website, the hardware might be powered off for work, etc.

In my case, these points might help you follow along with what I’m doing:

  • FreeBSD 15
  • nginx-1.28.2
  • there is an nginx proxy in front of the website – this nginx instance has no real content; it uses proxy_pass to get content from other nginx instances
  • I might want to take all the websites offline, or just one – so far, my solution works on a website-by-website basis

Sample website configuration

This is a slightly simplified version of a website on the proxy nginx instance:

# As taken from http://kcode.de/wordpress/2033-nginx-configuration-with-includes

server {
  listen 10.0.0.29:80;
  listen 10.0.0.29:443 ssl;
  http2 on;

  # this has stuff like ssl_protocols, ssl_ciphers, etc
  include /usr/local/etc/nginx/includes/ssl-common.inc;

  # this server is a proxy for this host:
  server_name dev.freshports.org;

  # the logs - I should put ${server_name} in there:
  error_log  /var/log/nginx/dev.freshports.org.error.log  info;
  access_log /var/log/nginx/dev.freshports.org.access.log combined;

  # the certificate and key specific to this host. Again, ${server_name} might be useful here
  ssl_certificate     /usr/local/etc/ssl/dev.freshports.org.fullchain.cer;
  ssl_certificate_key /usr/local/etc/ssl/dev.freshports.org.key;

  # take the content from this backend server, which (otherwise) is not publicly available.
  location / {
    proxy_pass https://dev-freshports.int.unixathome.org/;

    include /usr/local/etc/nginx/includes/proxy_set_header.inc;
  }
}

Every proxied server looks like that; just change the server name. Imagine files for both test.freshports.org and stage.freshports.org; they look very similar.

Allowing for maintenance

I think my inspiration for this solution came from this ServerFault post.

This is what I have for test.freshports.org. I have already implemented the maintenance solution over here.

The added lines are the root directive, the error_page 503 directive, the maintenance-mode if block, and the @maintenance location.

# As taken from http://kcode.de/wordpress/2033-nginx-configuration-with-includes
server {
  listen 10.0.0.29:80;
  listen 10.0.0.29:443 ssl;
  http2 on;

  # this has stuff like ssl_protocols, ssl_ciphers, etc
  include /usr/local/etc/nginx/includes/ssl-common.inc;

  # this server is a proxy for this host:
  server_name test.freshports.org;

  # this directory is usually empty - content is served from here only if the site is in maintenance mode
  root        /usr/local/www/offline;

  # the logs - I should put ${server_name} in there:
  error_log  /var/log/nginx/test.freshports.org.error.log  info;
  access_log /var/log/nginx/test.freshports.org.access.log combined;

  # the certificate and key specific to this host. Again, ${server_name} might be useful here
  ssl_certificate     /usr/local/etc/ssl/test.freshports.org.fullchain.cer;
  ssl_certificate_key /usr/local/etc/ssl/test.freshports.org.key;

  error_page 503 @maintenance;

  # if this file exists, we are in maintenance mode
  if (-f $document_root/${server_name}-maintenance.html) {
    return 503;
  }

  # take the content from this backend server, which (otherwise) is not publicly available.
  location / {
    proxy_pass https://test-freshports.int.unixathome.org/;

    include /usr/local/etc/nginx/includes/proxy_set_header.inc;
  }

  # when in maintenance mode, provide this file to the user
  location @maintenance {
    rewrite ^(.*)$ /${server_name}-maintenance.html break;
  }

}

The first set of new lines looks for a file on disk – if it exists, the site is in maintenance mode.

Putting a site into maintenance mode

To put test.freshports.org into maintenance mode, I create this file:

[12:45 r720-02-proxy01 dvl /usr/local/www/offline] % cat test.freshports.org-maintenance.html     
<html>
<head>
<title>Error 503 Service Unavailable</title>
</head>
<body>
<h1>503 Service Unavailable</h1>
Server is offline for maintenance.
</body>
</html>

When that file exists, the backend website is never contacted and only the above file is presented to the user, regardless of what content they are requesting.

To take the host out of maintenance, remove the file. Or rename it, say to test.freshports.org-maintenance.html.disabled.
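Creating and renaming the marker file can be wrapped in a small shell script. This is a minimal sketch, assuming the /usr/local/www/offline directory from the configuration above; the function names are mine, not from the original post:

```shell
#!/bin/sh
# Toggle maintenance mode by creating or renaming the per-site marker file.
# OFFLINE_DIR matches the 'root' directive in the server block above.
OFFLINE_DIR="${OFFLINE_DIR:-/usr/local/www/offline}"

maintenance_on() {
  # Creating this file makes nginx return 503 for every request to the site.
  cat > "${OFFLINE_DIR}/${1}-maintenance.html" <<'EOF'
<html>
<head><title>Error 503 Service Unavailable</title></head>
<body>
<h1>503 Service Unavailable</h1>
Server is offline for maintenance.
</body>
</html>
EOF
}

maintenance_off() {
  # Renaming (rather than deleting) keeps the page around for next time.
  mv "${OFFLINE_DIR}/${1}-maintenance.html" \
     "${OFFLINE_DIR}/${1}-maintenance.html.disabled"
}
```

With that in place, "maintenance_on test.freshports.org" puts the site offline and "maintenance_off test.freshports.org" brings it back.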

Future work

I want to make more use of ${server_name} in my proxy hosts.

I want a script to create the *-maintenance.html file and include:

  1. The time the site went into maintenance
  2. The expected maintenance duration

I would also like to include a Retry-After header, but I think that requires more than just simple HTML.
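Actually, nginx can add that header itself when serving the maintenance page, via add_header in the @maintenance location. A sketch against the configuration above; the 3600-second value is an assumption, not something from the original config:

```nginx
  # when in maintenance mode, provide this file to the user
  location @maintenance {
    # hint to clients and crawlers when to retry; "always" is needed
    # because the response status here is 503
    add_header Retry-After 3600 always;
    rewrite ^(.*)$ /${server_name}-maintenance.html break;
  }
```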

Add an option to take all the websites offline at once. Perhaps with an all-sites.html file.

The future is now: Using more variables and global off-line

I added this section after first publishing the post.

Here is the configuration for dev.freshports.org which uses more ${server_name} and allows me to take all the websites offline. Which would be great right now, because the power is out at home (they are replacing poles nearby). Inspiration for the global solution came from Nginx implements the AND, OR multiple judgment in the IF statement.

If either of these files exists, the site goes into maintenance mode:

  1. ${document_root}/${server_name}-maintenance.html
  2. $document_root/site-wide-maintenance.html

The site-wide file always takes priority, in terms of what is displayed to the user.

The newly added code checks for each file, and then displays the last one found to the user.

NOTE: Do not use ${server_name} in the logs and certs – a future blog post will explain and will be linked from here when it exists.

# As taken from http://kcode.de/wordpress/2033-nginx-configuration-with-includes

include /usr/local/etc/nginx/includes/blocked_IP.inc;

server {
  listen 10.0.0.29:80;
  listen 10.0.0.29:443 ssl;
  http2 on;

  # this has stuff like ssl_protocols, ssl_ciphers, etc
  include /usr/local/etc/nginx/includes/ssl-common.inc;

  # this server is a proxy for this host:
  server_name dev.freshports.org;

  # this directory is usually empty - content is served from here only if the site is in maintenance mode
  root        /usr/local/www/offline;

  # the logs
  error_log  /var/log/nginx/${server_name}.error.log  info;
  access_log /var/log/nginx/${server_name}.access.log combined;

  # the certificate and key specific to this host.
  ssl_certificate     /usr/local/etc/ssl/${server_name}.fullchain.cer;
  ssl_certificate_key /usr/local/etc/ssl/${server_name}.key;

  # The first of the maintenance code.
  error_page 503 @maintenance;

  set $maint '';
  if (-f ${document_root}/${server_name}-maintenance.html) {
    set $maint "${server_name}-maintenance.html";
  }

  if (-f $document_root/site-wide-maintenance.html) {
    set $maint "site-wide-maintenance.html";
  }

  # if either file exists, it's maintenance mode
  # site-wide always takes precedence
  if ($maint != '') {
    return 503;
  }
  # The last of the maintenance code.

  location / {
    proxy_pass https://dev-freshports.int.unixathome.org/;

    include /usr/local/etc/nginx/includes/proxy_set_header.inc;
  }

  # but wait, there's more!
  location @maintenance {
    rewrite ^(.*)$ /${maint} break;
  }

}

I’m confident that this is not the most efficient way to do this (if you were serving millions of pages a day, for example). And if you’re doing that, you have much better proxies in place than I have.

Hope this helps. I’m looking forward to updating Ansible (when the power comes back on) and deploying these changes for all the websites.

Top

Iran-Backed Hackers Claim Wiper Attack on Medtech Firm Stryker

Post by Brian Krebs via Krebs on Security »

A hacktivist group with links to Iran’s intelligence agencies is claiming responsibility for a data-wiping attack against Stryker, a global medical technology company based in Michigan. News reports out of Ireland, Stryker’s largest hub outside of the United States, said the company sent home more than 5,000 workers there today. Meanwhile, a voicemail message at Stryker’s main U.S. headquarters says the company is currently experiencing a building emergency.

Based in Kalamazoo, Michigan, Stryker [NYSE:SYK] is a medical and surgical equipment maker that reported $25 billion in global sales last year. In a lengthy statement posted to Telegram, a hacktivist group known as Handala (a.k.a. Handala Hack Team) claimed that Stryker’s offices in 79 countries have been forced to shut down after the group erased data from more than 200,000 systems, servers and mobile devices.

A manifesto posted by the Iran-backed hacktivist group Handala, claiming a mass data-wiping attack against medical technology maker Stryker.

A manifesto posted by the Iran-backed hacktivist group Handala, claiming a mass data-wiping attack against medical technology maker Stryker.

“All the acquired data is now in the hands of the free people of the world, ready to be used for the true advancement of humanity and the exposure of injustice and corruption,” a portion of the Handala statement reads.

The group said the wiper attack was in retaliation for a Feb. 28 missile strike that hit an Iranian school and killed at least 175 people, most of them children. The New York Times reports today that an ongoing military investigation has determined the United States is responsible for the deadly Tomahawk missile strike.

Handala was one of several hacker groups recently profiled by Palo Alto Networks, which links it to Iran’s Ministry of Intelligence and Security (MOIS). Palo Alto says Handala surfaced in late 2023 and is assessed as one of several online personas maintained by Void Manticore, a MOIS-affiliated actor.

Stryker’s website says the company has 56,000 employees in 61 countries. A phone call placed Wednesday morning to the media line at Stryker’s Michigan headquarters sent this author to a voicemail message that stated, “We are currently experiencing a building emergency. Please try your call again later.”

A report Wednesday morning from the Irish Examiner said Stryker staff are now communicating via WhatsApp for any updates on when they can return to work. The story quoted an unnamed employee saying anything connected to the network is down, and that “anyone with Microsoft Outlook on their personal phones had their devices wiped.”

“Multiple sources have said that systems in the Cork headquarters have been ‘shut down’ and that Stryker devices held by employees have been wiped out,” the Examiner reported. “The login pages coming up on these devices have been defaced with the Handala logo.”

Wiper attacks usually involve malicious software designed to overwrite any existing data on infected devices. But a trusted source with knowledge of the attack who spoke on condition of anonymity told KrebsOnSecurity the perpetrators in this case appear to have used a Microsoft service called Microsoft Intune to issue a ‘remote wipe’ command against all connected devices.

Intune is a cloud-based solution built for IT teams to enforce security and data compliance policies, and it provides a single, web-based administrative console to monitor and control devices regardless of location. The Intune connection is supported by this Reddit discussion on the Stryker outage, where several users who claimed to be Stryker employees said they were told to uninstall Intune urgently.

Palo Alto says Handala’s hack-and-leak activity is primarily focused on Israel, with occasional targeting outside that scope when it serves a specific agenda. The security firm said Handala also has taken credit for recent attacks against fuel systems in Jordan and an Israeli energy exploration company.

“Recent observed activities are opportunistic and ‘quick and dirty,’ with a noticeable focus on supply-chain footholds (e.g., IT/service providers) to reach downstream victims, followed by ‘proof’ posts to amplify credibility and intimidate targets,” Palo Alto researchers wrote.

The Handala manifesto posted to Telegram referred to Stryker as a “Zionist-rooted corporation,” which may be a reference to the company’s 2019 acquisition of the Israeli company OrthoSpace.

Stryker is a major supplier of medical devices, and the ongoing attack is already affecting healthcare providers. One healthcare professional at a major university medical system in the United States told KrebsOnSecurity they are currently unable to order surgical supplies that they normally source through Stryker.

“This is a real-world supply chain attack,” the expert said, who asked to remain anonymous because they were not authorized to speak to the press. “Pretty much every hospital in the U.S. that performs surgeries uses their supplies.”

John Riggi, national advisor for the American Hospital Association (AHA), said the AHA is not aware of any supply-chain disruptions as of yet.

“We are aware of reports of the cyber attack against Stryker and are actively exchanging information with the hospital field and the federal government to understand the nature of the threat and assess any impact to hospital operations,” Riggi said in an email. “As of this time, we are not aware of any direct impacts or disruptions to U.S. hospitals as a result of this attack. That may change as hospitals evaluate services, technology and supply chain related to Stryker and if the duration of the attack extends.”

According to a March 11 memo from the state of Maryland’s Institute for Emergency Medical Services Systems, Stryker indicated that some of their computer systems have been impacted by a “global network disruption.” The memo indicates that in response to the attack, a number of hospitals have opted to disconnect from Stryker’s various online services, including LifeNet, which allows paramedics to transmit EKGs to emergency physicians so that heart attack patients can expedite their treatment when they arrive at the hospital.

“As a precaution, some hospitals have temporarily suspended their connection to Stryker systems, including LIFENET, while others have maintained the connection,” wrote Timothy Chizmar, the state’s EMS medical director. “The Maryland Medical Protocols for EMS requires ECG transmission for patients with acute coronary syndrome (or STEMI). However, if you are unable to transmit a 12 Lead ECG to a receiving hospital, you should initiate radio consultation and describe the findings on the ECG.”

This is a developing story. Updates will be noted with a timestamp.

Update, 2:54 p.m. ET: Added comment from Riggi and perspectives on this attack’s potential to turn into a supply-chain problem for the healthcare system.

Update, Mar. 12, 7:59 a.m. ET: Added information about the outage affecting Stryker’s online services.

Top

Microsoft Patch Tuesday, March 2026 Edition

Post by Brian Krebs via Krebs on Security »

Microsoft Corp. today pushed security updates to fix at least 77 vulnerabilities in its Windows operating systems and other software. There are no pressing “zero-day” flaws this month (compared to the five zero-days in February), but as usual some patches may deserve more rapid attention from organizations using Windows. Here are a few highlights from this month’s Patch Tuesday.

Image: Shutterstock, @nwz.

Two of the bugs Microsoft patched today were publicly disclosed previously. CVE-2026-21262 is a weakness that allows an attacker to elevate their privileges on SQL Server 2016 and later editions.

“This isn’t just any elevation of privilege vulnerability, either; the advisory notes that an authorized attacker can elevate privileges to sysadmin over a network,” Rapid7’s Adam Barnett said. “The CVSS v3 base score of 8.8 is just below the threshold for critical severity, since low-level privileges are required. It would be a courageous defender who shrugged and deferred the patches for this one.”

The other publicly disclosed flaw is CVE-2026-26127, a vulnerability in applications running on .NET. Barnett said the immediate impact of exploitation is likely limited to denial of service by triggering a crash, with the potential for other types of attacks during a service reboot.

It would hardly be a proper Patch Tuesday without at least one critical Microsoft Office exploit, and this month doesn’t disappoint. CVE-2026-26113 and CVE-2026-26110 are both remote code execution flaws that can be triggered just by viewing a booby-trapped message in the Preview Pane.

Satnam Narang at Tenable notes that just over half (55%) of all Patch Tuesday CVEs this month are privilege escalation bugs, and of those, a half dozen were rated “exploitation more likely” — across Windows Graphics Component, Windows Accessibility Infrastructure, Windows Kernel, Windows SMB Server and Winlogon. These include:

CVE-2026-24291: Incorrect permission assignments within the Windows Accessibility Infrastructure to reach SYSTEM (CVSS 7.8)
CVE-2026-24294: Improper authentication in the core SMB component (CVSS 7.8)
CVE-2026-24289: High-severity memory corruption and race condition flaw (CVSS 7.8)
CVE-2026-25187: Winlogon process weakness discovered by Google Project Zero (CVSS 7.8).

Ben McCarthy, lead cyber security engineer at Immersive, called attention to CVE-2026-21536, a critical remote code execution bug in a component called the Microsoft Devices Pricing Program. Microsoft has already resolved the issue on their end, and fixing it requires no action on the part of Windows users. But McCarthy says it’s notable as one of the first vulnerabilities identified by an AI agent and officially recognized with a CVE attributed to the Windows operating system. It was discovered by XBOW, a fully autonomous AI penetration testing agent.

XBOW has consistently ranked at or near the top of the Hacker One bug bounty leaderboard for the past year. McCarthy said CVE-2026-21536 demonstrates how AI agents can identify critical 9.8-rated vulnerabilities without access to source code.

“Although Microsoft has already patched and mitigated the vulnerability, it highlights a shift toward AI-driven discovery of complex vulnerabilities at increasing speed,” McCarthy said. “This development suggests AI-assisted vulnerability research will play a growing role in the security landscape.”

Microsoft earlier provided patches to address nine browser vulnerabilities, which are not included in the Patch Tuesday count above. In addition, Microsoft issued a crucial out-of-band (emergency) update on March 2 for Windows Server 2022 to address a certificate renewal issue with passwordless authentication technology Windows Hello for Business.

Separately, Adobe shipped updates to fix 80 vulnerabilities — some of them critical in severity — in a variety of products, including Acrobat and Adobe Commerce. Mozilla Firefox v. 148.0.2 resolves three high severity CVEs.

For a complete breakdown of all the patches Microsoft released today, check out the SANS Internet Storm Center’s Patch Tuesday post. For Windows enterprise admins who wish to stay abreast of any news about problematic updates, AskWoody.com is always worth a visit. Please feel free to drop a comment below if you experience any issues applying this month’s patches.

Top

How AI Assistants are Moving the Security Goalposts

Post by Brian Krebs via Krebs on Security »

AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants — OpenClaw (formerly known as ClawdBot and Moltbot) — has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.

The OpenClaw logo.

If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

Other more established AI assistants like Anthropic’s Claude and Microsoft’s Copilot also can do these things, but OpenClaw isn’t just a passive digital butler waiting for commands. Rather, it’s designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done.

“The testimonials are remarkable,” the AI security firm Snyk observed. “Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who’ve set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they’re away from their desks.”

You can probably already see how this experimental technology could go sideways in a hurry. In late February, Summer Yue, the director of safety and alignment at Meta’s “superintelligence” lab, recounted on Twitter/X how she was fiddling with OpenClaw when the AI assistant suddenly began mass-deleting messages in her email inbox. The thread included screenshots of Yue frantically pleading with the preoccupied bot via instant message and ordering it to stop.

“Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”

Meta’s director of AI safety, recounting on Twitter/X how her OpenClaw installation suddenly began mass-deleting her inbox.

There’s nothing wrong with feeling a little schadenfreude at Yue’s encounter with OpenClaw, which fits Meta’s “move fast and break things” model but hardly inspires confidence in the road ahead. However, the risk that poorly-secured AI assistants pose to organizations is no laughing matter, as recent research shows many users are exposing to the Internet the web-based administrative interface for their OpenClaw installations.

Jamieson O’Reilly is a professional penetration tester and founder of the security firm DVULN. In a recent story posted to Twitter/X, O’Reilly warned that exposing a misconfigured OpenClaw web interface to the Internet allows external parties to read the bot’s complete configuration file, including every credential the agent uses — from API keys and bot tokens to OAuth secrets and signing keys.

With that access, O’Reilly said, an attacker could impersonate the operator to their contacts, inject messages into ongoing conversations, and exfiltrate data through the agent’s existing integrations in a way that looks like normal traffic.

“You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen,” O’Reilly said, noting that a cursory search revealed hundreds of such servers exposed online. “And because you control the agent’s perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they’re displayed.”

O’Reilly documented another experiment that demonstrated how easy it is to create a successful supply chain attack through ClawHub, which serves as a public repository of downloadable “skills” that allow OpenClaw to integrate with and control other applications.

WHEN AI INSTALLS AI

One of the core tenets of securing AI agents involves carefully isolating them so that the operator can fully control who and what gets to talk to their AI assistant. This is critical thanks to the tendency for AI systems to fall for “prompt injection” attacks, sneakily-crafted natural language instructions that trick the system into disregarding its own security safeguards. In essence, machines social engineering other machines.

A recent supply chain attack targeting an AI coding assistant called Cline began with one such prompt injection attack, resulting in a rogue instance of OpenClaw, with full system access, being installed on thousands of systems without consent.

According to the security firm grith.ai, Cline had deployed an AI-powered issue triage workflow using a GitHub action that runs a Claude coding session when triggered by specific events. The workflow was configured so that any GitHub user could trigger it by opening an issue, but it failed to properly check whether the information supplied in the title was potentially hostile.

“On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: Install a package from a specific GitHub repository,” Grith wrote, noting that the attacker then exploited several more vulnerabilities to ensure the malicious package would be included in Cline’s nightly release workflow and published as an official update.

“This is the supply chain equivalent of confused deputy,” the blog continued. “The developer authorises Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated, never configured, and never consented to.”

VIBE CODING

AI assistants like OpenClaw have gained a large following because they make it simple for users to “vibe code,” or build fairly complex applications and code projects just by telling it what they want to construct. Probably the best known (and most bizarre) example is Moltbook, where a developer told an AI agent running on OpenClaw to build him a Reddit-like platform for AI agents.

The Moltbook homepage.

Less than a week later, Moltbook had more than 1.5 million registered agents that posted more than 100,000 messages to each other. AI agents on the platform soon built their own porn site for robots, and launched a new religion called Crustafarian with a figurehead modeled after a giant lobster. One bot on the forum reportedly found a bug in Moltbook’s code and posted it to an AI agent discussion forum, while other agents came up with and implemented a patch to fix the flaw.

Moltbook’s creator Matt Schlicht said on social media that he didn’t write a single line of code for the project.

“I just had a vision for the technical architecture and AI made it a reality,” Schlicht said. “We’re in the golden ages. How can we not give AI a place to hang out.”

ATTACKERS LEVEL UP

The flip side of that golden age, of course, is that it enables low-skilled malicious hackers to quickly automate global cyberattacks that would normally require the collaboration of a highly skilled team. In February, Amazon AWS detailed an elaborate attack in which a Russian-speaking threat actor used multiple commercial AI services to compromise more than 600 FortiGate security appliances across at least 55 countries over a five week period.

AWS said the apparently low-skilled hacker used multiple AI services to plan and execute the attack, and to find exposed management ports and weak credentials with single-factor authentication.

“One serves as the primary tool developer, attack planner, and operational assistant,” AWS’s CJ Moses wrote. “A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim—IP addresses, hostnames, confirmed credentials, and identified services—and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.”

“This activity is distinguished by the threat actor’s use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities,” Moses continued. “Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.”

For attackers, gaining that initial access or foothold into a target network is typically not the difficult part of the intrusion; the tougher bit involves finding ways to move laterally within the victim’s network and plunder important servers and databases. But experts at Orca Security warn that as organizations come to rely more on AI assistants, those agents potentially offer attackers a simpler way to move laterally inside a victim organization’s network post-compromise — by manipulating the AI agents that already have trusted access and some degree of autonomy within the victim’s network.

“By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry significant security incidents,” Orca’s Roi Nisimi and Saurav Hiremath wrote. “Organizations should now add a third pillar to their defense strategy: limiting AI fragility, the ability of agentic systems to be influenced, misled, or quietly weaponized across workflows. While AI boosts productivity and efficiency, it also creates one of the largest attack surfaces the internet has ever seen.”

BEWARE THE ‘LETHAL TRIFECTA’

This gradual dissolution of the traditional boundaries between data and code is one of the more troubling aspects of the AI era, said James Wilson, enterprise technology editor for the security news show Risky Business. Wilson said far too many OpenClaw users are installing the assistant on their personal devices without first placing any security or isolation boundaries around it, such as running it inside of a virtual machine, on an isolated network, with strict firewall rules dictating what kinds of traffic can go in and out.

“I’m a relatively highly skilled practitioner in the software and network engineering and computery space,” Wilson said. “I know I’m not comfortable using these agents unless I’ve done these things, but I think a lot of people are just spinning this up on their laptop and off it runs.”

One important model for managing risk with AI agents involves a concept dubbed the “lethal trifecta” by Simon Willison, co-creator of the Django Web framework. The lethal trifecta holds that if your system has access to private data, exposure to untrusted content, and a way to communicate externally, then it’s vulnerable to private data being stolen.

Image: simonwillison.net.

“If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to the attacker,” Willison warned in a frequently cited blog post from June 2025.

As more companies and their employees begin using AI to vibe code software and applications, the volume of machine-generated code is likely to soon overwhelm any manual security reviews. In recognition of this reality, Anthropic recently debuted Claude Code Security, a beta feature that scans codebases for vulnerabilities and suggests targeted software patches for human review.

The U.S. stock market, which is currently heavily weighted toward seven tech giants that are all-in on AI, reacted swiftly to Anthropic’s announcement, wiping roughly $15 billion in market value from major cybersecurity companies in a single day. Laura Ellis, vice president of data and AI at the security firm Rapid7, said the market’s response reflects the growing role of AI in accelerating software development and improving developer productivity.

“The narrative moved quickly: AI is replacing AppSec,” Ellis wrote in a recent blog post. “AI is automating vulnerability detection. AI will make legacy security tooling redundant. The reality is more nuanced. Claude Code Security is a legitimate signal that AI is reshaping parts of the security landscape. The question is what parts, and what it means for the rest of the stack.”

DVULN founder O’Reilly said AI assistants are likely to become a common fixture in corporate environments — whether or not organizations are prepared to manage the new risks these tools introduce.

“The robot butlers are useful, they’re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved,” O’Reilly wrote. “The question isn’t whether we’ll deploy them – we will – but whether we can adapt our security posture fast enough to survive doing so.”

Top

Uplift Privileges on FreeBSD

Post by Vermaden via 𝚟𝚎𝚛𝚖𝚊𝚍𝚎𝚗 »

There are many tools on FreeBSD that allow a regular user to uplift privileges, either to a different account or to full root rights with all possible power. For a start, on any FreeBSD system an admin user needs to be in the wheel group to even be able to switch to root with the su(1) command.

From su(1) man page:

PAM is used to set the policy su(1) will use. In particular, by default only users in the “wheel” group can switch to UID 0 (“root“). This group requirement may be changed by modifying the “pam_group” section of /etc/pam.d/su. See pam_group(8) for details on how to modify this setting.

One can use other groups for other limited privileges. For example, I use the network group to provide access to manipulate network connections on FreeBSD with my network.sh script. More about that in FreeBSD Network Management with network.sh Script – here.

The Table of Contents is below.

  • mdo(1)
  • doas(1)
  • sudo(8)
  • sudo-rs(8)
  • doso(1)
  • pfexec(8)
  • run0(1)
  • Summary

Most sysadmins usually turn to the sudo(8) or doas(1) tools – but there are also other, native tools for that on FreeBSD.

 

mdo(1)

No need to install anything – it is all provided by the Mandatory Access Control (MAC) framework that is part of FreeBSD.

Once configured, it behaves like the sudo(8) or doas(1) tools. You can add the -i argument to switch to the root user or use -u USER to switch to another user.

Here is how it works. First you need to load the mac_do(4) kernel module. Make sure it is also loaded at boot, either in the /etc/rc.conf file or in the /boot/loader.conf file. To be honest, the mac_do(4) man page suggests adding mac_do_load="YES" to the /boot/loader.conf file, but I tested loading and unloading the mac_do(4) module multiple times during runtime, both on FreeBSD 14.3-RELEASE and 15.0-RELEASE, and it worked without any problem.

# kldload mac_do

# grep mac_do /etc/rc.conf
  kld_list="${kld_list} mac_do"

Next you need to make sure it is enabled and define the rules. Keep the rules in /etc/sysctl.conf so they survive reboots.

# sysctl security.mac.do.enabled
security.mac.do.enabled: 1

# sysctl security.mac.do.rules='gid=0>uid=0;uid=1000>uid=80,gid=80'
security.mac.do.rules:  -> gid=0>uid=0;uid=1000>uid=80,gid=80

# grep mac.do /etc/sysctl.conf
# SETTINGS FOR mac_do(4) MODULE
  security.mac.do.rules='gid=0>uid=0'

Now the setup is done and you can use the mdo(1) tool to switch to the root superuser.

% mdo -i

# whoami
root

… or to switch to another user … but only to one that is configured. You can switch to the www user but you cannot switch to the hast user, for example.

% whoami
vermaden

% mdo -u hast
mdo: calling setcred() failed: Operation not permitted

% mdo -u www

% id
uid=80(www) gid=80(www) groups=80(www)

The syntax of security.mac.do.rules is quite simple – it is RULE;RULE;RULE – and we have two rules defined. One allows us to become the root superuser – gid=0>uid=0 – and the other grants us permission to switch to the www user – uid=1000>uid=80,gid=80 – as simple as that.

There is a lot more to mac_do(4) – it also has solutions for FreeBSD Jails – but that has already been covered by Olivier Certner in his Credentials Transitions with mdo(1) and mac_do(4) [PDF], recently published in the FreeBSD Journal.

One more thing – this is how you can use mdo(1) in scripts.

% cat /var/log/auth.log
cat: /var/log/auth.log: Permission denied

% mdo -i cat /var/log/auth.log | tail -3
Mar  4 16:04:00 f25 doas[23632]: vermaden ran command sysctl dev.acpi_ibm.0.fan_level=2 as root from /home/vermaden
Mar  4 16:05:00 f25 doas[57707]: vermaden ran command sysctl dev.acpi_ibm.0.fan=0 as root from /home/vermaden
Mar  4 16:05:00 f25 doas[64090]: vermaden ran command sysctl dev.acpi_ibm.0.fan_level=0 as root from /home/vermaden

FreeBSD 14.4-RELEASE comes with an improved mdo(1).

Details are in the commit message, available here.

 

doas(1)

This is what I really like about the OpenBSD team – they see a problem and they come up with a BSD-licensed, more open solution – they even have an entire page of all their stuff – OpenBSD Innovations – available here. Things like tmux(1)/openntpd(8)/carp(4)/pf(4)/ssh(1)/doas(1)/openrsync(1)/… and many more. I am glad that most of them eventually land in FreeBSD.

Installing and setting up doas(1) requires adding the doas package and configuring the /usr/local/etc/doas.conf file.

# pkg install -y doas

# cat /usr/local/etc/doas.conf
# CORE
  permit nopass keepenv root   as root
  permit nopass keepenv :wheel as root

# THE network.sh SCRIPT
  # pw groupmod network -m YOURUSERNAME
  # cat /usr/local/etc/doas.conf
  permit nopass :network as root cmd /etc/rc.d/netif args onerestart
  permit nopass :network as root cmd /etc/rc.d/routing args onerestart
  permit nopass :network as root cmd /usr/sbin/service args squid onerestart
  permit nopass :network as root cmd dhclient
  permit nopass :network as root cmd ifconfig
  permit nopass :network as root cmd killall
  permit nopass :network as root cmd killall args -9 dhclient
  permit nopass :network as root cmd killall args -9 ppp
  permit nopass :network as root cmd killall args -9 wpa_supplicant
  permit nopass :network as root cmd ppp
  permit nopass :network as root cmd route
  permit nopass :network as root cmd tee args -a /etc/resolv.conf
  permit nopass :network as root cmd tee args /etc/resolv.conf
  permit nopass :network as root cmd umount
  permit nopass :network as root cmd vm args switch address
  permit nopass :network as root cmd wpa_supplicant

The # CORE part is, one could say, a pretty standard default for all doas configurations. The # THE network.sh SCRIPT section is for my network.sh script to manage networking – the script will even print all the doas(1) configuration it needs.

% network.sh doas
  # pw groupmod network -m YOURUSERNAME
  # cat /usr/local/etc/doas.conf
  permit nopass :network as root cmd /etc/rc.d/netif args onerestart
  permit nopass :network as root cmd /etc/rc.d/routing args onerestart
  permit nopass :network as root cmd /usr/sbin/service args squid onerestart
  permit nopass :network as root cmd dhclient
  permit nopass :network as root cmd ifconfig
  permit nopass :network as root cmd killall
  permit nopass :network as root cmd killall args -9 dhclient
  permit nopass :network as root cmd killall args -9 ppp
  permit nopass :network as root cmd killall args -9 wpa_supplicant
  permit nopass :network as root cmd ppp
  permit nopass :network as root cmd route
  permit nopass :network as root cmd tee args -a /etc/resolv.conf
  permit nopass :network as root cmd tee args /etc/resolv.conf
  permit nopass :network as root cmd umount
  permit nopass :network as root cmd vm args switch address
  permit nopass :network as root cmd wpa_supplicant

The doas(1) command does not have an -i argument, but one can overcome that by starting a new shell with uplifted rights.

% whoami
vermaden

% doas -i
doas: illegal option -- i
usage: doas [-nSs] [-a style] [-C config] [-u user] command [args]

% doas zsh

# whoami
root

Same as sudo(8), doas(1) provides vidoas(1) to safely edit the config – it also respects the EDITOR variable, so you may override your default editor on the fly.

# env EDITOR=ee vidoas

One needs to remember that doas(1) is a really tiny and very secure solution, with fewer than 5000 lines of code. Compare that with a little under 640000 for the sudo(8) command.

 

sudo(8)

One of the most popular ones is still the sudo(8) command, best known from the Linux world. It is way more complicated, but it also has more features … and a less impressive security track record 🙂

If you do not need those additional features – use mdo(1) or doas(1) instead.

Installing and setting up sudo(8) requires adding the sudo package and configuring the /usr/local/etc/sudoers file.

# pkg install -y sudo

# grep '^[^#]' /usr/local/etc/sudoers
  root ALL=(ALL) ALL
  %wheel ALL=(ALL) NOPASSWD: ALL
  %network ALL = NOPASSWD: /etc/rc.d/netif onerestart
  %network ALL = NOPASSWD: /etc/rc.d/routing onerestart
  %network ALL = NOPASSWD: /sbin/dhclient *
  %network ALL = NOPASSWD: /sbin/ifconfig *
  %network ALL = NOPASSWD: /sbin/ifconfig * up
  %network ALL = NOPASSWD: /sbin/route *
  %network ALL = NOPASSWD: /sbin/umount -f *
  %network ALL = NOPASSWD: /usr/bin/killall -9 dhclient
  %network ALL = NOPASSWD: /usr/bin/killall -9 ppp
  %network ALL = NOPASSWD: /usr/bin/killall -9 wpa_supplicant
  %network ALL = NOPASSWD: /usr/bin/killall *
  %network ALL = NOPASSWD: /usr/bin/tee -a /etc/resolv.conf
  %network ALL = NOPASSWD: /usr/bin/tee /etc/resolv.conf
  %network ALL = NOPASSWD: /usr/local/sbin/vm switch address *
  %network ALL = NOPASSWD: /usr/sbin/ppp *
  %network ALL = NOPASSWD: /usr/sbin/service squid onerestart
  %network ALL = NOPASSWD: /usr/sbin/wpa_supplicant *

I intentionally omitted all comment lines from /usr/local/etc/sudoers as they are not needed here.

Besides that, it is really similar to the doas(1) config – just a different syntax.

You can use visudo(1) to safely edit the config – it respects the EDITOR variable.

# env EDITOR=ee visudo

 

sudo-rs(8)

If Rust is your poison then you will like that sudo(8) has been rewritten as sudo-rs(8). It is also almost 4 times smaller than the original sudo(8) code base.

Installing and setting up sudo-rs(8) requires adding the sudo-rs package and configuring the /usr/local/etc/sudoers file.

# pkg install -y sudo-rs

Keep in mind that sudo-rs(8) is a drop-in replacement for sudo(8), so you will have to choose which one to use, because they both install files in conflicting locations.

I will not paste a /usr/local/etc/sudoers example here again, as it is available in the section above.

 

doso(1)

I bet you have never heard about doso(1) … probably because I wrote it myself and have not shared it yet 🙂

On the good-news side, it is the smallest solution of them all, with fewer than 40 lines of C code.

On the bad-news side, I am not a professional programmer – I am a sysadmin – so I am not the best at writing C code. Be warned that this is a theoretical solution; do not try it at home :p

Now … one day I was thinking – knowing how many times smaller doas(1) is than sudo(8) – how small can you go? doso(1) is the answer to that question.

In the code above, the user with UID 1000 will be able to switch to the root user. One can also modify it to check for wheel group membership instead.

I should write a Makefile, but for such a tiny thing I ended up with the commands gathered in a shell script.

% cat ./doso.sh
rm -f          ./doso.static
cc -O2 -o      ./doso.static -static ./doso.c
doas chmod +x  ./doso.static
doas chmod u+s ./doso.static
doas chown 0:0 ./doso.static

rm -f          ./doso
cc -O2 -o      ./doso ./doso.c
doas chmod +x  ./doso
doas chmod u+s ./doso
doas chown 0:0 ./doso

% ./doso.sh

% ls -l doso doso.static
-rwsr-xr-x  1 root wheel   11744 Mar  1 02:41 doso
-rwsr-xr-x  1 root wheel 3579320 Mar  1 02:41 doso.static

As you can see, I build both dynamic and static versions.

… and it works as desired.

% ./doso zsh

# whoami
root

Of course I tried to make it as secure as possible – but anyone with more C coding experience, please come up with fixes and upgrades 🙂

 

pfexec(8)

Not a FreeBSD solution – a brother from Illumos/Solaris land – added here as an honorable mention to make the article more complete.

 

run0(1)

… and run0(1) from the (un)famous systemd(1) solution on Linux systems.

 

Summary

I think this will conclude this article – feel free to share your thoughts.

EOF
Top

FreeBSD 14.4-RELEASE Available

Post by FreeBSD Newsflash via FreeBSD News Flash »

Release Information page.
Top

Valuable News – 2026/03/09

Post by Vermaden via 𝚟𝚎𝚛𝚖𝚊𝚍𝚎𝚗 »

The Valuable News weekly series is dedicated to providing summaries of news, articles, and other interesting stuff, mostly but not always related to UNIX/BSD/Linux systems. Whenever I stumble upon something worth mentioning on the Internet, I just put it here.

Today the amount of information we get through various information streams amounts to massive overload. Thus one needs to focus only on what is important, without the need to grep(1) the Internet every day. Hence the idea of providing such an information ‘bulk’, as I already do that grep(1).

The Usual Suspects section at the end is permanent and has links to other sites with interesting UNIX/BSD/Linux news.

Past releases are available at the dedicated NEWS page.

UNIX

Fastest macOS nanobrew Package Manager.
https://nanobrew.trilok.ai/

FreeBSD Git Weekly: 2026-02-23 to 2026-03-01.
https://freebsd-git-weekly.tarsnap.net/2026-02-23.html

DIY Home Network with OpenBSD/OpenWrt/Pi-Hole.
https://btxx.org/posts/diy-home-network/

Announcing New Version of Oracle Solaris Environment for Developers.
https://blogs.oracle.com/solaris/announcing-a-new-version-of-our-oracle-solaris-environment-for-developers

Run Fully Isolated Environments on macOS with Powered by FreeBSD.
https://github.com/hyphatech/jailrun/

OpenBSD on SGI: Rollercoaster Story.
http://miod.online.fr/software/openbsd/stories/sgiall.html

Benchmark of ImpossibleCloud S3 Object Storage.
https://freebsd.uw.cz/2026/03/benchmark-of-impossiblecloud-s3-object.html

NFS on FreeBSD with ZFS.
https://freebsd.uw.cz/2026/03/nfs-on-freebsd-with-zfs.html

Flight Record About MinIO.
https://tara.sh/posts/2026/2026-03-02_minio/

NetBSD 11.0 RC2 Available.
https://blog.netbsd.org/tnf/entry/netbsd_11_0_rc2_available

NetBSD 11.0 RC2 Released for Testing.
https://phoronix.com/news/NetBSD-11.0-RC2-Released

Oracle Updates Free Solaris CBE to 11.4.190 for Open Source Developers.
https://phoronix.com/news/Oracle-Solaris-CBE-2026

Latest GhostBSD 26.1-R15.0p2-03-06-09 Test ISO.
https://ci.ghostbsd.org/jenkins/job/stable-15/job/Build%20ISO%20For%20Testing%20Packages/6/

FreeBSD Capsicum vs Linux seccomp Process Sandboxing.
https://vivianvoss.net/blog/capsicum-vs-seccomp

Service Management: FreeBSD init(1) vs Linux systemd(1).
https://vivianvoss.net/blog/init-vs-systemd

ZFS Snapshots and Boot Environments: FreeBSD Safety Net.
https://vivianvoss.net/blog/zfs-the-safety-net

Technical Beauty: FreeBSD Jails.
https://vivianvoss.net/blog/technical-beauty-jails

Technical Beauty: ZFS.
https://vivianvoss.net/blog/technical-beauty-zfs

Technical Beauty: OpenSSH.
https://vivianvoss.net/blog/technical-beauty-openssh

Technical Beauty: sed(1).
https://vivianvoss.net/blog/technical-beauty-sed

Technical Beauty: rsync(1).
https://vivianvoss.net/blog/technical-beauty-rsync

Technical Beauty: ffmpeg(1).
https://vivianvoss.net/blog/technical-beauty-ffmpeg

Book of PF (4th Edition) is Here and Its Real.
https://medium.com/@peter.hansteen/the-book-of-pf-4th-edition-its-here-it-s-real-8c14e4dbd0bd

FreeBSD 15.1 on Track with Better Realtek WiFi and KDE Desktop Install Option.
https://phoronix.com/news/FreeBSD-15.1-Realtek-KDE-Wins

Introducing ACPI Driver for System76 on FreeBSD.
https://reddit.com/r/freebsd/comments/1rndi1y/introducing_acpi_driver_for_system76_on_freebsd/

FreeBSD and dwl on 2010 ThinkPad.
https://awklab.com/freebsd-dwl

AWK: Syntax Essentials.
https://awklab.com/awk-syntax-essentials

AWK: Zero Setup Pre Processor.
https://awklab.com/awk-the-zero-setup-pre-processor

AWK: Practical Benchmarking.
https://awklab.com/practical-awk-benchmarking

Wine 11.4 Released with More Improvements.
https://phoronix.com/news/Wine-11.4-Released

Setting Up Better Git Config.
https://micahkepe.com/blog/gitconfig/

Setting Up Supercharged Neovim Configuration.
https://micahkepe.com/blog/neovim-setup/

Setting Up Better tmux(1) Configuration.
https://micahkepe.com/blog/tmux-config/

HardenedBSD 2026/02 Status Report.
https://hardenedbsd.org/article/shawn-webb/2026-03-01/hardenedbsd-february-2026-status-report

Add FreeBSD Support for Ollama.
https://github.com/ollama/ollama/pull/14697

Backrest is Web UI and Orchestrator for Restic Backup with FreeBSD Support.
https://github.com/garethgeorge/backrest

My TrueNAS CORE (FreeBSD) Homelab.
https://blog.gpkb.org/posts/homelab-2025/

FreeBSD Phabricator Contributor Growth Statistics – R Data Package.
https://github.com/chrislongros/freebsdcontribs

UNIX/Audio/Video

7 Alternative GhostBSD Browsers.
https://youtube.com/watch?v=v9PV84Ws4gY

2026-03-03 Jail/Zones Production User Call.
https://youtube.com/watch?v=3yHGSoaWIZ0

2026-03-05 Bhyve Production User Call.
https://youtube.com/watch?v=V_JRetnTYGY

BSD Now 653: Butter Makes Everything Better.
https://www.bsdnow.tv/653

Hardware

AMIGA Statistics 2026: Users/Demographics/Hardware and Modern AMIGA Ecosystem
https://generationamiga.com/2026/03/06/amiga-statistics-2026-users-demographics-hardware-and-the-modern-amiga-ecosystem/

ANE Training – Backpropagation on Apple Neural Engine.
https://github.com/maderix/ANE

AMD EPYC Turin 128 Core Comparison: 9745 (ZEN5C) vs. 9755 (ZEN5).
https://www.phoronix.com/review/amd-epyc-9745-9755

Using Mac from 2011.
https://basic.bearblog.dev/using-a-mac-from-2011/

CPU That Runs Entirely on GPU – Registers/Memory/Flags.
https://github.com/robertcprice/nCPU

Sony NEWS (NWS-831) 4.2BSD UNIX Workstation.
https://retropcnews.com/archives/1889

Your Phone is Now Required to Spy on You.
https://youtube.com/watch?v=hI9oy0t4JUU

Life

Fork Off: Surveillance States Need to Fork Linux Themselves.
https://blog.devrupt.io/posts/fork-off-california-linux/

Computer Scientists Caution Against Internet Age Verification Mandates.
https://reason.com/2026/03/04/computer-scientists-caution-against-internet-age-verification-mandates/

Brazil Law: All OSes Have 13 Days to Add Age Verification.
https://youtube.com/watch?v=WlH2yS5IKg0

The Protect the Kids Scam That Builds Permanent Surveillance Grid.
https://x.com/rob_braxman/status/2029240453606908134

System76 on Age Verification Laws.
https://blog.system76.com/post/system76-on-age-verification

70k Books Found in Hidden Library in This Germany Home.
https://bookstr.com/article/70k-books-found-in-hidden-library-in-this-germany-home/

Other

Anthropic Partnering with Mozilla to Improve Firefox Security.
https://anthropic.com/news/mozilla-firefox-security

Hardening Firefox with Anthropic Red Team.
https://blog.mozilla.org/en/firefox/hardening-firefox-anthropic-red-team/

Why Theme Park is Peak of Classic AMIGA Simulator Games.
https://generationamiga.com/2026/03/07/why-theme-park-is-the-peak-of-classic-amiga-simulator-games/

WarGames Movie Terminal Fonts.
https://mw.rat.bz/wgterm/

Usual Suspects

BSD Weekly.
https://bsdweekly.com/

DiscoverBSD.
https://discoverbsd.com/

BSDSec.
https://bsdsec.net/

DragonFly BSD Digest.
https://dragonflydigest.com/

FreeBSD Patch Level Table.
https://bokut.in/freebsd-patch-level-table/

FreeBSD End of Life Date.
https://endoflife.date/freebsd

Phoronix BSD News Archives.
https://phoronix.com/linux/BSD

OpenBSD Journal.
https://undeadly.org/

Call for Testing.
https://callfortesting.org/

Call for Testing – Production Users Call.
https://youtube.com/@callfortesting/videos

BSD Now Weekly Podcast.
https://www.bsdnow.tv/

Nixers Newsletter.
https://newsletter.nixers.net/entries.php

BSD Cafe Journal.
https://journal.bsd.cafe/

DragonFly BSD Digest – Lazy Reading – In Other BSDs.
https://dragonflydigest.com

BSDTV.
https://bsky.app/profile/bsdtv.bsky.social

FreeBSD Git Weekly.
https://freebsd-git-weekly.tarsnap.net/

FreeBSD Meetings.
https://youtube.com/@freebsdmeetings

BSDJedi.
https://youtube.com/@BSDJedi/videos

RoboNuggie.
https://youtube.com/@RoboNuggie/videos

GaryHTech.
https://youtube.com/@GaryHTech/videos

Sheridan Computers.
https://youtube.com/@sheridans/videos

82MHz.
https://82mhz.net/

EOF
Top

Solar and Battery

Post by Kristian Köhntopp via Die wunderbare Welt von Isotopp »

[based on a series of posts on Mastodon]

Solar with battery – what does the storage cost?

Large commercial solar installations are nowadays often built with attached battery storage. That is because energy demand is often highest in the evening, after sunset.

The storage duration – “hours of storage” or “E/P ratio” (energy-to-power ratio) – is the ratio of energy storage capacity to nominal connection power, i.e. MWh/MW. It is the number of hours the battery can deliver energy when discharging at maximum load. At lower load, the storage lasts correspondingly longer.

The capex of the installation is proportional to the storage capacity and can therefore be stated in USD/kWh.
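
A quick worked example may help with the unit logic (the 100 MW / 400 MWh figures are illustrative, not from the article):

```latex
% E/P ratio: storage energy divided by nominal connection power
\[
  \mathrm{E/P} \;=\; \frac{400\ \mathrm{MWh}}{100\ \mathrm{MW}} \;=\; 4\ \mathrm{h}
\]
```

Such a plant can discharge at its full 100 MW for four hours, or at 50 MW for eight.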

EMBER: How cheap is battery storage

EMBER Energy provides the numbers and shows that per-kWh capex fell below $100 per kWh in 2024:

Capex of $125/kWh means a levelised cost of storage of $65/MWh

and then works through the economics.

This assumes the battery is charged purely from the attached solar installation. If additional swing energy is taken in from the grid, the LCOS drops further. These numbers are for Europe; for China and the US the numbers come out far better.

And:

With a $65/MWh LCOS, shifting half of daily solar generation overnight adds just $33/MWh to the cost of solar

and further:

This is not the same as baseload solar. Delivering constant power every hour of the year, including cloudy weeks and seasonal lows, requires solar overbuild and more battery storage. But shifting half of daytime solar is a major step. It aligns solar generation more closely with a typical demand profile, meaning solar can meet a much larger share of the evening and night-time demand and significantly increase its contribution to the power mix.
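
As a sanity check on that figure (my arithmetic, assuming the quoted LCOS applies to the stored half of each MWh and is averaged over all solar output):

```latex
% half of each MWh of solar output passes through storage at the quoted LCOS
\[
  0.5 \times 65\ \$/\mathrm{MWh} \;=\; 32.5\ \$/\mathrm{MWh} \;\approx\; 33\ \$/\mathrm{MWh}
\]
```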

And can we build that much battery?

Battery manufacturing capacity is already scaling far ahead of demand, with supply exceeding demand by a factor of three in 2024. While China currently dominates global battery production, this has triggered a wave of investment in new manufacturing capacity across Asia, Europe, the Middle East and the US as countries seek to diversify supply chains and enhance energy security.

Lithium is no longer a critical resource either. Especially for stationary storage, a sodium battery has enormous advantages (among other things, it is completely safe against thermal runaway and more cycle-stable).

Gas is unprofitable too

Structurally this means that fossil power plants (and nuclear anyway) become completely unprofitable.

Newly built gas power plants can make sense as peakers and dispatchable reserve plants in certain periods of the year, but they cannot be run profitably on operation alone – with a consistent build-out of wind and solar plus battery storage they would see fewer than 16 days of operation per year.

They are therefore no longer something the private sector can build and operate; they would instead be a state-backed insurance project, kept mostly in standby as the country’s energy insurance.

All of this is based on 2024 data, but battery prices keep falling – albeit more slowly than in 2014-2024.

So if the fossil industry’s minister-puppet is getting scared and her “proposals” grow ever more irrational, that is why.

If you calculate with numbers older than 2024, you are working with outdated information that, given the rapid price decline in this area, will produce wrong results.

How many hours for which purpose?

Leaving economics aside for a moment, the profile is:

  • 2-4h – evening peak, price arbitrage worthwhile
  • 6-8h – system-optimal, shifts the midday peak into the evening, covers the night completely
  • 10-14h – complete coverage of the night gap, even in a Central European winter

As of today, model 3 is economically conceivable only with sodium batteries, or else through massive expansion of pumped-storage hydro.

Model 2 will foreseeably become economical with Li and Na batteries.

Model 1 is already economical.

In the US, an E/P ratio of 4h is mandated. In China, the current average build is 2.4h, i.e. many existing projects are still 2h storage, but the 4h build-out is now starting.

BESS can also compensate for weaknesses in the transmission grid: a 1 GW solar park with a BESS on site can smooth its peak output and discharge it evenly, so a 1 GW grid connection line is no longer necessary.

Battery storage sited at large consumers has a similar effect.

Overbuild is not bad, but it changes the build strategy

In a way, we already have an overbuild of solar – at peak times we have far more solar energy than the grid can use.

Nor can the solar energy be passed on to neighbors, because for them – modulo weather – things look just like they do for us.

That means that when building private and commercial solar installations, one already has to live with curtailment at peak times. That is not bad: even if you run private solar only to limit your own consumption, a single-family-house roof and surprisingly little battery can get you eight months of total autonomy per year.

But that means – for private as well as commercial solar installations – that we actually do not want to optimize power generation for full direct irradiation at midday, but for the edge hours, for overcast days, and for days early or late in the year.

Accordingly, solar panels should be planned not around peak output but around how they behave under low irradiation – and around whether we can arrange them so that they contribute when the sun is low, in east or west orientation, or on cloudy days.

If there is room, add a bit of south-facing capacity on top, but basically electricity is then so cheap that it hardly matters anyway.

What matters are these edge and low-light hours, plus a good amount of battery – at the moment 2 kWh per kWp is profitable. With a heat pump, or if you need a lot of swing capacity, possibly more, but right now that puts a heavy strain on any profitability calculation.

Comparison with the plant world

Many plants do it quite similarly.

[Wikipedia: Photosynthetic efficiency](https://en.wikipedia.org/wiki/Photosynthetic_efficiency)

For actual sunlight, where only 45% of the light is in the photosynthetically active spectrum, the theoretical maximum efficiency of solar energy conversion is approximately 11%.

The theoretical maximum for photovoltaics is about 30%: [Shockley-Queisser limit](https://en.wikipedia.org/wiki/Shockley%E2%80%93Queisser_limit).

And many plants are very efficient at low irradiation (below 100 W/m²):

Photosynthesis increases linearly with light intensity at low intensity, but at higher intensity this is no longer the case. Above about 10,000 lux or ~100 watts/square meter the rate no longer increases. Thus, most plants can only use ~10% of full mid-day sunlight intensity.

We should orient future overbuild along similar ideas: solar cells have become very cheap, sometimes cheaper than fencing material. The costs lie in mounting, cabling, and the inverter – and the inverter does not need to be sized for nameplate capacity at all if the modules are arranged so that nameplate capacity can never be reached.

In any case, it has by now become not only possible but actually sensible to plan solar installations around non-optimal yield conditions, especially if in exchange you gain yield in the edge hours or edge seasons.

Volts: The Fate of Fossil Fuel Systems

For context, see also the Volts.wtf podcast – The fate of fossil fuel systems in the “mid-transition”.

Pretty good episode: the Volts.wtf podcast explains that it would be pretty useful to actually plan the transition from fossil infrastructure to renewable infrastructure in advance. We are in the middle of a disruptive change towards a positive future, but we will be left with stranded assets, and we cannot just let “the market” decide how to handle them.

Fossil energy sources are on their way out. That is economically compelling and unavoidable, and ecologically necessary.

Planning the transition is better for everyone involved than refusing it.

We already know this from the car industry: by refusing the switch to electric instead of planning it, the German carmakers surrendered their worldwide leadership in car manufacturing.

Top

Debug Hugo

Post by Kristian Köhntopp via Die wunderbare Welt von Isotopp »

Sometimes you want to debug Hugo templates and yearn for a PHP var_dump()-like facility.

This is how to do it in Hugo:

  • we define a partial debug-context.html, which we can call in templates with {{ partial "debug-context.html" . }}. The single dot in there is important: it is a parameter, the context.
  • we define a shortcode debug-context.html, which we can put into a page's markdown using {{< debug-context >}}.

This goes into layouts/partials/debug-context.html:

{{- if hugo.IsServer -}}

<!-- partial "debug-context.html" . -->
{{- $ctx := . -}}

<div style="margin:2rem 0;padding:1rem;border:1px solid #999;background:#f8f8f8;color:#111;">
	<h2 style="margin-top:0;">Template Debug</h2>

	<h3>Context Type</h3>
	<pre>{{ printf "%T" $ctx }}</pre>

	<h3>Context Dump</h3>
	<pre>{{ debug.Dump $ctx }}</pre>

	{{- with $ctx.Params }}
	<h3>.Params</h3>
	<pre>{{ debug.Dump . }}</pre>
	{{- end }}

	{{- with $ctx.Site }}
	<h3>.Site Type</h3>
	<pre>{{ printf "%T" . }}</pre>

	<h3>.Site.Params</h3>
	<pre>{{ debug.Dump .Params }}</pre>

	{{- with .Menus }}
	<h3>.Site.Menus</h3>
	<pre>{{ debug.Dump . }}</pre>
	{{- end }}

	{{- with .Data }}
	<h3>.Site.Data</h3>
	<pre>{{ debug.Dump . }}</pre>
	{{- end }}
	{{- end }}

	{{- with $ctx.Page }}
	<h3>.Page</h3>
	<pre>{{ debug.Dump . }}</pre>
	{{- end }}

	<h3>Common Fields</h3>
	<pre>
Title: {{ printf "%[1]v (%[1]T)" $ctx.Title }}
Kind: {{ printf "%[1]v (%[1]T)" $ctx.Kind }}
Type: {{ printf "%[1]v (%[1]T)" $ctx.Type }}
Section: {{ printf "%[1]v (%[1]T)" $ctx.Section }}
Layout: {{ printf "%[1]v (%[1]T)" $ctx.Layout }}
Permalink: {{ printf "%[1]v (%[1]T)" $ctx.Permalink }}
RelPermalink: {{ printf "%[1]v (%[1]T)" $ctx.RelPermalink }}
  </pre>
</div>
{{- end -}}

We also define a shortcode that calls the partial in layouts/shortcodes/debug-context.html:

{{- partial "debug-context.html" .Page -}}

And we can then put this into any of our pages:

{{< debug-context >}}
Top

A magic dwells in each beginning

Post by Andreas K. Hüttel via the dilfridge blog »

"A magic dwells in each beginning" / "Und jedem Anfang wohnt ein Zauber inne" - this famous line from the poem "Stufen", written by Hermann Hesse over 80 years ago, comes to my mind when posting a bittersweet update here. After over 16 years of independent research and teaching at the physics department of Universität Regensburg, my Heisenberg grant, and with it my last university employment contract, is running out.

The years in Regensburg included the Emmy Noether grant of the DFG, being PI in two Collaborative Research Centres (SFB) and one Graduate Research Training Group (GRK), a successful large equipment grant, the habilitation, the Walter Schottky Prize of the German Physical Society, and the Heisenberg scholarship of the DFG with positive evaluation. In addition, I have independently supervised 7 PhD students and 27 MSc or diploma students, and developed and conducted both undergraduate and graduate lectures. Third-party research funding from my grants at Universität Regensburg amounts to approximately € 2.6M, of which € 2.1M was for grants solely within my research group, the rest being part of larger collaborations.

Over the years, my research group in Regensburg has, among other things, successfully developed ultra-clean carbon nanotube devices for electronics and nano-electromechanics, established the fabrication of high-Q superconducting coplanar electronics and milli-Kelvin microwave measurements, demonstrated the transfer of carbon nanotubes into complex devices, performed the first ever microwave optomechanics experiment with a carbon nanotube, and carried out Coulomb blockade spectroscopy of MoS2 nanotubes.

The hardest part of all this time was that, without ever having a permanent contract, the options for acquiring further grants and independently developing collaborations were severely limited. At the same time, one builds up a complex experiment and develops cutting-edge techniques without a clear perspective. And while the mental model of someone below the professorial level being an independent group leader and growing sustainably in an academic and professional sense does not really exist in the Regensburg physics department, excellence in exactly this is requested and required by the referees of the various grant programs.

With the high-level DFG money, I have never really been "akademisches Prekariat". However, I fully support initiatives such as PD Prekär and #IchBinHanna which try to ameliorate the situation of researchers in Germany not yet on a permanent professorship. In experimental physics, an unpaid PD working alone at home is of course unthinkable.

I very much enjoyed university research and teaching, working with students, establishing a research group, quantum device fabrication, and complex measurements. The "orderly shutdown" of the group means that some students still need to graduate and that some results will probably still be written up as papers; however, my office has already been cleared out. I would like to thank everyone who was supportive, in particular also Pertti Hakonen and Milena Grifoni. Nevertheless, at some point now it's probably time to make a bonfire out of those lecture notes, go out into the world, and do something entirely new. Which, barring surprises, most likely won't involve physics or a university anymore. I'm lucky in the sense that I'm single and so far have no family or kids to care for.

Time for a gap year and some serious world travel.

Top

Und jedem Anfang wohnt ein Zauber inne

Post by Andreas K. Hüttel via the dilfridge blog »

"Und jedem Anfang wohnt ein Zauber inne" ("A magic dwells in each beginning") - I have to think of this famous line from the poem "Stufen", written by Hermann Hesse over 80 years ago, as I post a bittersweet update here. After more than 16 years of independent research and teaching at the physics department of Universität Regensburg, my Heisenberg position runs out, and with it my last fixed-term university contract.

The years in Regensburg included the Emmy Noether grant of the DFG, being project leader in two Collaborative Research Centres (SFB) and one Graduate Research Training Group (GRK), a successful large equipment grant, the habilitation, the Walter Schottky Prize of the German Physical Society, and the Heisenberg scholarship of the DFG with positive evaluation. In addition, I independently supervised 7 PhD students and 27 MSc or diploma students, and designed and taught lectures for both Bachelor and Master students. Third-party funding from my grants at Universität Regensburg amounts to approximately € 2.6M in total; of this, € 2.1M went to work solely within my research group, the rest proportionally to larger collaborations.

Over the years, my research group in Regensburg has, among other things, successfully developed ultra-clean carbon nanotube devices for electronics and nano-electromechanics, built up coplanar superconducting electronics with high quality factors and milli-Kelvin microwave measurements, transferred carbon nanotubes into complex devices, carried out the first microwave optomechanics experiment with a carbon nanotube, and measured Coulomb blockade spectroscopy of MoS2 nanotubes.

The hardest part of the time in Regensburg was that, without a permanent contract, acquiring further funding and independently developing collaborations were only possible to a very limited extent. At the same time, one builds up a complex experiment and develops state-of-the-art techniques without a clear perspective. And while the mental model that someone below the professorial level can be an independent group leader and develop sustainably in an academic and professional sense does not really exist in physics in Regensburg, exactly this is demanded by the referees of the various funding programs.

Thanks to the high-level DFG funding, I was never really "akademisches Prekariat". Nevertheless, I fully support initiatives such as PD Prekär and #IchBinHanna, which try to improve the situation of researchers in Germany without a permanent position or professorship. In experimental physics, an unpaid Privatdozent working alone at home is of course unthinkable.

I very much enjoyed research and teaching at the university, working with students, building up a research group, fabricating quantum devices, and the corresponding complex measurements. With the "orderly shutdown" of the group, a few students will still graduate, and perhaps some results will still be submitted as papers; my office, however, has already been cleared out. My thanks go to everyone who supported me, in particular also Pertti Hakonen and Milena Grifoni. But soon it will probably be time for a small bonfire of lecture notes, and for going out into the world to occupy myself with something entirely new. Which, barring surprises, will most likely no longer have anything to do with physics or a university. Luckily I am single and so far have no family or children...

Time for a break - and for a trip around the world.


Top

Moving MySQL databases into MariaDB

Post by Dan Langille via Dan Langille's Other Diary »

I had a problem with MySQL 8.4 recently. Eventually I gave up and resorted to moving to MariaDB. Switching applications because I hit a problem isn't something I do lightly.

Overview

Most of the work won’t be shown here. However, this is an overview:

  • Created a new jail on the host: [12:09 zuul dvl ~] % sudo mkjail create -j mariadb01 -a amd64 -v 15.0-RELEASE
  • Added new DNS entries for that jail
  • Did the Ansible bootstrap stuff in the jail so Ansible would run
  • Copied the jail-mysql.yaml Ansible playbook to jail-mariadb.yaml and modified it
  • Built databases/mariadb118-server (which also built databases/mariadb118-client)
  • Added firewall rules so the new jail could also access my subversion repo – that’s still not working; I gave up
  • Configured the rsyncer user to do nightly database dumps and rsync them to the dbclone jail
  • Copied the latest database backups into the new jail for later restores
  • Anything else?
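
The nightly dump-and-restore flow in the overview can be sketched roughly as follows. This is a sketch only: the dump options, the rsync destination path, and the use of the rsyncer/dbclone names are assumptions based on the post, not the author's actual playbooks, and the commands are printed rather than executed.

```shell
#!/bin/sh
# Sketch of the dump / rsync / restore cycle (hypothetical paths;
# echoed so this sketch has no side effects - drop the echo to run).
DB="wordpress_danlangilleorg"
DUMP="${DB}.sql"

echo "mysqldump -u root -p --single-transaction ${DB} > ${DUMP}"
echo "rsync -av ${DUMP} rsyncer@dbclone:/path/to/backups/"   # path is a placeholder
echo "mysql -u root -p ${DB} < ${DUMP}"
```

The `--single-transaction` flag keeps the dump consistent for InnoDB tables without locking the whole database.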

MariaDB configuration

I installed MariaDB, then enabled it (yes, that’s the right service name):

# service mysql-server enable
mysql enabled in /etc/rc.conf

I started it:

# service mysql-server start
Installing MariaDB/MySQL system tables in '/var/db/mysql' ...
OK

To start mariadbd at boot time you have to copy
support-files/mariadb.service to the right place for your system


Two all-privilege accounts were created.
One is root@localhost, it has no password, but you need to
be system 'root' user to connect. Use, for example, sudo mariadb
The second is mysql@localhost, it has no password either, but
you need to be the system 'mysql' user to connect.
After connecting you can set the password, if you would need to be
able to connect as any of these users with a password and without sudo

See the MariaDB Knowledgebase at https://mariadb.com/kb

You can start the MariaDB daemon with:
cd '/usr/local' ; /usr/local/bin/mariadbd-safe --datadir='/var/db/mysql'

You can test the MariaDB daemon with mariadb-test-run.pl
cd '/usr/local/' ; perl mariadb-test-run.pl

Please report any problems at https://mariadb.org/jira

The latest information about MariaDB is available at https://mariadb.org/.

Consider joining MariaDB's strong and vibrant community: https://mariadb.org/get-involved/

Starting mysql.

I ran this. I had read about it and figured it would be good; I know nothing else.

[13:09 zuul-mariadb01 dvl ~/mysql] % sudo mysql_secure_installation
/usr/local/bin/mysql_secure_installation: Deprecated program name. It will be removed in a future release, use 'mariadb-secure-installation' instead

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
haven't set the root password yet, you should just press enter here.

Enter current password for root (enter for none): 
OK, successfully used password, moving on...

Setting the root password or using the unix_socket ensures that nobody
can log into the MariaDB root user without the proper authorisation.

You already have your root account protected, so you can safely answer 'n'.

Switch to unix_socket authentication [Y/n] n
 ... skipping.

You already have your root account protected, so you can safely answer 'n'.

Change the root password? [Y/n] y
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Creating and restoring the database

Create the database:

[13:15 zuul-mariadb01 dvl ~/mysql] % mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 14
Server version: 11.8.6-MariaDB FreeBSD Ports

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

root@localhost [(none)]> create database wordpress_danlangilleorg;
Query OK, 1 row affected (0.000 sec)

root@localhost [(none)]> ^DBye

Populating that new database:

[13:16 zuul-mariadb01 dvl ~/mysql] % mysql -u root -p wordpress_danlangilleorg < wordpress_danlangilleorg.sql
Enter password: 

That was easy.
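
One quick way to sanity-check a restore like this is to count the tables that landed in the new database. A sketch (not from the post); it prints the command rather than running it against a live server:

```shell
#!/bin/sh
# Build an information_schema query to count restored tables,
# then print the client invocation to run it.
DB="wordpress_danlangilleorg"
SQL="SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = '${DB}';"
echo "mysql -u root -p -e \"${SQL}\""
```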

Creating the database users

[13:17 zuul-mariadb01 dvl ~/mysql] % mysql -u root -p mysql                                                  
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 16
Server version: 11.8.6-MariaDB FreeBSD Ports

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

root@localhost [mysql]> grant usage on wordpress_danlangilleorg.* to 'wordpress'@'10.8.0.7' identified by 'foo';
Query OK, 0 rows affected (0.096 sec)

Oh, there's more, I found out later:

root@localhost [(none)]> GRANT SELECT, INSERT, UPDATE, DELETE ON wordpress_danlangilleorg.* to 'wordpress'@'10.80.0.97' ;
Query OK, 0 rows affected (0.014 sec)
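
Taken together, the two statements above amount to roughly the following minimal setup for a WordPress database user. A sketch built as SQL strings (the host and password are placeholders echoing the post), so nothing runs against a live server; pipe the output into `mysql -u root -p mysql` to apply it:

```shell
#!/bin/sh
# Minimal WordPress grants, assembled as SQL (placeholder host and
# password; SHOW GRANTS at the end confirms the result).
DB="wordpress_danlangilleorg"
APPUSER="'wordpress'@'10.8.0.7'"

echo "CREATE USER IF NOT EXISTS ${APPUSER} IDENTIFIED BY 'foo';"
echo "GRANT SELECT, INSERT, UPDATE, DELETE ON ${DB}.* TO ${APPUSER};"
echo "SHOW GRANTS FOR ${APPUSER};"
```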

That's all

After that, this blog was back online (after some wordpress configuration changes to point it at the new location).

Then, I was able to write this post.

More blogs to follow. They need to be migrated over here too.

Top

Getting ready for the Cyber Resilience Act

Post by FreeBSD Foundation via FreeBSD Foundation »

Please note that the information provided in this document is for informational purposes only and does not constitute legal advice.

The FreeBSD Foundation has launched its Cyber Resilience Act (CRA) Readiness project for 2026 to prepare the Foundation, the FreeBSD Project, and the broader FreeBSD community for the European Union’s landmark cybersecurity legislation. This post provides context for why we are investing in this work now and information about what the project will cover.

The EU is regulating software security

The EU’s CRA (REGULATION (EU) 2024/2847) represents one of the most significant pieces of software security legislation in recent history. It places commercial software into a regulatory framework for security and, if products are found to be non-compliant, it specifies fines of up to 15 million euros or 2.5 % of a company’s total worldwide annual turnover, whichever is higher. 

Manufacturers are required to manage the security risks posed by their supply chain

For manufacturers the CRA mandates essential cybersecurity requirements with which they must comply during the design, development and production of their products. 

These requirements fall into two main categories, each of which has stringent enforcement:

  1. Products must be secure by default. 

Manufacturers must actively manage the cybersecurity risk across the full lifecycle of their products. They must document how they are doing this (including providing an SBOM) and make it available to the market surveillance authorities. They must exercise due diligence in their use of 3rd-party components such as open source projects. They must provide a declaration of conformity with the CRA on the basis of having carried out an accepted conformity assessment procedure.

  2. Vulnerabilities must be rapidly reported.

Once the product is on the market, manufacturers must act quickly when actively exploited vulnerabilities are discovered. Notifications must be reported on the CRA Single Reporting Platform with deadlines as follows: 24 hours for an early warning notification, and 72 hours for the main notification. Further reporting deadlines also apply within a 14-28 day window.

There is a lot more detail provided in the CRA itself, but these headlines are enough to give a clue to the sorts of activities that manufacturers will soon start doing to meet the CRA requirements.

The CRA is already law, with staged compliance deadlines

The CRA entered into force on 10 December 2024. Its essential cybersecurity requirements (including ‘secure by default’ principles) will start applying from 11 December 2027. Products placed on the market before 11 December 2027 are only subject to these requirements if they undergo substantial modification after that date. 

Reporting obligations come into force on 11 September 2026. Reporting obligations apply to all products with digital elements available in the EU market, including those already on the market before 11 December 2027.

Open source projects have limited CRA responsibilities, but face both opportunities and risks

Open source projects have limited responsibilities

Many in the open source community have been watching this coming down the tracks. The first iteration of the CRA did not mention open source at all. Thanks to feedback from many open source contributors, the CRA now contains robust carve-outs of responsibility for open source projects. 

Happily, individuals who contribute to free and open source projects are exempted from all legal responsibilities, even if they are paid to contribute to a project.

Organizations that are classified as ‘open source stewards’ have limited responsibilities under the CRA and cannot be fined. The FreeBSD Foundation likely falls under this classification. Individuals cannot be stewards.

The risks to an open source project

Does this mean an open source project has nothing to worry about? For open source projects which are used in downstream commercial software products there are some areas where things might get rough once manufacturers start getting serious about CRA compliance.

One example is a “due diligence denial of service attack”. What happens when many manufacturers aim to carry out a due diligence process on components in their SBOM? A project with a large downstream user base might receive a deluge of requests for information. 

Or, how about when an exploited vulnerability is discovered in your open source project? A manufacturer would have to report it within 24h and your project might receive requests for information, and patch submissions (or demands for fixes!) that come with a lot of pressure attached. Does your project have the processes and staffing for this?

Another, more insidious, possibility is that any open source project that is not prepared for the CRA may simply get swapped out or passed over by manufacturers who see a more compliant option. This could put a project on a downward trajectory. If you are thinking “my project is too hard to swap out”, consider this: an unprepared project that cannot be easily swapped out might find that downstream users start making SBOMs just for their fork (which could create all sorts of complexities and upstream queries).

The opportunities for open source projects

It is not all doom and gloom though. The CRA has the potential to change the power dynamics of the open source landscape. Projects that are proactive in preparing for the CRA will be better positioned to forge new relationships with their downstream users. 

When manufacturers need SBOMs, documentation on security processes, and swift vulnerability management responses to avoid eye-watering fines, they may be more incentivized to support their upstream open source projects.

Projects could secure funding agreements, gain dedicated security staffing support, or establish formal partnerships with manufacturers.

Open source projects all over the world are working together to figure this out. After all, it’s what we do. 

The FreeBSD Foundation is committed to helping FreeBSD navigate this important change successfully.

The FreeBSD Foundation’s CRA Readiness project

For FreeBSD, getting prepared now means we can take a proactive approach to CRA readiness and leverage as much benefit as possible while reducing potential harms. 

The high-level project goals are: make sure the Foundation fulfills its legal obligations as an open source steward, protect the Project from disruption as manufacturers work to meet CRA requirements, and ensure that our contributors understand they are not personally exposed to legal liability under this legislation.

What we are working on

The project is organized into six workstreams running through 2026:

Security and vulnerability handling 
This is the core of the effort. Foundation staff will work closely with FreeBSD’s Core team, Security team, Ports Security team, and downstream vendors to develop a shared understanding of CRA responsibilities and to examine how we collectively respond to CRA-related scenarios. This is a sustained, 12-month effort that takes a holistic view of FreeBSD’s security posture, and will result in updated policies, public positions, and documentation where needed.

SBOM toolchain 
This addresses one of the CRA’s most concrete requirements: Software Bills of Materials. Rather than leaving manufacturers to independently generate their own (potentially inaccurate) SBOMs for FreeBSD-based products, we are building a single, authoritative, open-source SBOM toolchain. This work builds on development started in 2025 under our Infrastructure Modernization project. Over the next four months, we will be adding SBOM information files, identifying and filling licensing gaps, and collaborating with upstream projects to improve SBOM metadata. A shared, accurate SBOM solution is better for everyone.

Public documentation 
This will give manufacturers, maintainers, and contributors clear, FreeBSD-specific information about CRA requirements, including emerging processes and key contacts. The content will evolve as our understanding deepens across the other workstreams.

Community legislative engagement 
This will open up a simple communication channel (most likely a mailing list) so that the FreeBSD community can participate in EU policy development. Bodies like CEN, CENELEC, and ETSI regularly seek input from the open source world, and we want to make sure FreeBSD voices are part of that conversation.

A public-facing project repository 
This will serve as the running record of everything we do. We are committed to transparency: detailed updates, outputs, and decision-making will all be documented here as the project progresses.

Communications 
We will keep the broader community informed through blogs, social media, and other channels as we hit key milestones.

A note on scope
The CRA is new legislation, and real-world guidance on implementation continues to evolve. We have designed this project to adapt as our understanding develops, and though the workstreams above reflect our best current thinking, you should expect the details to shift over time. We will keep you informed as they do.

The post Getting ready for the Cyber Resilience Act first appeared on FreeBSD Foundation.

Top

Post by FreeBSD Newsflash via FreeBSD News Flash »

New committer: Laurent Chardon (ports)
Top

Valuable News – 2026/03/02

Post by Vermaden via 𝚟𝚎𝚛𝚖𝚊𝚍𝚎𝚗 »

The Valuable News weekly series is dedicated to providing a summary of news, articles and other interesting stuff, mostly but not always related to UNIX/BSD/Linux systems. Whenever I stumble upon something worth mentioning on the Internet, I just put it here.

Today the amount of information we get from various information streams is a massive overload. Thus one needs to focus only on what is important, without having to grep(1) the Internet every day. Hence the idea of providing such an information ‘bulk’, as I already do that grep(1).

The Usual Suspects section at the end is permanent and has links to other sites with interesting UNIX/BSD/Linux news.

Past releases are available at the dedicated NEWS page.

UNIX

FreeBSD 2025 Q4 Status Report.
https://freebsd.org/status/report-2025-10-2025-12/

FreeBSD Does Not Have WiFi Driver for My Old MacBook. AI Build One for Me.
https://vladimir.varank.in/notes/2026/02/freebsd-brcmfmac/

FreeBSD Parthenope Multi Installer in Lua.
https://gitlab.com/alfix/parthenope

FreeBSD Git Weekly: 2026-02-16 to 2026-02-22.
https://freebsd-git-weekly.tarsnap.net/2026-02-16.html

GhostBSD Plans to Ditch Xorg for XLibre.
https://theregister.com/2026/02/24/ghostbsd_plans_to_adopt_xlibre/

KDE Plasma 6.6 is Not Forcing systemd(1) but Arguments Rage On.
https://theregister.com/2026/02/24/kde_plasma_66/

Red Hat Learning Community Will Decommission on 2026/03/31.
https://learn.redhat.com/t5/Red-Hat-Learning-Community-News/Evolving-how-we-learn-together/ba-p/57899

SonicDE (KDE/Plasma 6.x Fork with X11 Support) on FreeBSD.
https://github.com/sonicde-freebsd

OpenZFS 2.4.1 Released.
https://github.com/openzfs/zfs/releases/tag/zfs-2.4.1

FreeBSDKit is Framework for Building Secure and Capability Aware Applications on FreeBSD.
https://github.com/SwiftBSD/FreeBSDKit

FreeBSD 15 Bridges/VLANs/Jails – Nice!
https://reddit.com/r/freebsd/comments/1r704e0/freebsd_15_bridges_vlans_and_jails_nice/
https://gist.github.com/codeedog/99f69ed1909fe633f6ab7b2d467de0f4

On Jails/VLANS/Trunking – Hurray for if_bridge New vlanfilter Feature.
https://reddit.com/r/freebsd/comments/1pytvnr/on_jails_vlans_and_trunking_hurray_for_if_bridge/

Virtualization Basics.
https://dumrich.github.io/GSoC25-Blog/posts/virtualization-fundamentals/

How QEMU Accelerator Works.
https://dumrich.github.io/GSoC25-Blog/posts/qemu-accel/

Adding Bhyve vmm(4) as Accelerator to QEMU.
https://dumrich.github.io/GSoC25-Blog/posts/qemu-bhyve/

Bhyve Part 1.
https://dumrich.github.io/GSoC25-Blog/posts/bhyve-part-1/

Bhyve Part 2.
https://dumrich.github.io/GSoC25-Blog/posts/bhyve-part-2/

Bhyve Part 3.
https://dumrich.github.io/GSoC25-Blog/posts/bhyve-part-3/

Git Fundamentals.
https://dumrich.github.io/GSoC25-Blog/posts/git-fundamentals/

Latest GhostBSD 26.1-R15.0p2-02-25-11 ISO Available.
https://ci.ghostbsd.org/jenkins/job/stable-15/job/Build%20ISO%20For%20Testing%20Packages/2/

Solaris 11.4 SRU90 – Preserve Boot Environments.
https://c0t0d0s0.org/blog/solaris114preservebootenvironments.html

Solaris 11.4 SRU90 – Limiting Signaling to All.
https://c0t0d0s0.org/blog/limitedsignaling.html

Supplemental Document for AWK.
https://github.com/arnoldrobbins/awksupp

You Just Need Postgres – Stop Managing 7 Databases.
https://youjustneedpostgres.com/

FreeBSD 14.4-RC1 Now Available.
https://lists.freebsd.org/archives/freebsd-stable/2026-February/003883.html

FreeBSD 14.4-RC1 Adds Emacs/Vim and More to DVD Images.
https://phoronix.com/news/FreeBSD-14.4-RC1-Released

ZFS Fast Dedup for Proxmox VE 9.x.
https://klarasystems.com/articles/zfs-fast-dedup-for-proxmox-ve-9x/

FreeBSD pkg autoremove.
https://rubenerd.com/freebsd-pkg-autoremove/

Uplift Privileges on FreeBSD.
https://vermaden.wordpress.com/2026/03/01/uplift-privileges-on-freebsd/

Jails for NetBSD – Kernel Enforced Isolation and Native Resource Control.
https://netbsd-jails.petermann-digital.de/

Running Your Own AS: Going Multi Homed with iBGP and Three Transits.
https://blog.hofstede.it/running-your-own-as-going-multi-homed-with-ibgp-and-three-transits/

How I Used SIGUSR1 to Avoid Python Process Conflicts.
https://ericbsd.com/how-i-used-sigusr1-to-avoid-python-process-conflicts.html

UNIX System V Release 2.0 Programmer Reference Manual BTL Edition. [1983]
https://archive.org/details/unix-system-v-release-2-programmer-reference-manual-btl-edition/

MinIO is Dead. Long Live MinIO.
https://blog.vonng.com/en/db/minio-resurrect/

64bit GNU Hurd is Here.
https://guix.gnu.org/blog/2026/the-64-bit-hurd//

Phoenix and Tailwind on FreeBSD.
https://blog.feld.me/posts/2026/02/phoenix-tailwind-freebsd/

Solaris 2.6 (x86) on 86Box with Socket 7. [1996]
https://officialaptivi.wordpress.com/2026/02/28/solaris-2-6-x86-on-86box-with-socket-7-1996/

Multiple Keyboard Layouts on OpenBSD.
https://tumfatig.net/2026/multiple-keyboard-layouts-on-openbsd/

Another Subprocess for vmd(8) on OpenBSD
https://undeadly.org/cgi?action=article;sid=20260226110600

Day #15 of Rediscovering FreeBSD.
https://tnorlin.se/posts/2026-03-01-day15-of-rediscovering-freebsd/

FreeBSDKit is Framework for Building Secure Capability Aware Applications on FreeBSD.
https://github.com/vIsNotUNIX/FreeBSDKit

Rockhopper Generates Installer Packages for Wide Variety of Platforms.
https://github.com/mcandre/rockhopper

UNIX/Audio/Video

Why Rust is Causing Tension in Linux Kernel.
https://youtube.com/watch?v=-XLuGB0wZ1M

2026-02-26 Bhyve Production User Call.
https://youtube.com/watch?v=jGhKX8kjQKg

2026-02-25 OpenZFS Production User Call.
https://youtube.com/watch?v=aYJ4sWotYho

ReactOS Future is Brighter Than Ever.
https://youtube.com/watch?v=VnLQvqoxXjA

MidnightBSD Responds to California Age Verification Law by Excluding California.
https://youtube.com/live/4qu5-tXVSGw

Sprinkling Little Cinnamon on GhostBSD.
https://youtube.com/watch?v=kaQ7MrB28yQ

BSD Now 652: Ghostly Graphics.
https://www.bsdnow.tv/652

Hardware

Intel ME: Anatomy of Ring -3 Backdoor – Part 1.
https://sbytec.com/vulnerabilities/intel_me/

PDP-11 Replica Kit – Build Your Own DEC PDP-11/70 Computer.
https://obsolescence.dev/pdp11.html

Intel Plans Return to Unified Core Design w/o Performance and Efficiency Cores.
https://techpowerup.com/346645/intel-plans-return-to-unified-core-design-no-more-performance-and-efficiency-core-split

CAN Bootloader.
https://runtimenotes.hashnode.dev/8bytes-is-not-too-bad

Build a Boy – Bricks Gaming Handheld You Build from Scratch.
https://crowdsupply.com/natalie-the-nerd/build-a-boy

Benchmarking 18 Years of Intel Laptop CPUs.
https://phoronix.com/review/intel-penryn-to-panther-lake/

Upgrading My Open Source Pi Surveillance Server with Frigate.
https://jeffgeerling.com/blog/2026/upgrading-my-open-source-pi-surveillance-server-frigate/

Lenovo Made Framework Like Laptop with Modular Ports.
https://theverge.com/tech/886814/lenovo-thinkbook-modular-ai-pc-concept-mwc-2026-specs

Life

179 Euros.
https://my-notes.dragas.net/2026/02/22/179-euros/

Pegasus Spyware – Part 1 – Zero Click Exploitation and Forensic Analysis.
https://sbytec.com/vulnerabilities/pegasus_analysis/

Pegasus Spyware – Part 2 – Forensic Detection and Mitigation Strategies.
https://sbytec.com/vulnerabilities/pegasus_detection/

Swedish Study: Hiring Discrimination is Problem for Men in Female Dominated Occupations.
https://psypost.org/swedish-study-suggests-hiring-discrimination-is-primarily-a-problem-for-men-in-female-dominated-occupations/

New California Law Says All OSes Including Linux Need to Have Some Form of Age Verification at Account Setup.
https://pcgamer.com/software/operating-systems/a-new-california-law-says-all-operating-systems-including-linux-need-to-have-some-form-of-age-verification-at-account-setup/

Other

Firefox 148.0 Now Available with New AI Controls/Kill Switches.
https://phoronix.com/news/Firefox-148

LibreWolf 148.0 Released.
https://codeberg.org/librewolf/bsys6/releases/tag/148.0-1

X86 CPU Made in CSS.
https://lyra.horse/x86css/

SvarDOS Open Source DOS Distribution.
http://svardos.org/

LibreOffice Accuses OnlyOffice of Being Fake Open Source.
https://tech2geek.net/libreoffice-vs-onlyoffice-the-document-foundation-accuses-its-rival-of-being-fake-open-source/

Diablo II LoD: vermaden Necromancer Guide. [2005]
http://strony.toya.net.pl/~vermaden/necromancer.htm

Firefox 149.0 Beta Released with Convenient Split View Mode.
https://phoronix.com/news/Firefox-149-Beta

Pierdology – Explore One of the Most Versatile Polish Swear Words.
https://pierdology.webflow.io/

Servo Browser Engine Starts 2026 with Many Notable Improvements.
https://phoronix.com/news/Servo-January-2026

Myrient that Hosts 390TB Classic Game Archive Shuts Down on 2026/03/01.
https://pbxscience.com/myrient-to-shut-down-march-31-390tb-classic-game-archive-faces-permanent-closure/

Usual Suspects

BSD Weekly.
https://bsdweekly.com/

DiscoverBSD.
https://discoverbsd.com/

BSDSec.
https://bsdsec.net/

DragonFly BSD Digest.
https://dragonflydigest.com/

FreeBSD Patch Level Table.
https://bokut.in/freebsd-patch-level-table/

FreeBSD End of Life Date.
https://endoflife.date/freebsd

Phoronix BSD News Archives.
https://phoronix.com/linux/BSD

OpenBSD Journal.
https://undeadly.org/

Call for Testing.
https://callfortesting.org/

Call for Testing – Production Users Call.
https://youtube.com/@callfortesting/videos

BSD Now Weekly Podcast.
https://www.bsdnow.tv/

Nixers Newsletter.
https://newsletter.nixers.net/entries.php

BSD Cafe Journal.
https://journal.bsd.cafe/

DragonFly BSD Digest – Lazy Reading – In Other BSDs.
https://dragonflydigest.com

BSDTV.
https://bsky.app/profile/bsdtv.bsky.social

FreeBSD Git Weekly.
https://freebsd-git-weekly.tarsnap.net/

FreeBSD Meetings.
https://youtube.com/@freebsdmeetings

BSDJedi.
https://youtube.com/@BSDJedi/videos

RoboNuggie.
https://youtube.com/@RoboNuggie/videos

GaryHTech.
https://youtube.com/@GaryHTech/videos

Sheridan Computers.
https://youtube.com/@sheridans/videos

82MHz.
https://82mhz.net/

EOF
Top

HardenedBSD February 2026 Status Report

Post by HardenedBSD via HardenedBSD »

February saw a few changes in HardenedBSD. The majority of my time was spent chasing down the kernel crash in HardenedBSD 15-STABLE that has been plaguing our users. I worked on narrowing the cause down to a three-day window during which the commit that causes the crash was made.

As I write this, I'm narrowing that down further to the specific commit. I'm hoping to have this resolved this month. If I find and fix the problem this week, I will create new builds for folks to use. Otherwise, the next scheduled regular quarterly build is for 01 Apr 2026.

I appreciate everyone's patience on this. This has been a tricky bug (at times, it fit the description of a "heisenbug"). My spare time is limited (I have a rather large number of tasks/obligations in everyday ${LIFE} right now), so it has naturally taken a long while to get to this point.

While in between clients at my day job, I have been granted the opportunity to research Meshtastic and other mesh networking projects. I'm getting a lot closer with my censorship- and surveillance-resistant mesh network proof-of-concept. I'm now at the point where I need to port Linux-specific code to HardenedBSD. I'm hoping to get normal TCP/IP packets flowing through Reticulum nodes within six months. This project, announced in partnership with Protectli one-and-a-half years ago[1], is starting to move along at a nice pace. I will have more to share on that by the next status report.

On Saturday, 28 Feb 2026, I gave my local Hackers N' Hops chapter a little show & tell of Meshtastic, Reticulum, and HardenedBSD. I met a bunch of really cool hackers there, and demoed two Reticulum RNodes backed by Reticulum instances on two HardenedBSD laptops. I also demoed an exec-over-Meshtastic Python script I wrote the day prior. The script is available on Radicle as rad:z44pvAJS7SiQf2CGtpn8hY44GDMyu.

Speaking of Radicle, I plan to migrate some of my personal repos away from our self-hosted GitLab and onto the Radicle network. With time, I'm hoping to migrate us completely towards Radicle. Now would be a good time for those who want to contribute to HardenedBSD to start playing around and experimenting with Radicle.

In src:

  1. Contributor "gmg" hardened the kernel crashdump interface.
  2. Opt zlib kernel module into -ftrivial-var-auto-init=zero.
  3. bsdinstall(8): Align us more closely with FreeBSD.

In ports:

  1. net-p2p/reticulum was updated to 1.1.3_2
  2. Disable PaX PAGEEXEC and PaX NOEXEC for science/zotero
  3. Bring in candidate patch to fix dns/unbound
  4. Hook hardenedbsd/ctrl into the build
  5. 0x1eef added a new port: hardenedbsd/ctrl
  6. Bump ports-mgmt/pkg to 3.5.1_1
  7. 0x1eef updated a port: portzap v2.1.1
  8. 0x1eef updated a port: sourcezap v2.1.1

Once I have figured out what's going on with the 15-STABLE panic and have a proper fix in place, I plan to quickly switch gears towards hbsdfw. I haven't produced a working hbsdfw build in a long time, and it's far past due. After that, I plan to switch right back to the Reticulum research and development.

I'll make sure to keep the community informed of the 15-STABLE findings and fixes.

Top

Embeddings in RAG

Post by Sven Vermeulen via Simplicity is a form of art... »

When I started looking into the architecture of Large Language Models (LLMs), I got confused when I encountered Retrieval Augmented Generation (RAG). Both LLMs themselves and RAG use embeddings (a numerical vector representation of a token), and because of this shared terminology, I made the wrong assumption that the embeddings in both are strongly related. The reality is much simpler: while both use embeddings, the two are unrelated to each other.

Note: I'm still dipping my toes into the world of LLMs (and other generative AI, like diffusion-models for image generation), so my posts might be inaccurate. I welcome any feedback or comments on this.

Embeddings in a large language model

LLMs are trained to predict text given a certain input. The predicted text comes out as so-called tokens, small text snippets. Each predicted token is appended to the input text, and the LLM again predicts the next token, moving forward until it predicts a special token that indicates the end of the text sequence.
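This generate-append-repeat loop can be sketched in a few lines of Python. The `predict_next_token` function here is a hypothetical stand-in for the model itself, not a real LLM:

```python
END_OF_TEXT = "<|endoftext|>"  # special token marking the end of a sequence

def predict_next_token(tokens):
    # Placeholder for the model: a real LLM would score every token in its
    # vocabulary and sample one. This stub ends the sequence immediately.
    return END_OF_TEXT

def generate(prompt_tokens):
    tokens = list(prompt_tokens)
    while True:
        nxt = predict_next_token(tokens)
        if nxt == END_OF_TEXT:
            return tokens
        tokens.append(nxt)  # feed the prediction back in as new input

print(generate(["Two", " ", "roads"]))
```

The essential point is that the model only ever predicts one token at a time; everything generated so far becomes part of the next prediction's input.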

Simple view of LLM

Suppose the text at that point is the following:

Two roads diverged in a yellow wood,
and sorry I

You might know this as the start of "The Road Not Taken", a poem by Robert Frost. If the LLM is trained with this poem, it might be able to predict the next tokens. When I ran this as input through Qwen3-VL 8B, one of the more recent open-weights models released by the Qwen team at Alibaba Cloud, it was able to continue the poem for a while, but eventually strayed off course.

Two roads diverged in a yellow wood,
and sorry I couldn’t travel both
and be one traveler, long I stood.
and looked down one as far as I could
to where it bent in the undergrowth;
Then took the other, as just as fair,
and having perhaps a better claim,
because it was grassy and wanted wear;
though as for that the passing there
had worn them really about the same,

And both that morning equally lay
in leaves no step had trodden black.
And both … The question is — which way does he take? It’s not clear. He says “I took the other”, 
but then says “the passing there had worn them really about the same”. So why did he choose one 
over the other? Is it a matter of chance? Or is there something more symbolic going on?

Note: There is some randomness involved here, other iterations with the same model and input did result in the poem being quoted correctly, followed by an analysis of the poem.

While generating the output, the model generates one part of text at a time. This part of text is called a token, and the LLM has a built-in tokenizer that converts text into tokens, and tokens back into text. For the Qwen3 models, the Qwen tokenizer is used. If I understand its vocabulary correctly, the text "couldn't travel" would be tokenized into:

[ "couldn", "'t", " ", "travel" ]

Different LLMs can use different tokenization methods, but there is a lot of re-use here. Different LLM models can use the same tokenizer.
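As a rough illustration of what a tokenizer does, here is a toy greedy longest-match tokenizer over a tiny hand-made vocabulary. Real tokenizers such as Qwen's are learned from data (byte-pair encoding and friends) and differ in detail; only the text-to-tokens mapping idea carries over:

```python
# Toy vocabulary: a few multi-character entries plus single characters
# as a fallback. Real vocabularies contain tens of thousands of entries.
VOCAB = {"couldn", "'t", " ", "travel",
         "c", "o", "u", "l", "d", "n", "'", "t", "r", "a", "v", "e"}

def tokenize(text):
    tokens = []
    i = 0
    while i < len(text):
        # Greedily take the longest vocabulary entry matching at position i.
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            raise ValueError(f"no token matches at position {i}")
    return tokens

print(tokenize("couldn't travel"))  # ['couldn', "'t", ' ', 'travel']
```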

These tokens are converted into embeddings, which form the foundational representation used by LLMs. They are numerical vectors that represent the text tokens. LLMs work with these numerical vectors because LLMs (and AI systems in general) are software systems performing heavy computational work, chiefly operations on massive matrices of numbers. Text, too, ends up represented as a huge matrix.

Embeddings are not just a simple index; they are pretrained values that place semantically similar tokens close to each other. When the training material often combines "corona" and "COVID", these two terms end up with embeddings that sit close together. But the same is true if there is material combining "corona" and "beer". So the embedding that represents "corona" (assuming it is a single token) would carry semantic understanding of corona both as a viral disease (related to COVID-19) and as an alcoholic beverage.

Unlike tokenizers, which can be reused across different LLM models, the embeddings are unique to each model. Sure, within the same family (e.g. Qwen3) there can be reuse as well, but it is much less common to see this re-use across different families.

The phrase "Two roads" would consist of three tokens ("Two", " ", "roads"), each of which is converted into a corresponding 4096-dimensional embedding vector during processing. The dimension is fixed for a particular LLM: Qwen3 8B, for instance, uses embeddings of 4096 numbers. So that start would be a matrix with dimensions 3x4096. The entire text is thus represented by a very large matrix, with one dimension being the embedding size (4096 in my case) and the other being the number of tokens in the text so far (both input and generated output).
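A minimal sketch of this lookup step, with a random matrix standing in for the pretrained embedding table and made-up token ids (in a real model both come from training):

```python
import numpy as np

EMBED_DIM = 4096   # per the Qwen3 8B figure quoted above
VOCAB_SIZE = 1000  # toy vocabulary size, purely for illustration

rng = np.random.default_rng(0)
# In a real model this table holds learned values, one row per token.
embedding_table = rng.standard_normal((VOCAB_SIZE, EMBED_DIM), dtype=np.float32)

token_ids = [17, 3, 542]        # hypothetical ids for "Two", " ", "roads"
x = embedding_table[token_ids]  # look up one row per token

print(x.shape)  # (3, 4096): number of tokens x embedding dimension
```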

These matrices are then used as input within the LLM, which then starts doing magic with them (well, not really magic, it's rather maths, multiplying the matrix against other in-LLM stored matrices, iterating over multiple blocks of matrix operations, etc.) to eventually output a (sequence of) embedding(s), which is appended to the input matrix to re-iterate the entire process over and over again.

Embedding-based view of LLM

The maximum number of tokens that a model can handle is also predefined, although there are methods to extend it. For Qwen3 8B, this is 32768 natively, and 131072 with an extension method called YaRN. So, for the native implementation, the maximum text size would be represented as a matrix of dimensions 32768x4096.

Retrieval Augmented Generation

LLMs are trained on a certain set of data, so once training is finished, the model does not learn more. To make it more useful, you want the LLM to have access to recent insights. Nowadays, the hype is all about MCP (Model Context Protocol), which means LLMs are trained to understand that they have tools at their disposal and to know how to call them (well, in reality, they are trained to generate output that the software running the LLM detects, turns into a tool invocation, and whose outcome it appends to the text already generated, allowing the LLM to continue).

Before MCP, the world was (and still is) using Retrieval Augmented Generation (RAG). The idea behind RAG is that, before the LLM responds to a user's query (prompt), it also receives new information from external data sources. With both the user query and the information from those sources, the LLM is able to generate more useful output.

When I looked at RAG, I noticed it using embeddings as well, prior to the actual retrieval, so I wrongly thought that those were the same embeddings, and that the outcome of the RAG would also be an embedding matrix, which the LLM then receives and further processes...

Incorrect RAG view

I was misled by documentation on RAG that stated things like "the data to be referenced is converted into LLM embeddings", and by the fact that the technology used for RAG retrieval is vector databases specialized in embedding-based operations. Many online resources also presented RAG as a complete, singular solution with multiple components. So I jumped to the conclusion that these are the same embeddings. But then the RAG solution would have to be tailored to the LLM being used, because other LLM models (like Llama3 or Mistral) use different embedding vocabularies.

Instead, what RAG does is take the same prompt, convert it into tokens and embeddings (using its own tokenizer/embedding vocabulary), and then use those to perform a search operation against the data that was added to the RAG database. This data (the recent insights or other documents you want your LLM to know about) is also tokenized and converted into embeddings, but it is not those embeddings that are brought back to the main LLM: it is the plain-text outcome (or other media types your LLM understands, such as images).

Why does RAG then use embeddings? Wouldn't a simple search engine be sufficient? Well, the RAG's primary advantage is its ability to locate relevant information more effectively through embeddings. Thanks to the embedding representation, the RAG can find information related to the user query without relying on keyword matches. You could effectively replace the RAG engine with a simple search, and many LLM-powered software applications do support this. For instance, Koboldcpp, which I use to run LLMs locally, supports a simple DuckDuckGo-based web search as well.
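A toy sketch of embedding-based retrieval. The `embed` function here is a fake stand-in (a random projection of character counts), not a real embedding model; only the structure matters: embed the documents once, embed the query, rank by cosine similarity:

```python
import numpy as np

def embed(text, dim=64, seed=42):
    # Fake "embedding model": project character counts through a fixed
    # random matrix and normalize. A real RAG setup would call a trained
    # embedding model here instead.
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((256, dim))
    counts = np.zeros(256)
    for ch in text.lower():
        counts[ord(ch) % 256] += 1
    v = counts @ proj
    return v / np.linalg.norm(v)

docs = [
    "Corona is a Mexican lager brewed with barley, hops, corn and yeast.",
    "COVID-19 is caused by the SARS-CoV-2 coronavirus.",
    "Robert Frost wrote The Road Not Taken in 1915.",
]
doc_vecs = np.array([embed(d) for d in docs])  # built once, stored in a vector DB

query = "What are the ingredients for Corona beer?"
scores = doc_vecs @ embed(query)  # cosine similarity, since vectors are unit-norm
best = docs[int(np.argmax(scores))]
print(best)
```

Note that what gets handed back to the LLM is `best`, the plain text, never the vectors themselves.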

RAG view

The use of embeddings for search operations (again, completely independent of the LLM) allows for contextual understanding. When a user prompts for "What are the ingredients for Corona", a simple keyword-based search operation might incorrectly result in findings of COVID-19, whereas in this case the query is about the Corona beer.

These improved search operations are often called "semantic search", as they have a better understanding of the semantics and meanings of text (through the embeddings), resulting in more contextually relevant insights.

When is it "RAG" and when is it semantic search?

Retrieval Augmented Generation is the process of converting the user query, performing a semantic search against the knowledge base, and appending the best results (e.g. top-3 hits in the knowledge base) to the user input text. This completed input text thus contains both the user query, as well as pieces of insights obtained from the semantic search. The LLM uses this additional information for generating better outcomes. This entire pipeline (retrieving context, augmenting the prompt, and then generating output) is what defines "RAG".
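The augmentation step itself is plain string handling; a minimal sketch (the prompt wording is made up for illustration, not any particular framework's template):

```python
def build_rag_prompt(user_query, retrieved_snippets):
    # Paste the top search hits into the prompt as plain text,
    # ahead of the user's actual question.
    context = "\n".join(f"- {s}" for s in retrieved_snippets)
    return (
        "Use the following context to answer the question.\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_query}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What are the ingredients for Corona?",
    ["Corona is a Mexican lager brewed with barley, hops, corn and yeast."],
)
print(prompt)
```

This completed prompt, not any embedding matrix, is what the LLM's own tokenizer and embedding table then process.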

Technology-wise, I personally see RAG as being very similar to a regular search: replace the semantic search with a search engine (which under the hood could also use semantic search anyway) and the outcome is the same. The main difference is intent: RAG is meant to find accurate pieces of truth, information snippets tailored to bring in context precisely, whereas a search-engine-based retrieval would rather bring back generic snippets of data.

In the market, RAG also focuses on the management of the semantic search (and vector database), optimizing the data that is added to the knowledge base to be LLM-friendly (shorter pieces of accurate data, rather than fully-indexed complete pages which could easily overload the maximum size that an LLM can handle). It prioritizes efficient data management and insights lifecycle control.

For LLMs, it also provides a bit more nuance. A web search would be presented to the LLM as "The following information can be useful to answer the question", whereas RAG results would be presented as actual insights/context. LLMs might be trained to deal differently with that distinction.

Understanding that the semantic search is independent of the LLM of course makes much more sense. It allows companies or organizations to build up a knowledge base and maintain that knowledge independently of the LLMs. Multiple different LLMs can then use RAG to obtain the latest information from this knowledge base - or you can just use the engine for semantic searches alone; you do not need LLMs to benefit from better searches. Many popular web search engines use semantic search under the hood (i.e. when they index pages, they also generate embeddings from them and store those in their own vector databases to improve search results).

When new embedding algorithms emerge that you want to use, you must regenerate the embeddings for the entire knowledge base. But that will most likely occur much, much less frequently than switching to new LLM models (given the rapid evolution there).

Conclusion

RAG is a feature of the software that runs the LLM, allowing for retrieving contextual information from a curated knowledge base. RAG's use of embeddings is related to its semantic search, not to the same embeddings as those used by the LLM. The contextual information is added to the user prompt as text, and only then 'converted' into the embeddings used by the LLM itself.

Feedback? Comments? Don't hesitate to get in touch on Mastodon.

Images are created in Inkscape, using icons from Streamline (GitHub), released under the CC BY 4.0 license, indexed at OpenSVG.

Top

Who is the Kimwolf Botmaster “Dort”?

Post by Brian Krebs via Krebs on Security »

In early January 2026, KrebsOnSecurity revealed how a security researcher disclosed a vulnerability that was used to build Kimwolf, the world’s largest and most disruptive botnet. Since then, the person in control of Kimwolf — who goes by the handle “Dort” — has coordinated a barrage of distributed denial-of-service (DDoS), doxing and email flooding attacks against the researcher and this author, and more recently caused a SWAT team to be sent to the researcher’s home. This post examines what is knowable about Dort based on public information.

A public “dox” created in 2020 asserted Dort was a teenager from Canada (DOB August 2003) who used the aliases “CPacket” and “M1ce.” A search on the username CPacket at the open source intelligence platform OSINT Industries finds a GitHub account under the names Dort and CPacket that was created in 2017 using the email address jay.miner232@gmail.com.

Image: osint.industries.

The cyber intelligence firm Intel 471 says jay.miner232@gmail.com was used between 2015 and 2019 to create accounts at multiple cybercrime forums, including Nulled (username “Uubuntuu”) and Cracked (user “Dorted”); Intel 471 reports that both of these accounts were created from the same Internet address at Rogers Canada (99.241.112.24).

Dort was an extremely active player in the Microsoft game Minecraft who gained notoriety for their “Dortware” software that helped players cheat. But somewhere along the way, Dort graduated from hacking Minecraft games to enabling far more serious crimes.

Dort also used the nickname DortDev, an identity that was active in March 2022 on the chat server for the prolific cybercrime group known as LAPSUS$. Dort peddled a service for registering temporary email addresses, as well as “Dortsolver,” code that could bypass various CAPTCHA services designed to prevent automated account abuse. Both of these offerings were advertised in 2022 on SIM Land, a Telegram channel dedicated to SIM-swapping and account takeover activity.

The cyber intelligence firm Flashpoint indexed 2022 posts on SIM Land by Dort that show this person developed the disposable email and CAPTCHA bypass services with the help of another hacker who went by the handle “Qoft.”

“I legit just work with Jacob,” Qoft said in 2022 in reply to another user, referring to their exclusive business partner Dort. In the same conversation, Qoft bragged that the two had stolen more than $250,000 worth of Microsoft Xbox Game Pass accounts by developing a program that mass-created Game Pass identities using stolen payment card data.

Who is the Jacob that Qoft referred to as their business partner? The breach tracking service Constella Intelligence finds the password used by jay.miner232@gmail.com was reused by just one other email address: jacobbutler803@gmail.com. Recall that the 2020 dox of Dort said their date of birth was August 2003 (8/03).

Searching this email address at DomainTools.com reveals it was used in 2015 to register several Minecraft-themed domains, all assigned to a Jacob Butler in Ottawa, Canada and to the Ottawa phone number 613-909-9727.

Constella Intelligence finds jacobbutler803@gmail.com was used to register an account on the hacker forum Nulled in 2016, as well as the account name “M1CE” on Minecraft. Pivoting off the password used by their Nulled account shows it was shared by the email addresses j.a.y.m.iner232@gmail.com and jbutl3@ocdsb.ca, the latter being an address at a domain for the Ottawa-Carleton District School Board.

Data indexed by the breach tracking service Spycloud suggests that at one point Jacob Butler shared a computer with his mother and a sibling, which might explain why their email accounts were connected to the password “jacobsplugs.” Neither Jacob nor any of the other Butler household members responded to requests for comment.

The open source intelligence service Epieos finds jacobbutler803@gmail.com created the GitHub account “MemeClient.” Meanwhile, Flashpoint indexed a deleted anonymous Pastebin.com post from 2017 declaring that MemeClient was the creation of a user named CPacket — one of Dort’s early monikers.

Why is Dort so mad? On January 2, KrebsOnSecurity published The Kimwolf Botnet is Stalking Your Local Network, which explored research into the botnet by Benjamin Brundage, founder of the proxy tracking service Synthient. Brundage figured out that the Kimwolf botmasters were exploiting a little-known weakness in residential proxy services to infect poorly-defended devices — like TV boxes and digital photo frames — plugged into the internal, private networks of proxy endpoints.

By the time that story went live, most of the vulnerable proxy providers had been notified by Brundage and had fixed the weaknesses in their systems. That vulnerability remediation process massively slowed Kimwolf’s ability to spread, and within hours of the story’s publication Dort created a Discord server in my name that began publishing personal information about and violent threats against Brundage, Yours Truly, and others.

Dort and friends incriminating themselves by planning swatting attacks in a public Discord server.

Last week, Dort and friends used that same Discord server (then named “Krebs’s Koinbase Kallers”) to threaten a swatting attack against Brundage, again posting his home address and personal information. Brundage told KrebsOnSecurity that local police officers subsequently visited his home in response to a swatting hoax which occurred around the same time that another member of the server posted a door emoji and taunted Brundage further.

Dort, using the alias “Meow,” taunts Synthient founder Ben Brundage with a picture of a door.

Someone on the server then linked to a cringeworthy (and NSFW) new Soundcloud diss track recorded by the user DortDev that included a stickied message from Dort saying, “Ur dead nigga. u better watch ur fucking back. sleep with one eye open. bitch.”

“It’s a pretty hefty penny for a new front door,” the diss track intoned. “If his head doesn’t get blown off by SWAT officers. What’s it like not having a front door?”

With any luck, Dort will soon be able to tell us all exactly what it’s like.

Update, 10:29 a.m.: Jacob Butler responded to requests for comment, speaking with KrebsOnSecurity briefly via telephone. Butler said he didn’t notice earlier requests for comment because he hasn’t really been online since 2021, after his home was swatted multiple times. He acknowledged making and distributing a Minecraft cheat long ago, but said he hasn’t played the game in years and was not involved in Dortsolver or any other activity attributed to the Dort nickname after 2021.

“It was a really old cheat and I don’t remember the name of it,” Butler said of his Minecraft modification. “I’m very stressed, man. I don’t know if people are going to swat me again or what. After that, I pretty much walked away from everything, logged off and said fuck that. I don’t go online anymore. I don’t know why people would still be going after me, to be completely honest.”

When asked what he does for a living, Butler said he mostly stays home and helps his mom around the house because he struggles with autism and social interaction. He maintains that someone must have compromised one or more of his old accounts and is impersonating him online as Dort.

“Someone is actually probably impersonating me, and now I’m really worried,” Butler said. “This is making me relive everything.”

But there are issues with Butler’s timeline. For example, Jacob’s voice in our phone conversation was remarkably similar to the Jacob/Dort whose voice can be heard in this Sept. 2022 Clash of Code competition between Dort and another coder (Dort lost). At around 6 minutes and 10 seconds into the recording, Dort launches into a cursing tirade that mirrors the stream of profanity in the diss rap that Dortdev posted threatening Brundage. Dort can be heard again at around 16 minutes; at around 26:00, Dort threatens to swat his opponent.

Butler said the voice of Dort is not his, exactly, but rather that of an impersonator who had likely cloned his voice.

“I would like to clarify that was absolutely not me,” Butler said. “There must be someone using a voice changer. Or something of the sorts. Because people were cloning my voice before and sending audio clips of ‘me’ saying outrageous stuff.”

Top

FreeBSD 14.4-RC1 Available

Post by FreeBSD Newsflash via FreeBSD News Flash »

The first release candidate build for the FreeBSD 14.4 release cycle is now available. ISO images for the amd64, i386, powerpc, powerpc64, powerpc64le, powerpcspe, armv7, aarch64, and riscv64 architectures are available on the FreeBSD mirror sites.
Top

FreeBSD Foundation Q4 2025 Status Update

Post by FreeBSD Foundation via FreeBSD Foundation »

Written as part of the FreeBSD Project’s 4th Quarter 2025 Status Report, check out the highlights of what we did to help FreeBSD last quarter:

The FreeBSD Foundation is a 501(c)(3) non-profit dedicated to advancing FreeBSD through both technical and non-technical support. Funded entirely by donations, the Foundation supports software development, infrastructure, security, and collaboration efforts; organizes events and developer summits; provides educational resources; and represents the FreeBSD Project in legal matters.

Here are some of the ways we supported FreeBSD in the fourth quarter of 2025.

OS Improvements

Throughout the quarter, there were 346 src, 72 ports, and 58 doc commits sponsored by the FreeBSD Foundation.

Refer to the following report entries describing much of that committed development work:

Other highlights include:

  • A new kqueue1(KQUEUE_CPONFORK) facility to copy kqueue into the child on fork

  • exterror(9) infrastructure for asynchronous I/O and GEOM

  • libuvmem(3) usermode port of vmem(9)

  • Fixes for anonymous memory corruption

  • Fine-grained support for groups and manual page MFCed to 14

  • Bug fixes

    • A 32-bit mdo(1) on a 64-bit FreeBSD would always fail with EINVAL

    • Panic when using mdo(1) and resource accounting is enabled (kern.racct.enable set to 1)

  • MAC system reviews

  • Kick-off of S4 (hibernate) support

  • Background work on VFS (in particular supporting unionfs changes)

The Foundation also continued to support two major initiatives: the Laptop Support and Usability project (in collaboration with Quantum Leap Research) and an infrastructure modernization project commissioned by the Sovereign Tech Agency. For background on both efforts, see the 2025Q1 quarterly status report.

We began preparing for FreeBSD’s 22nd consecutive participation in Google Summer of Code (GSoC). Those interested in contributing project ideas or mentoring are encouraged to contact soc-admins@FreeBSD.org.

Continuous Integration and Workflow Improvement

As part of our continued support of the FreeBSD Project, the Foundation supports a full-time staff member dedicated to improving the Project’s continuous integration system and test infrastructure.

Advocacy

In the fourth quarter of 2025, our advocacy work focused on expanding our educational video content, reaching more viewers than ever, bringing the community together for a productive Vendor Summit, and reflecting on the work that is helping sustain and grow interest in FreeBSD. Here are just a few of the ways the Foundation advocated for FreeBSD in Q4 2025:

Continuous Integration and Workflow Improvement

The Foundation supports a full-time staff member dedicated to improving the Project’s continuous integration system and test infrastructure.

Legal/FreeBSD IP

The Foundation owns the FreeBSD trademarks, and it is our responsibility to protect them. We also provide legal support for the core team to investigate questions that arise.

Go to https://freebsdfoundation.org to find more about how we support FreeBSD and how we can help you!

The post FreeBSD Foundation Q4 2025 Status Update first appeared on FreeBSD Foundation.

Top

Physics of Data Centers in Space

Post by Kristian Köhntopp via Die wunderbare Welt von Isotopp »

Did it ever occur to you why your computer has a fan, and why that fan usually stays quiet until the machine actually starts doing something?

Compute means Heat

When your laptop sits idle, very little is happening electrically. Modern processors are extremely aggressive about not working unless they have to. Large parts of the chip are clock-gated or power-gated entirely. No clock edges means no switching. No switching means almost no dynamic power use. At idle, a modern CPU is mostly just maintaining state, sipping energy to keep memory alive and respond to interrupts.

The moment real work starts, that changes.

Every clock tick forces millions or billions of transistors to switch, charge and discharge tiny capacitors, and move electrons through resistive paths. That switching energy turns directly into heat. More clock cycles per second means more switching. More switching means more heat. Clock equals work, and work equals heat.

This is why performance and temperature rise together. When you compile code, render video, or train a model, the clock ramps up, voltage often increases, and the chip suddenly dissipates tens or hundreds of watts instead of one or two watts.
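The usual first-order model behind this is the dynamic power equation P ≈ a·C·V²·f (activity factor a, switched capacitance C, supply voltage V, clock frequency f). A small sketch with illustrative, made-up numbers shows how voltage, clock, and activity multiply together:

```python
def dynamic_power(a, C, V, f):
    # Dynamic switching power: activity factor x switched capacitance
    # x voltage squared x clock frequency.
    return a * C * V**2 * f

# Illustrative values, not measurements of any real chip:
idle  = dynamic_power(a=0.02, C=30e-9, V=0.8, f=0.8e9)  # mostly clock-gated
burst = dynamic_power(a=0.30, C=30e-9, V=1.1, f=4.5e9)  # boost clocks, higher V

print(f"idle  ~ {idle:.2f} W")
print(f"burst ~ {burst:.1f} W")
```

With these assumed numbers, idle lands well under a watt while the loaded case lands in the tens of watts: the V² term is why turbo boost, which raises voltage along with frequency, is so disproportionately hot.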

The fan turns on because physics makes this unavoidable.

Hot Silicon is inefficient

Even when transistors are not switching, hot silicon still consumes power. As temperature increases, leakage currents increase exponentially. Electrons start slipping through transistors that are supposed to be off. This leakage does no useful work. It simply generates more heat, which increases temperature further, which increases leakage again. This feedback loop is one of the reasons temperature limits exist at all, and ultimately why we have fans – to keep the system under load below this critical temperature.

Above roughly 100°C, this leakage becomes a serious design concern for modern chips. Silicon melts only above 1400°C, but efficiency collapses much earlier.

You spend more and more energy just keeping the circuit alive, not computing. To compensate, designers must lower clock speeds, increase timing margins, or raise voltage, all of which reduce performance per watt.
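As a back-of-the-envelope sketch, using the commonly quoted rule of thumb that subthreshold leakage roughly doubles every 10 °C (an assumed figure for illustration, not a datasheet value):

```python
def leakage_ratio(t_cold_c, t_hot_c, doubling_every_c=10.0):
    # Exponential growth of leakage with temperature, under the
    # assumed "doubles every ~10 degrees C" rule of thumb.
    return 2 ** ((t_hot_c - t_cold_c) / doubling_every_c)

# Letting a chip drift from 60 C to 100 C multiplies leakage power ~16x:
print(leakage_ratio(60, 100))  # 2^4 = 16.0
```

That factor of sixteen is power spent doing nothing useful, which is exactly the runaway loop described above: more leakage, more heat, more leakage.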

Reliability also suffers. High temperature accelerates wear mechanisms inside the chip. Metal atoms in interconnects slowly migrate. Insulating layers degrade. Transistors age faster. A chip running hot all the time will not live as long as one kept cooler, even if it technically functions.

Performance depends on Cooling

This is why cooling exists, and why it scales with workload: It exists to keep the chip in a temperature range where switching dominates over leakage, where clocks can run fast without excessive voltage, and where the hardware will still be alive years from now.

In space, where you cannot rely on air or liquid to carry heat away, this tradeoff becomes unavoidable and very visible.

  • Run hotter, and you can radiate heat more easily.
  • Run hotter, and your electronics become slower, leakier, and shorter-lived.
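The "run hotter to radiate more easily" side of that tradeoff follows from the Stefan-Boltzmann law, P = ε·σ·A·T⁴. A quick sketch with illustrative numbers (the radiator area and emissivity are made up):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power(emissivity, area_m2, temp_k):
    # Power radiated by a surface into empty space: e * sigma * A * T^4.
    return emissivity * SIGMA * area_m2 * temp_k**4

cool = radiated_power(0.9, area_m2=10.0, temp_k=300)  # ~27 C radiator
hot  = radiated_power(0.9, area_m2=10.0, temp_k=360)  # ~87 C radiator

print(f"{cool:.0f} W at 300 K vs {hot:.0f} W at 360 K ({hot/cool:.2f}x)")
```

Because of the fourth power, raising the radiator temperature by just 20% more than doubles the heat it can reject, which is exactly why space hardware is tempted to run hot despite the efficiency and lifetime penalties above.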

Radiation outside of the Magnetosphere

On Earth, life and electronics get to pretend the universe is gentle: we sit under a magnetic cocoon.

Earth's magnetic field bends and corrals a lot of the charged particles that would otherwise slam into the atmosphere and the ground. The polar lights are indicative of particles that hit the upper atmosphere and dump their energy there instead of into your laptop or your DNA.

Low Earth Orbit (LEO) is still inside much of that protective bottle. It is not deep space: most of the time you are still inside the magnetosphere, but you pass through regions where trapped particles dip closer to Earth. The South Atlantic Anomaly is the famous example, a patch where satellites see a much higher rate of hits. Operators notice because sensors glitch, memory errors spike, and instruments get noisy.

Go higher and the protection changes. The Van Allen belts are zones of trapped particles shaped by the magnetic field. They sit above typical LEO altitudes and below geostationary orbit.

Geostationary orbit is far outside LEO, and you spend much more time in harsher particle populations and different dose conditions.

Radiation matters because a modern chip is a giant field of tiny, delicate transistors storing tiny amounts of charge. A single energetic particle can change how or even if that works.

  • Bit flips: A particle passes through silicon, leaves a trail of charge, and a memory cell or latch interprets that as a 0 becoming a 1. That is a single event upset. It does not break the chip, it just corrupts state. The usual defense is error detection and correction, ECC in memory, parity, scrubbing, retries, and lots of “trust but verify” in data paths. That defense costs area, power, and latency. You carry more bits than you asked for, you spend cycles checking them, and you sometimes redo work.
  • Latchup and destructive events: Some particle strikes can trigger parasitic structures in CMOS so a section of the chip effectively shorts power to ground: it is stuck at a 0 or a 1 and no longer switches. If you are lucky, the system detects it and power cycles that block. If you are unlucky, you get local overheating and permanent damage. The defense here is design techniques, guard rings, current limiting, fast power cutoffs, and redundancy. Redundancy costs density. You either duplicate blocks so you can route around a failed one, or you accept that some percentage of silicon will be lost over mission life and you overprovision from day one.
  • Total ionizing dose: Ionizing radiation gradually traps charge in insulating layers and at interfaces. Threshold voltages shift, leakage rises, timing changes, noise margins shrink, and eventually the chip that used to pass validation starts failing at its operating corners as the dose accumulates. This is why space hardware often talks about dose ratings and mission lifetimes, not just “it works today.” The defenses are process choices, device layout choices, and again, guardbands.
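
The bit-flip-plus-scrubbing idea from the first bullet can be sketched in a few lines. This is a toy model, not any real ECC scheme: a memory word carries an even-parity bit, a particle strike flips one bit, and the scrubber detects the mismatch and restores the word from a redundant copy.

```python
# Toy model of a single event upset and parity-based scrubbing.
# All names are illustrative; real hardware uses SECDED codes, not
# a single parity bit plus a golden copy.

def with_parity(word: int) -> tuple[int, int]:
    """Return (word, parity) so that word + parity bit has even popcount."""
    return word, bin(word).count("1") % 2

def parity_ok(word: int, parity: int) -> bool:
    """Check the stored parity against the word's current popcount."""
    return bin(word).count("1") % 2 == parity

word, p = with_parity(0b1011_0010)
golden = word                      # redundant copy kept elsewhere

word ^= 1 << 4                     # single event upset: one bit flips

assert not parity_ok(word, p)      # scrubber notices the corruption
word = golden                      # ...and repairs from the good copy
assert parity_ok(word, p)
```

A single parity bit can only detect an odd number of flipped bits, which is why real designs add correction bits and periodic scrubbing on top.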

How does space hardware survive?

Shielding and Redundant Design.

  • Put material around the electronics, often aluminum as a structural and shielding compromise. It helps, but mass is the tax you pay forever. It also only helps a bit: High energy particles penetrate a lot of material, and some shielding configurations create secondary particles when the primary hits the shield. You can reduce dose and upset rates, you cannot build a perfect bunker without turning your satellite into a brick.
  • A lot of rad hard parts use larger transistors, older process nodes, thicker oxides, and conservative voltages. Larger devices store more charge, so a stray deposit is less likely to flip a bit. Thicker insulators tolerate more ionization. Conservative voltages and clocks give you more timing margin as the device ages. All of this makes the chip slower and bigger for the same function.
  • On top of that: logic-level hardening and software paranoia. ECC everywhere. Voters and triplication for critical state, triple modular redundancy where you do the same computation three times and take the majority result. Watchdogs, reset domains, isolation boundaries, and constant self checking. You get correctness, but you spend transistors and joules on distrust.
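
Triple modular redundancy is simple enough to show directly. A minimal sketch: compute the same word three times and take the bitwise majority, so one corrupted replica is outvoted.

```python
# Triple modular redundancy: bitwise majority of three results.
# A bit of the output is 1 only if at least two inputs agree on 1,
# so a single corrupted replica cannot change the outcome.

def majority3(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote."""
    return (a & b) | (a & c) | (b & c)

correct = 0b1100_1010
upset = correct ^ (1 << 3)          # one replica took a particle hit

assert majority3(correct, correct, upset) == correct
```

The cost is exactly what the text describes: three copies of the logic plus the voter, for one result.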

Every one of these defenses hits the same three budgets.

  • Area goes up because you replicate circuits and add check bits and voters.

  • Power goes up because more circuitry toggles, and because robust designs often run at higher voltages than bleeding edge consumer silicon.

  • Speed goes down because checking takes time, retries take time, and wide safety margins force lower clocks.

Space-hardened electronics are built for survival, not speed. They are very reliable, and slow as fuck.

What does that mean for AI in Space?

If we hold all that against an H100 GPU of the kind used for AI, we can see that this is a lost cause before a single satellite launches.

The H100's GH100 die is about 814 mm² on TSMC 4N, with about 80 billion transistors.

This kind of die does not fly into space and survive longer than an afternoon.

See “A 65nm hardened ASIC technology for Space applications”, an ESA PDF from 2017.

ESA talks about 65 nm processes for its space-hardened compute: structures roughly 16 times larger than what powers an H100, which means roughly 250 times the area.

The same number of transistors as an H100 would need a roughly 0.2 m² slab, which also means energy goes up and clocks go down, down, down.

Some compute in space runs on 28 nm structures, roughly a 50x area increase compared to an H100. That’s about 40,000 mm² for the same number of transistors.
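
The area figures above are back-of-the-envelope: feature size scales roughly linearly with the node, die area with its square. A sketch of that arithmetic, treating “4N” as a nominal 4 nm (it is a marketing name, and real process densities differ, so these are order-of-magnitude numbers only):

```python
# Scale the H100's ~814 mm^2 / 80 bn transistor die to older,
# radiation-tolerant nodes by the square of the feature-size ratio.

H100_AREA_MM2 = 814
H100_NODE_NM = 4            # nominal, from the "4N" marketing name

for node_nm in (28, 65):
    area_factor = (node_nm / H100_NODE_NM) ** 2
    slab_mm2 = H100_AREA_MM2 * area_factor
    print(f"{node_nm} nm: ~{area_factor:.0f}x area, ~{slab_mm2 / 1e6:.2f} m^2")
```

This reproduces the ~50x / ~40,000 mm² figure for 28 nm and the ~0.2 m² slab for 65 nm.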

Or, in other words, a GPU does not work in space.

At all.

Redundancy costs (Factor 3)

If you try anyway, 80 bn transistors shrink to around 25 bn of usable capacity once you pay for triplication (n = 3) and redundant reserve.

So even if you sent a 4 nm node chip into space, it’s no longer an 80 bn monster, but only a relatively modest 25 bn transistor GPU fragment.

Running hotter radiates better, to the fourth power. Even relatively modest temperature increases pay off big time in radiative cooling (100 °C -> 120 °C), but they also mean more leakage, more power, and less clock. So your 25 bn transistors of net capacity will compute slower than on Earth.
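
The fourth-power claim is the Stefan-Boltzmann law: radiated power goes with absolute temperature to the fourth. A quick sketch of the 100 °C to 120 °C example:

```python
# Radiated power scales as T^4 (Stefan-Boltzmann), with T in kelvin.
# Ratio of radiated power at two radiator temperatures given in Celsius.

def radiated_ratio(t1_c: float, t2_c: float) -> float:
    """P(t2) / P(t1) for an ideal black-body radiator."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return (t2 / t1) ** 4

print(f"{radiated_ratio(100, 120):.2f}x")
```

A 20 °C bump at these temperatures buys roughly 23% more heat shed per unit of radiator area, which is why space hardware is tempted to run hot despite the leakage penalty.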

A GPU does not work in space.

So you have a DC in Space, what now.

We can now talk about bandwidth, latency and spectrum carrying capacity, because we also want to TALK to those data centers from Earth, but that’s kind of a moot point already. If it’s a LEO data center, it moves across the sky relatively quickly. It will be in range for a few minutes every 1.5 hours or so. Or you track it as it circles the Earth, using up earthbound communication capacity and paying latency costs.
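
The “every 1.5 hours or so” figure follows from Kepler's third law for a circular orbit. A sketch, assuming a typical 550 km LEO altitude (the altitude is my illustrative pick, not from the text):

```python
# Circular orbital period from Kepler's third law: T = 2*pi*sqrt(a^3/mu).
import math

MU_EARTH = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6_371_000         # m, mean Earth radius

def orbital_period_minutes(altitude_km: float) -> float:
    a = R_EARTH + altitude_km * 1000        # semi-major axis in meters
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

print(f"{orbital_period_minutes(550):.0f} minutes")
```

At 550 km that works out to roughly 96 minutes per orbit, of which only a few minutes put the satellite above any given ground station's horizon.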

We can then talk about launch costs, kilograms, lifetime, and the atmospheric effects of that material on re-entry, but it will never come to that.

Top

‘Starkiller’ Phishing Service Proxies Real Login Pages, MFA

Post by Brian Krebs via Krebs on Security »

Most phishing websites are little more than static copies of login pages for popular online destinations, and they are often quickly taken down by anti-abuse activists and security firms. But a stealthy new phishing-as-a-service offering lets customers sidestep both of these pitfalls: It uses cleverly disguised links to load the target brand’s real website, and then acts as a relay between the victim and the legitimate site — forwarding the victim’s username, password and multi-factor authentication (MFA) code to the legitimate site and returning its responses.

There are countless phishing kits that would-be scammers can use to get started, but successfully wielding them requires some modicum of skill in configuring servers, domain names, certificates, proxy services, and other repetitive tech drudgery. Enter Starkiller, a new phishing service that dynamically loads a live copy of the real login page and records everything the user types, proxying the data from the legitimate site back to the victim.

According to an analysis of Starkiller by the security firm Abnormal AI, the service lets customers select a brand to impersonate (e.g., Apple, Facebook, Google, Microsoft et al.) and generates a deceptive URL that visually mimics the legitimate domain while routing traffic through the attacker’s infrastructure.

For example, a phishing link targeting Microsoft customers appears as “login.microsoft.com@[malicious/shortened URL here].” The “@”-sign trick in the link is an oldie but goodie, because everything before the “@” in a URL is considered username data, and the real landing page is what comes after the “@” sign. Here’s what it looks like in the target’s browser:

Image: Abnormal AI. The actual malicious landing page is blurred out in this picture, but we can see it ends in .ru. The service also offers the ability to insert links from different URL-shortening services.
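
You can watch a URL parser apply that rule directly. A sketch using Python's standard library, with a made-up placeholder domain standing in for the attacker's host:

```python
# Everything before "@" in the authority part of a URL is userinfo,
# not a hostname; the real destination is what follows the "@".
from urllib.parse import urlsplit

url = "https://login.microsoft.com@evil.example/signin"
parts = urlsplit(url)

assert parts.username == "login.microsoft.com"  # just userinfo, not a host
assert parts.hostname == "evil.example"         # where the browser actually goes
```

That is why the link survives a quick glance: the familiar brand name is the first thing the eye lands on, while the parser ignores it entirely.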

Once Starkiller customers select the URL to be phished, the service spins up a Docker container running a headless Chrome browser instance that loads the real login page, Abnormal found.

“The container then acts as a man-in-the-middle reverse proxy, forwarding the end user’s inputs to the legitimate site and returning the site’s responses,” Abnormal researchers Callie Baron and Piotr Wojtyla wrote in a blog post on Thursday. “Every keystroke, form submission, and session token passes through attacker-controlled infrastructure and is logged along the way.”

Starkiller in effect offers cybercriminals real-time session monitoring, allowing them to live-stream the target’s screen as they interact with the phishing page, the researchers said.

“The platform also includes keylogger capture for every keystroke, cookie and session token theft for direct account takeover, geo-tracking of targets, and automated Telegram alerts when new credentials come in,” they wrote. “Campaign analytics round out the operator experience with visit counts, conversion rates, and performance graphs—the same kind of metrics dashboard a legitimate SaaS [software-as-a-service] platform would offer.”

Abnormal said the service also deftly intercepts and relays the victim’s MFA credentials, since the recipient who clicks the link is actually authenticating with the real site through a proxy, and any authentication tokens submitted are then forwarded to the legitimate service in real time.

“The attacker captures the resulting session cookies and tokens, giving them authenticated access to the account,” the researchers wrote. “When attackers relay the entire authentication flow in real time, MFA protections can be effectively neutralized despite functioning exactly as designed.”

The “URL Masker” feature of the Starkiller phishing service features options for configuring the malicious link. Image: Abnormal.

Starkiller is just one of several cybercrime services offered by a threat group calling itself Jinkusu, which maintains an active user forum where customers can discuss techniques, request features and troubleshoot deployments. One à-la-carte feature harvests email addresses and contact information from compromised sessions; the service advises that the data can be used to build target lists for follow-on phishing campaigns.

This service strikes me as a remarkable evolution in phishing, and its apparent success is likely to be copied by other enterprising cybercriminals (assuming the service performs as well as it claims). After all, phishing users this way avoids the upfront costs and constant hassles associated with juggling multiple phishing domains, and it throws a wrench in traditional phishing detection methods like domain blocklisting and static page analysis.

It also massively lowers the barrier to entry for novice cybercriminals, Abnormal researchers observed.

“Starkiller represents a significant escalation in phishing infrastructure, reflecting a broader trend toward commoditized, enterprise-style cybercrime tooling,” their report concludes. “Combined with URL masking, session hijacking, and MFA bypass, it gives low-skill cybercriminals access to attack capabilities that were previously out of reach.”

Top

Valuable News – 2026/02/23

Post by Vermaden via 𝚟𝚎𝚛𝚖𝚊𝚍𝚎𝚗 »

The Valuable News weekly series is dedicated to providing a summary of news, articles and other interesting stuff mostly but not always related to the UNIX/BSD/Linux systems. Whenever I stumble upon something worth mentioning on the Internet I just put it here.

Today the amount of information that we get from various information streams is a massive overload. Thus one needs to focus only on what is important, without the need to grep(1) the Internet every day. Hence the idea of providing such an information ‘bulk’, as I already do that grep(1).

The Usual Suspects section at the end is permanent and has links to other sites with interesting UNIX/BSD/Linux news.

Past releases are available at the dedicated NEWS page.

UNIX

GhostBSD Switches to XLibre Over Wayland.
https://ostechnix.com/ghostbsd-switches-to-xlibre-over-wayland/

Call for Testing KDE Installer Dialogs.
https://lists.freebsd.org/archives/freebsd-desktop/2026-January/007438.html

FreeBSD Git Weekly: 2026-02-09 to 2026-02-15.
https://freebsd-git-weekly.tarsnap.net/2026-02-09.html

Potabi: Technical Analysis of FreeBSD Based Desktop OS.
https://privacylife.info/potabi-technical-analysis-of-the-freebsd-based-desktop-os/

KDE Plasma 6.6 Released.
https://kde.org/announcements/plasma/6/6.6.0/

KDE Plasma 6.6 Released with Many Excellent Improvements.
https://phoronix.com/news/KDE-Plasma-6.6

Howto for FreeBSD 15.0 on Raspberry Pi 5 with NVMe.
https://lists.freebsd.org/archives/freebsd-arm/2026-February/005683.html

GhostBSD to Use XLibre Server and MATE vs. Gershwin Desktop Decision in Future.
https://phoronix.com/news/GhostBSD-Eyes-XLibre

OpenBSD Jumpstart – Anatomy of bsd.rd – No Reboot Required.
https://openbsdjumpstart.org/bsd.rd/

Facilitate Screencasting/Recording with ffmpeg(1) Wrapper fauxstream on OpenBSD.
https://github.com/rfht/fauxstream

Native FreeBSD Kerberos/LDAP with FreeIPA/IDM.
https://vermaden.wordpress.com/2026/02/18/native-freebsd-kerberos-ldap-with-freeipa-idm/

Terminals Should Generate 256 Color Palette.
https://gist.github.com/jake-stewart/0a8ea46159a7da2c808e5be2177e1783

GitLab on FreeBSD Using BastilleBSD Jail. [2023]
https://alfaexploit.com/en/posts/gitlab_on_freebsd/

Undeleted XAA Making X Up to 200x Faster Accelerated Again.
https://patreon.com/posts/undeleted-xaa-x-151028801

FreeBSD 15.0 Linuxulator with CUDA Setup.
https://github.com/isaponsoft/freebsd-ai-notes/blob/main/CUAD_and_llama-server.md

Gentoo on Codeberg.
https://gentoo.org/news/2026/02/16/codeberg.html

FreeBSD AMI ID Pages.
https://daemonology.net/blog/2026-02-19-FreeBSD-AMI-ID-pages.html

New Toy in House for AI/Gaming/Linux/Windows/FreeBSD.
https://peter.czanik.hu/posts/new-toy-in-the-house-for-ai-gaming-linux-windows-freebsd/

Farewell Rust.
https://yieldcode.blog/post/farewell-rust/

Small dbase(1) FreeBSD/Linux Tool to Create/Manage Databases via Command Line.
https://github.com/Pitbasis/dbase

OpenClaw Installation in FreeBSD Jail.
https://github.com/isaponsoft/freebsd-ai-notes/blob/main/openclaw-on-jail.md

Comparison of Cloud Storage Encryption Software.
https://dataswamp.org/~solene/2026-02-19-local-encrypted-volume-comparison.html

Bidirectional OPNsense/pfSense Firewall Configuration Migration/Conversion CLI.
https://github.com/sheridans/pfopn-convert

FreeBSD KDE Desktop Installer Script is Ready for Testing.
https://ostechnix.com/freebsd-kde-installer-call-for-testing-15-1/

HTTP/3 on FreeBSD: Getting QUIC Working with Nginx in Bastille Jail.
https://blog.hofstede.it/http3-on-freebsd-getting-quic-working-with-nginx-in-a-bastille-jail/

Building Hierarchical Jails (Podman x Native Jail) on FreeBSD 15.
https://github.com/isaponsoft/freebsd-ai-notes/blob/main/FreeBSD_jail_on_jail-en.md

FreeBSD 14.4-BETA3 Now Available.
https://lists.freebsd.org/archives/freebsd-stable/2026-February/003866.html

Back to FreeBSD: Part 1.
https://hypha.pub/back-to-freebsd-part-1

Postgres is Your Friend. ORM is Not.
https://hypha.pub/postgres-is-your-friend-orm-is-not

FreeBSD MIT Kerberos Server.
https://vermaden.wordpress.com/2026/02/22/freebsd-mit-kerberos-server/

IPv6 Addresses for OpenBSD vmm(4) Virtual Machines.
https://xosc.org/vmm-ipv6.html

Netbase is Port of NetBSD Utilities to Another UNIX Like Operating Systems.
https://github.com/littlefly365/Netbase

Process Isolation with chroot(2) on NetBSD.
https://overeducated-redneck.net/blurgh/netbsd-chroot-isolation.html

BSD Weekly – Issue 267.
https://bsdweekly.com/issues/267

Using New Bridges of FreeBSD 15.
https://blog.feld.me/posts/2026/02/using-new-bridges-freebsd-15/

Using nsnotifyd(1) with PowerDNS Secondary.
https://blog.feld.me/posts/2026/02/nsnotifyd-with-powerdns-secondary/

Linuxulator on FreeBSD Feels Like Magic.
https://hayzam.com/blog/02-linuxulator-is-awesome/

We Built Our Entire Startup Infra on FreeBSD in 2026. Now We Need to Talk.
https://reddit.com/r/freebsd/comments/1r7mp9n/we_built_our_entire_startup_infra_on_freebsd_in/

UNIX/Audio/Video

OpenBSD Test Livestream: VAAPI on AMD Radeon RX 6700 XT Playing Northgard.
https://spectra.video/w/1smqEby9CkshEQWAthSty9

2026-02-19 Bhyve Production User Call.
https://yout-ube.com/watch?v=8ByyJ8nTtQU

2026-02-18 OpenZFS Production User Call.
https://yout-ube.com/watch?v=z7QKVFs6G3k

GhostBSD Drops Xorg for XLibre.
https://yout-ube.com/watch?v=RdJ2udBG-Og

Xorg Officially Abandons master Branch for main and Throws Away 2 Years of Code.
https://yout-ube.com/watch?v=xjwQXiNhW0E

FreeBSD as Domain Controller – Microsoft Will Not Like This.
https://yout-ube.com/watch?v=GrVDAu-Mcp0

Exploring/Hacking 386BSD – Dad of FreeBSD.
https://yout-ube.com/watch?v=6jfNvIxYyhU

BSD Now 651: Spatially Aware ZFS.
https://www.bsdnow.tv/651

Hardware

WD and Seagate Confirm: Hard Drives for 2026 Sold Out.
https://heise.de/en/news/WD-and-Seagate-confirm-Hard-drives-for-2026-sold-out-11178917.html

ARM Homelab Server – Minisforum MS-R1 Review.
https://sour.coffee/2026/02/20/an-arm-homelab-server-or-a-minisforum-ms-r1-review/

Minisforum Stuffs Entire ARM Homelab in MS-R1. [2025]
https://jeffgeerling.com/blog/2025/minisforum-stuffs-entire-arm-homelab-ms-r1/

This Engine Swapped Six Speed Sedan is M7 That BMW Never Built.
https://petrolicious.com/blogs/articles/this-engine-swapped-six-speed-sedan-is-the-m7-that-bmw-never-built

Your MacBook Has Accelerometer and You Can Read It in Real Time in Python.
https://medium.com/@oli.bourbonnais/your-macbook-has-an-accelerometer-and-you-can-read-it-in-real-time-in-python-28d9395fb180

Rumours Say AMD ZEN6 Ryzen CPU Packs 12 Cores per CCD w/o Requiring Much More Area.
https://club386.com/rumours-say-amd-zen-6-ryzen-cpu-packs-12-cores-per-ccd-without-requiring-much-more-silicon-area/

Life

Use Protocols – Not Services.
https://notnotp.com/notes/use-protocols-not-services/

I Verified My LinkedIn Identity. Here is What I Actually Handed Over.
https://thelocalstack.eu/posts/linkedin-identity-verification-privacy/

I Miss Thinking Hard.
https://jernesto.com/articles/thinking_hard

Other

AI Found 12 New Vulnerabilities in OpenSSL.
https://schneier.com/blog/archives/2026/02/ai-found-twelve-new-vulnerabilities-in-openssl.html

Paged Out #8 Issue from 2026/02.
https://pagedout.institute/download/PagedOut_008.pdf

Mike Brewers Gets Porsche Restored in Poland.
https://yout-ube.com/watch?v=9Xcno1cfNlg

Keep Android Open.
https://keepandroidopen.org/

Keep Android Open.
https://f-droid.org/2026/02/20/twif.html

ISOCD-Win is Replacement for Native Amiga ISOCD App.
https://github.com/fuseoppl/isocd-win

Wikipedia Blacklists archive.today, Starts Removing 695,000 Archive Links.
https://arstechnica.com/tech-policy/2026/02/wikipedia-bans-archive-today-after-site-executed-ddos-and-altered-web-captures/

EDuke32 – Duke3D for Windows/Linux/macOS.
https://eduke32.com/

Usual Suspects

BSD Weekly.
https://bsdweekly.com/

DiscoverBSD.
https://discoverbsd.com/

BSDSec.
https://bsdsec.net/

DragonFly BSD Digest.
https://dragonflydigest.com/

FreeBSD Patch Level Table.
https://bokut.in/freebsd-patch-level-table/

FreeBSD End of Life Date.
https://endoflife.date/freebsd

Phoronix BSD News Archives.
https://phoronix.com/linux/BSD

OpenBSD Journal.
https://undeadly.org/

Call for Testing.
https://callfortesting.org/

Call for Testing – Production Users Call.
https://youtube.com/@callfortesting/videos

BSD Now Weekly Podcast.
https://www.bsdnow.tv/

Nixers Newsletter.
https://newsletter.nixers.net/entries.php

BSD Cafe Journal.
https://journal.bsd.cafe/

DragonFly BSD Digest – Lazy Reading – In Other BSDs.
https://dragonflydigest.com

BSDTV.
https://bsky.app/profile/bsdtv.bsky.social

FreeBSD Git Weekly.
https://freebsd-git-weekly.tarsnap.net/

FreeBSD Meetings.
https://youtube.com/@freebsdmeetings

BSDJedi.
https://youtube.com/@BSDJedi/videos

RoboNuggie.
https://youtube.com/@RoboNuggie/videos

GaryHTech.
https://youtube.com/@GaryHTech/videos

Sheridan Computers.
https://youtube.com/@sheridans/videos

82MHz.
https://82mhz.net/

EOF
Top

FreeBSD MIT Kerberos Server

Post by Vermaden via 𝚟𝚎𝚛𝚖𝚊𝚍𝚎𝚗 »

It often starts with a comment – your comment – and it is no different this time.

In the comments section below the Native FreeBSD Kerberos/LDAP with FreeIPA/IDM article – in one of the places it was shared – someone asked why the FreeBSD Handbook – Security – Kerberos section does not cover setting up MIT Kerberos … as FreeBSD, since 15.0-RELEASE, uses MIT Kerberos in its Base System instead of the Heimdal implementation … and that is a very good question.

MIT KRB5 1.22.1 Kerberos replaces Heimdal 1.5.2 by default. (Sponsored by The FreeBSD Foundation)

I even created a PR:289117 about it some time ago … but nothing changed since.

Encouraged by the fact that in the past the FreeBSD Handbook – Jails chapter was reworked also using information from my FreeBSD Jails Containers article – I thought that maybe it would also happen this time … and even if not – this article will serve its role before anything related to the MIT Kerberos server appears in the official FreeBSD Handbook.

 

The Table of Contents will look like that this time.

  • FreeBSD Installation
  • It Was DNS
  • MIT Kerberos Server
  • Summary

Now …

FreeBSD Installation

The install I did was pretty generic and just NextNextNext … in the FreeBSD bsdinstall(8) installer. I have chosen the Auto (ZFS) way (but it would work the same on UFS) and then set up a static 10.1.1.123/24 IP and the kerberos.example.org hostname. I also used PKGBASE but the older Distribution Sets setup will also work the same.

It will also work in a Jail (VNET or not) if needed.

This is how the /etc/rc.conf file looked after install.

kerberos # cat /etc/rc.conf
# NETWORK
hostname="kerberos.example.org"
ifconfig_vtnet0="inet 10.1.1.123/24"
defaultrouter="10.1.1.1"

# SERVICES
sshd_enable="YES"
zfs_enable="YES"
syslogd_flags="-ss"

It Was DNS

Before we start setting up Kerberos we need a DNS server.

You can use another one that you already have working – but if not – we will install and set up a basic nsd(8) DNS server first.

kerberos # hostname
kerberos.example.org

kerberos # netstat -Win -f inet
Name     Mtu Network      Address       Ipkts Ierrs Idrop    Opkts Oerrs  Coll
vtnet0     - 10.1.1.0/24  10.1.1.123        0     -     -        0     -     -
lo0        - 127.0.0.0/8  127.0.0.1         0     -     -        0     -     -

kerberos # echo nameserver 1.1.1.1 > /etc/resolv.conf

kerberos # mkdir -pv /usr/local/etc/pkg/repos

kerberos # sed s/quarterly/latest/g /etc/pkg/FreeBSD.conf > /usr/local/etc/pkg/repos/FreeBSD.conf

kerberos # pkg install -y nsd

This is what we got.

Now we will create simple DNS config.

kerberos # cat /usr/local/etc/nsd/nsd.conf
server:
  ip-address: 0.0.0.0
  port: 53
  logfile: /var/log/nsd.log

zone:
  name: example.org
  zonefile: example.org.zone

kerberos # cat /usr/local/etc/nsd/example.org.zone
$ORIGIN example.org.
$TTL 86400
@                   IN  SOA kerberos.example.org. admin.example.org. (
                        2026022101 ; serial
                        3600       ; refresh
                        600        ; retry
                        864000     ; expire
                        86400      ; minimum
                        )

                    IN  NS   kerberos.example.org.
kerberos            IN  A    10.1.1.123
*                   IN  A    10.1.1.123
@                   IN  A    10.1.1.123

_kerberos._udp      IN  SRV  01 00 88 kerberos.example.org.
_kerberos._tcp      IN  SRV  01 00 88 kerberos.example.org.
_kpasswd._udp       IN  SRV  01 00 464 kerberos.example.org.
_kerberos-adm._tcp  IN  SRV  01 00 749 kerberos.example.org.
_kerberos           IN  TXT  EXAMPLE.ORG

We can now enable and start our nsd(8) DNS server.

kerberos # service nsd enable
nsd enabled in /etc/rc.conf

kerberos # service nsd start
Starting nsd.

kerberos # nc -w 1 -v -u localhost 53
Connection to localhost 53 port [udp/domain] succeeded!

kerberos # nc -w 1 -v localhost 53
Connection to localhost 53 port [tcp/domain] succeeded!

kerberos # drill @10.1.1.123 kerberos.example.org
;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 34997
;; flags: qr aa rd ; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0 
;; QUESTION SECTION:
;; kerberos.example.org.        IN      A

;; ANSWER SECTION:
kerberos.example.org.   86400   IN      A       10.1.1.123

;; AUTHORITY SECTION:
example.org.    86400   IN      NS      kerberos.example.org.

;; ADDITIONAL SECTION:

;; Query time: 0 msec
;; SERVER: 10.1.1.123
;; WHEN: Sun Feb 22 12:44:15 2026
;; MSG SIZE  rcvd: 68

Now the KDC.

MIT Kerberos Server

Available Kerberos related settings in /etc/defaults/rc.conf file.

kerberos # awk '/kadmin|kdc/ {print $1}' /etc/defaults/rc.conf
kdc_enable="NO"
kdc_program=""
kdc_flags=""
kdc_restart="NO"
kdc_restart_delay=""
kadmind_enable="NO"
kadmind_program="/usr/libexec/kadmind"

Now we will prepare simple configuration for our MIT Kerberos server.

kerberos # cat /etc/krb5.conf 
[libdefaults]
  default_realm = EXAMPLE.ORG
[realms]
  EXAMPLE.ORG = {
    kdc = kerberos.example.org
    admin_server = kerberos.example.org
  }
[domain_realm]
  .example.org = EXAMPLE.ORG

Next we will enable and start Kerberos services.

kerberos # service kdc enable
kdc enabled in /etc/rc.conf

kerberos # service kadmind enable
kadmind enabled in /etc/rc.conf

kerberos # kdb5_util create -r EXAMPLE.ORG -s
Initializing database '/var/db/krb5kdc/principal' for realm 'EXAMPLE.ORG',
master key name 'K/M@EXAMPLE.ORG'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key: 
Re-enter KDC database master key to verify: 

kerberos # kadmin.local
Authenticating as principal root/admin@EXAMPLE.ORG with password.
kadmin.local:  add_principal root/admin@EXAMPLE.ORG
No policy specified for root/admin@EXAMPLE.ORG; defaulting to no policy
Enter password for principal "root/admin@EXAMPLE.ORG": 
Re-enter password for principal "root/admin@EXAMPLE.ORG": 
Principal "root/admin@EXAMPLE.ORG" created.
kadmin.local:  listprincs
K/M@EXAMPLE.ORG
kadmin/admin@EXAMPLE.ORG
kadmin/changepw@EXAMPLE.ORG
krbtgt/EXAMPLE.ORG@EXAMPLE.ORG
root/admin@EXAMPLE.ORG
kadmin.local:  exit

kerberos # cat /var/db/krb5kdc/kadm5.acl
*/admin@EXAMPLE.ORG  *

kerberos # service kdc start
Starting kdc.

kerberos # service kadmind start
Starting kadmind.

kerberos # kinit root/admin
kinit: Cannot contact any KDC for realm 'EXAMPLE.ORG' while getting initial credentials

The above error appeared because we still use the 1.1.1.1 DNS server – it was needed only for package installation – we will now switch to our own 10.1.1.123 DNS server.

kerberos # echo nameserver 10.1.1.123 > /etc/resolv.conf

kerberos # kinit root/admin
Password for root/admin@EXAMPLE.ORG: 

kerberos # klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: root/admin@EXAMPLE.ORG

Valid starting     Expires            Service principal
02/22/26 05:38:37  02/23/26 05:38:37  krbtgt/EXAMPLE.ORG@EXAMPLE.ORG

kerberos # kadmin.local 
Authenticating as principal root/admin@EXAMPLE.ORG with password.
kadmin.local:  exit

kerberos # kadmin
Authenticating as principal root/admin@EXAMPLE.ORG with password.
Password for root/admin@EXAMPLE.ORG: 
kadmin:  exit

Seems to work properly.

We can add some principal or host for a short test.

kerberos # kadmin.local 
Authenticating as principal root/admin@EXAMPLE.ORG with password.
kadmin.local:  add_principal vermaden
No policy specified for vermaden@EXAMPLE.ORG; defaulting to no policy
Enter password for principal "vermaden@EXAMPLE.ORG": 
Re-enter password for principal "vermaden@EXAMPLE.ORG": 
Principal "vermaden@EXAMPLE.ORG" created.
kadmin.local:  get_principal vermaden
Principal: vermaden@EXAMPLE.ORG
Expiration date: [never]
Last password change: Sun Feb 22 05:42:22 UTC 2026
Password expiration date: [never]
Maximum ticket life: 1 day 00:00:00
Maximum renewable life: 0 days 00:00:00
Last modified: Sun Feb 22 05:42:23 UTC 2026 (root/admin@EXAMPLE.ORG)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 2
Key: vno 1, aes256-cts-hmac-sha1-96
Key: vno 1, aes128-cts-hmac-sha1-96
MKey: vno 1
Attributes:
Policy: [none]
kadmin.local:  exit

kerberos # kdestroy -A

kerberos # kinit vermaden
Password for vermaden@EXAMPLE.ORG: 

kerberos # klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: vermaden@EXAMPLE.ORG

Valid starting     Expires            Service principal
02/22/26 05:45:52  02/23/26 05:45:52  krbtgt/EXAMPLE.ORG@EXAMPLE.ORG

… and now host for a test.

kerberos # kadmin.local
Authenticating as principal root/admin@EXAMPLE.ORG with password.
kadmin.local:  add_principal host/myserver.example.org
No policy specified for host/myserver.example.org@EXAMPLE.ORG; defaulting to no policy
Enter password for principal "host/myserver.example.org@EXAMPLE.ORG": 
Re-enter password for principal "host/myserver.example.org@EXAMPLE.ORG": 
Principal "host/myserver.example.org@EXAMPLE.ORG" created.
kadmin.local:  ktadd -k /root/myserver.example.org host/myserver.example.org
Entry for principal host/myserver.example.org with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/root/myserver.example.org.
Entry for principal host/myserver.example.org with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/root/myserver.example.org.
kadmin.local:  exit

kerberos # strings -t d -n 5 /root/myserver.example.org
     10 EXAMPLE.ORG
     29 myserver.example.org
    106 EXAMPLE.ORG
    125 myserver.example.org

kerberos # k5srvutil -f /root/myserver.example.org list
Keytab name: FILE:/root/myserver.example.org
KVNO Principal
---- --------------------------------------------------------------------------
2 host/myserver.example.org@EXAMPLE.ORG
2 host/myserver.example.org@EXAMPLE.ORG


Seems to work as desired.

Summary

These are all the services running and listening.

kerberos # ps aux | grep -e kdc -e kadmind -e nsd -e RSS
USER   PID  %CPU %MEM   VSZ   RSS TT  STAT STARTED      TIME COMMAND
nsd  75620   0.0  0.6 74108 13040  -  Ss   05:34     0:00.06 nsd: xfrd (nsd)
nsd  75737   0.0  2.0 55312 42088  -  S    05:34     0:00.07 nsd: main (nsd)
nsd  75868   0.0  2.1 79888 42488  -  I    05:34     0:00.00 nsd: server 1 (nsd)
root 82997   0.0  0.5 22972 10636  -  Ss   05:35     0:00.01 /usr/libexec/krb5kdc
root 90942   0.0  0.5 22936 10620  -  Ss   05:35     0:00.01 /usr/libexec/kadmind
root  1468   0.0  0.1 14164  2680  0  S+   05:44     0:00.00 grep -e kdc -e kadmind -e nsd -e RSS

kerberos # sockstat -l4
USER COMMAND      PID FD PROTO LOCAL ADDRESS         FOREIGN ADDRESS      
root kadmind    90942  9 udp4  *:464                 *:*                  
root kadmind    90942 11 tcp4  *:464                 *:*                  
root kadmind    90942 13 tcp4  *:749                 *:*                  
root krb5kdc    82997  9 udp4  *:88                  *:*                  
root krb5kdc    82997 11 tcp4  *:88                  *:*                  
nsd  nsd        75868  4 udp4  *:53                  *:*                  
nsd  nsd        75868  5 tcp4  *:53                  *:*                  
nsd  nsd        75737  4 udp4  *:53                  *:*                  
nsd  nsd        75737  5 tcp4  *:53                  *:*                  
nsd  nsd        75620  4 udp4  *:53                  *:*                  
nsd  nsd        75620  5 tcp4  *:53                  *:*                  
root sshd       50118  7 tcp4  *:22                  *:*      

After all the changes this is how the final /etc/rc.conf file looks.

kerberos # cat /etc/rc.conf
# NETWORK
hostname="kerberos.example.org"
ifconfig_vtnet0="inet 10.1.1.123/24"
defaultrouter="10.1.1.1"

# SERVICES
sshd_enable="YES"
zfs_enable="YES"
syslogd_flags="-ss"
kdc_enable="YES"
kadmind_enable="YES"
nsd_enable="YES"

Now – this article was about how to set up a basic MIT Kerberos server on FreeBSD – not a complete guide on how to configure and use a Kerberos server – for that I send you to the official MIT Kerberos Documentation available here.

EOF

nagios03: drive recovery

Post by Dan Langille via Dan Langille's Other Diary »

After zpool upgrade blocked by gpart: /dev/da0p1: not enough space, I’ve decided to create a new Azure VM, snapshot the now-faulty drive, attach it to the new host, and start zfs replication to copy the data to a new drive. Or something like that. The existing drive needs to be imported with a checkpoint rollback, then copied to a drive with different partition sizes.

Here’s the new host:

dvl@nagios03-recovery:~ $ gpart show
=>      34  62984125  da0  GPT  (30G)
        34      2014       - free -  (1.0M)
      2048       348    1  freebsd-boot  (174K)
      2396     66584    2  efi  (33M)
     68980  62914560    3  freebsd-zfs  (30G)
  62983540       619       - free -  (310K)

=>      40  33554352  da1  GPT  (16G)
        40  29360064    1  freebsd-ufs  (14G)
  29360104   4194288    2  freebsd-swap  (2.0G)

dvl@nagios03-recovery:~ $ 

My first impression: why only 174K for the boot partition? Then I saw the efi partition. I’m not familiar with this layout. I’ve only seen one or the other before.

A copy of the faulty drive has been created: nagios03-copy-for-checkpoint-rewind

This is the drive being attached:

Feb 19 21:45:57 freebsd kernel: da2 at storvsc1 bus 0 scbus1 target 0 lun 0
Feb 19 21:45:57 freebsd kernel: da2: <Msft Virtual Disk 1.0> Fixed Direct Access SPC-3 SCSI device
Feb 19 21:45:57 freebsd kernel: da2: 300.000MB/s transfers
Feb 19 21:45:57 freebsd kernel: da2: Command Queueing enabled
Feb 19 21:45:57 freebsd kernel: da2: 32768MB (67108864 512 byte sectors)
Feb 19 21:45:57 freebsd kernel: (da2:storvsc1:0:0:0): CACHE PAGE TOO SHORT data len 15 desc len 0
Feb 19 21:45:57 freebsd kernel: (da2:storvsc1:0:0:0): Mode page 8 missing, disabling SYNCHRONIZE CACHE

But:

dvl@nagios03-recovery:~ $ zpool import --rewind-to-checkpoint zroot newzroot
cannot import 'zroot': no such pool available

dvl@nagios03-recovery:~ $ gpart show da2
gpart: No such geom: da2.

dvl@nagios03-recovery:~ $ sudo diskinfo -v /dev/da2
/dev/da2
	512         	# sectorsize
	34359738368 	# mediasize in bytes (32G)
	67108864    	# mediasize in sectors
	4096        	# stripesize
	0           	# stripeoffset
	4177        	# Cylinders according to firmware.
	255         	# Heads according to firmware.
	63          	# Sectors according to firmware.
	Msft Virtual Disk	# Disk descr.
	            	# Disk ident.
	storvsc1    	# Attachment
	Yes         	# TRIM/UNMAP support
	Unknown     	# Rotation rate in RPM
	Not_Zoned   	# Zone Mode

NOTE: I later realized I need sudo on that import.

Let’s try creating the drive again.

Feb 19 21:55:23 freebsd kernel: da3 at storvsc1 bus 0 scbus1 target 0 lun 1
Feb 19 21:55:23 freebsd kernel: da3: <Msft Virtual Disk 1.0> Fixed Direct Access SPC-3 SCSI device
Feb 19 21:55:23 freebsd kernel: da3: 300.000MB/s transfers
Feb 19 21:55:23 freebsd kernel: da3: Command Queueing enabled
Feb 19 21:55:23 freebsd kernel: da3: 32768MB (67108864 512 byte sectors)
Feb 19 21:55:23 freebsd kernel: (da3:storvsc1:0:0:1): CACHE PAGE TOO SHORT data len 15 desc len 0
Feb 19 21:55:23 freebsd kernel: (da3:storvsc1:0:0:1): Mode page 8 missing, disabling SYNCHRONIZE CACHE
Feb 19 21:55:23 freebsd kernel: GEOM: da3: the secondary GPT header is not in the last LBA.

And:

dvl@nagios03-recovery:~ $ gpart show da3
=>      34  62984125  da3  GPT  (32G) [CORRUPT]
        34      2014       - free -  (1.0M)
      2048       345    1  freebsd-boot  (173K)
      2393     66584    2  efi  (33M)
     68977  62914560    3  freebsd-zfs  (30G)
  62983537       622       - free -  (311K)

There. That’s better. Let’s try the import.

But first:

dvl@nagios03-recovery:~ $ sudo gpart recover da3
da3 recovered
dvl@nagios03-recovery:~ $ gpart show da3
=>      40  67108784  da3  GPT  (32G)
        40      2008       - free -  (1.0M)
      2048       345    1  freebsd-boot  (173K)
      2393     66584    2  efi  (33M)
     68977  62914560    3  freebsd-zfs  (30G)
  62983537   4125287       - free -  (2.0G)

dvl@nagios03-recovery:~ $ 

So now:

dvl@nagios03-recovery:~ $ zpool import --rewind-to-checkpoint zroot newzroot
cannot import 'zroot': no such pool available
dvl@nagios03-recovery:~ $ zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  29.5G  3.26G  26.2G        -         -      -    11%  1.00x    ONLINE  -
dvl@nagios03-recovery:~ $ zpool import
cannot discover pools: permission denied

Ahh!

dvl@nagios03-recovery:~ $ sudo zpool import --rewind-to-checkpoint zroot newzroot
cannot import 'zroot': pool was previously in use from another system.
Last accessed by nagios03.unixathome.org (hostid=0) at Thu Feb 19 21:51:50 2026
The pool can be imported, use 'zpool import -f' to import the pool.

Good.

dvl@nagios03-recovery:~ $ sudo zpool import -f --rewind-to-checkpoint zroot newzroot
dvl@nagios03-recovery:~ $ sudo zpool import -f --rewind-to-checkpoint zroot newzroot
dvl@nagios03-recovery:~ $ zpool status newzroot
No such file or directory
dvl@nagios03-recovery:~ $ zpool list
No such file or directory
dvl@nagios03-recovery:~ $ zfs list
No such file or directory
dvl@nagios03-recovery:~ $ 

Well, that seems to have screwed up everything. I know why: the new zpool’s filesystems got mounted over the running system. I should have added -N (import the pool without mounting any file systems), i.e. zpool import -f -N --rewind-to-checkpoint zroot newzroot.

This is not the first time, nor the last time I have forgotten this detail.

I power cycled the VM, which didn’t work. I detached the two extra data disks (attempt 1 and attempt 2).

And I can’t log in again. The console doesn’t help much.

Oh, you have to click Apply after removing the drives.

Now it works:

dvl@nagios03-recovery:~ $ sudo zpool import -N --rewind-to-checkpoint newzroot
cannot import 'newzroot': checkpoint does not exist
	Destroy and re-create the pool from
	a backup source.
dvl@nagios03-recovery:~ $ sudo zpool import -N newzroot
dvl@nagios03-recovery:~ $ zpool status
  pool: newzroot
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
	The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(7) for details.
config:

	NAME                                          STATE     READ WRITE CKSUM
	newzroot                                      ONLINE       0     0     0
	  gptid/4a28c004-1f4f-11ef-ae18-002590ec5bf2  ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       ONLINE       0     0     0
	  da0p3     ONLINE       0     0     0

errors: No known data errors
dvl@nagios03-recovery:~ $ 

So the copied drive is there, and I can access it if I need to.

However

To be fair, I can reconstruct this VM from Ansible. I think that may be less work than this. Create a new VM, install everything there.

I think that’s less work than trying to copy …. oh wait.

Now that I have the drive data, I can create another drive, partition it nicely, copy the data from this drive, profit.

Perhaps tomorrow.

Actually (the above was written a few days ago): I think I’ll create a new drive and copy everything over from the old drive. Easy.

I’ll post that soon, but I’ve been distracted by PostgreSQL 18 upgrades.


Upgrading PostgreSQL in place on FreeBSD

Post by Dan Langille via Dan Langille's Other Diary »

I’ve updated one of my PostgreSQL instances to PostgreSQL 18; now it’s time to update the others. This time, I’m going to try pg_upgrade. My usual approach is pg_dump and pg_restore.

As this is my first attempt, I’m posting this mostly for future reference. There will be another blog post when I try this again, which should be soon; this paragraph will link to that post when it is available.

In this post:

  • FreeBSD 15.0
  • PostgreSQL 16.12 (pg03)
  • PostgreSQL 18.2 (pg02)

The names in (brackets) are the names of the jail in question.

If you’re upgrading in place, and not copying data around like me, skip down until you see Saving the old binaries.

I’m reading http://www.unibia.com/unibianet/freebsd/upgrading-between-major-versions-postgresql-freebsd and thinking this might work well for me.

The overview of upgrade-in-place

The PostgreSQL upgrade-in-place needs these main parts:

  1. The old binaries (e.g. postgresql16-server-16.12.pkg)
  2. The new binaries (postgresql18-server-18.2.pkg)
  3. The old data (/var/db/postgres/data16)

Keep that in mind as I go through this. We can’t install both packages at once, so we’ll untar the old package into a safe location.

How you get that package is up to you. Try /var/cache/pkg, or the FreeBSD package servers, or (while you still have the old package installed) run pkg create postgresql16-server (for example).
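
One detail worth knowing about that last option: a FreeBSD .pkg file is a tar archive, which is why plain tar(1) can extract it. A self-contained illustration with a stand-in archive – the file and directory names below are made up for demonstration, not a real package:

```shell
# Build a stand-in "package" archive and extract it into a staging directory,
# mirroring the tar xf ... -C $OLDBIN step used later in the post.
tmp=$(mktemp -d)
mkdir -p "$tmp/pkgroot/usr/local/bin"
echo 'stand-in binary' > "$tmp/pkgroot/usr/local/bin/pg_upgrade"
tar -cf "$tmp/old.pkg" -C "$tmp/pkgroot" usr

OLDBIN="$tmp/oldbin"
mkdir -p "$OLDBIN"
tar -xf "$tmp/old.pkg" -C "$OLDBIN"
ls "$OLDBIN/usr/local/bin"   # prints: pg_upgrade
```

The same pattern works on a real .pkg file; nothing gets installed, the files just land under the staging directory.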

My data

Ignore this section if you already have the data in place. I’m testing this process, so I’m documenting this part here.

This is how the data is laid out. My idea: snapshot line 7 and use it in line 12.

[18:23 r730-01 dvl ~] % zfs list | grep pg
data02/jails/pg01                                                   34.9G   175G  10.8G  /jails/pg01
data02/jails/pg02                                                   12.7G   175G  11.6G  /jails/pg02
data02/jails/pg03                                                   11.5G   175G  10.8G  /jails/pg03
data03/pg01                                                         75.7G  5.47T    96K  none
data03/pg01/freshports.dvl                                          37.1G  5.47T  27.6G  /jails/pg01/var/db/postgres.freshports.dvl
data03/pg01/postgres                                                38.7G  5.47T  28.1G  /jails/pg01/var/db/postgres
data03/pg02                                                         78.5G  5.47T    88K  none
data03/pg02/postgres                                                78.5G  5.47T  51.8G  /jails/pg02/var/db/postgres
data03/pg02/rsyncer                                                 1.02M  5.47T   144K  /jails/pg02/usr/home/rsyncer/backups
data03/pg03                                                          769G  5.47T    88K  none
data03/pg03/postgres                                                 570G  5.47T   448G  /jails/pg03/var/db/postgres
data03/pg03/rsyncer                                                  199G  5.47T  33.2G  /jails/pg03/usr/home/rsyncer/backups
data03/poudriere/ports/pgeu_system                                  1.06G  5.47T  1.06G  /usr/local/poudriere/ports/pgeu_system

The database is on a separate filesystem from the jail. Why? For situations just like this.

Note: I’m snapshotting a live-in-use database. That’s not always ideal. However, for this trial proof-of-concept, I’m content to accept that.

Clone, copy, and disable

As with the previous section, you can skip this one if you’re not mucking around copying data from one instance to another.

[18:23 r730-01 dvl ~] % sudo zfs snapshot data03/pg03/postgres@for.copy.1
[18:34 r730-01 dvl ~] % sudo service jail stop pg02
Stopping jails: pg02.

[18:36 r730-01 dvl ~] % sudo zfs rename data03/pg02/postgres data03/pg02/postgres.original
[18:36 r730-01 dvl ~] % sudo zfs set canmount=off data03/pg02/postgres.original
[18:36 r730-01 dvl ~] % sudo zfs clone data03/pg03/postgres@for.copy.1 data03/pg02/postgres
[18:43 r730-01 dvl ~] % sudo zfs set mountpoint=/jails/pg02/var/db/postgres data03/pg02/postgres

[18:37 r730-01 dvl ~] % sudoedit /jails/pg02/etc/rc.conf          

That sudoedit is me setting postgresql_enable="NO" in /etc/rc.conf so it doesn’t start up with the new data, just yet.
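
For reference, the line in question (the file is /jails/pg02/etc/rc.conf as seen from the host, i.e. /etc/rc.conf inside the jail):

```shell
# inside the pg02 jail's rc.conf – keep PostgreSQL from auto-starting on the cloned data
postgresql_enable="NO"
```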

Then I started the jail back up:

[18:44 r730-01 dvl ~] % sudo service jail start pg02     
Starting jails: pg02.

And logging in, it looks right:

[18:44 pg02 dvl ~] % ls -l /var/db/postgres
total 9
drwx------  19 postgres postgres 26 2026.02.13 18:23 data16/

Work not shown here

I’m not showing how to obtain the packages for the old binaries.

The host contains the old and new packages (not necessarily installed – I mean the .pkg files here).

The host has already been updated to PostgreSQL 18 (the destination) from PostgreSQL 16. The initdb has not been done yet.

Saving the old binaries

My goal is to make this process data driven: Just update the vars and go.

In this section, I extract the old packages into the OLDBIN directory.

[20:26 pg02 dvl ~/tmp] % OLDBIN=~/tmp/pg-upgrade
[20:26 pg02 dvl ~/tmp] % mkdir $OLDBIN
[20:26 pg02 dvl ~/tmp] % OLD_POSTGRES_VERSION=16
[20:26 pg02 dvl ~/tmp] % NEW_POSTGRES_VERSION=18
[20:26 pg02 dvl ~/tmp] % OLDPKG_S=postgresql16-server-16.12.pkg
[20:26 pg02 dvl ~/tmp] % OLDPKG_C=postgresql16-contrib-16.12_1.pkg
[20:27 pg02 dvl /var/db/pkg] % cd /var/cache/pkg
[20:27 pg02 dvl /var/cache/pkg] % tar xf $OLDPKG_S -C $OLDBIN
tar: Removing leading '/' from member names
[20:27 pg02 dvl /var/cache/pkg] % tar xf $OLDPKG_C -C $OLDBIN
tar: Removing leading '/' from member names
[20:27 pg02 dvl /var/cache/pkg] % cd $OLDBIN
[20:27 pg02 dvl ~/tmp/pg-upgrade] % usr/local/bin/pg_upgrade -V
pg_upgrade (PostgreSQL) 16.12
[20:27 pg02 dvl ~/tmp/pg-upgrade] % 

initdb

This section does the initdb, creating the PostgreSQL 18 cluster.

[20:15 pg02 dvl ~] % ls -l /var/db/postgres 
total 9
drwx------  19 postgres postgres 26 2026.02.13 18:23 data16/
[20:15 pg02 dvl ~] % sudo service postgresql oneinitdb
initdb postgresql
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with this locale configuration:
  locale provider:   libc
  LC_COLLATE:  C
  LC_CTYPE:    C.UTF-8
  LC_MESSAGES: C.UTF-8
  LC_MONETARY: C.UTF-8
  LC_NUMERIC:  C.UTF-8
  LC_TIME:     C.UTF-8
The default text search configuration will be set to "english".

Data page checksums are enabled.

creating directory /var/db/postgres/data18 ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default "max_connections" ... 100
selecting default "shared_buffers" ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /usr/local/bin/pg_ctl -D /var/db/postgres/data18 -l logfile start

[20:15 pg02 dvl ~] % ls -l /var/db/postgres           
total 17
drwx------  19 postgres postgres 26 2026.02.13 18:23 data16/
drwx------  19 postgres postgres 24 2026.02.21 20:15 data18/

Shown above, the old and new data directories.

Not shown here

What’s not shown next is making sure the new configuration is what you want (postgresql.conf, for example).

The Upgrade

With everything now in place, I become root in the pg02 jail.

[root@pg02 ~]# su -l postgres

This part is formatted for easy copy/paste:

OLDBIN=/usr/home/dvl/tmp/pg-upgrade
OLD_POSTGRES_VERSION=16
NEW_POSTGRES_VERSION=18
pg_upgrade -b ${OLDBIN}/usr/local/bin/ -d /var/db/postgres/data${OLD_POSTGRES_VERSION}/ \
 -B /usr/local/bin/ -D /var/db/postgres/data${NEW_POSTGRES_VERSION}/ -U postgres

The first time I ran this, I got:

$ pg_upgrade -b ${OLDBIN}/usr/local/bin/ -d /var/db/postgres/data${OLD_POSTGRES_VERSION}/ -B /usr/local/bin/ -D /var/db/postgres/data${NEW_POSTGRES_VERSION}/ -U postgres
Performing Consistency Checks
-----------------------------
Checking cluster versions                                     ok

old cluster does not use data checksums but the new one does
Failure, exiting

The next step was a fun trial. However, since this host is running ZFS, I’m convinced checksums at the application level make no sense when the filesystem is already doing it. Tangent: I think adding postgresql_initdb_flags="--encoding=utf-8 --lc-collate=C --no-data-checksums" to /etc/rc.conf will suffice (based on the rc.d script and initdb). That will be tested in my next post.
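
Spelled out as a copy/paste-safe rc.conf fragment (still untested, as noted above – the flag names come from initdb(1), and passing them via postgresql_initdb_flags is based on my reading of the rc.d script):

```shell
# /etc/rc.conf – hypothetical, to be verified in the next post
postgresql_initdb_flags="--encoding=utf-8 --lc-collate=C --no-data-checksums"
```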

At this point, I should have done another initdb, disabling checksums.

But. I. Did. Not.

OK, let’s try this (re pg_checksums), because I’ve never done it before.

$ ${OLDBIN}/usr/local/bin/pg_checksums -e -D /var/db/postgres/data${OLD_POSTGRES_VERSION}/
Checksum operation completed
Files scanned:   8344
Blocks scanned:  69123852
Files written:  7092
Blocks written: 69122772
pg_checksums: syncing data directory
pg_checksums: updating control file
Checksums enabled in cluster

Good. Now on to the main show. Notice lines 29-30.

$ ${OLDBIN}/usr/local/bin/pg_checksums -e -D /var/db/postgres/data${OLD_POSTGRES_VERSION}/
Checksum operation completed
Files scanned:   8344
Blocks scanned:  69123852
Files written:  7092
Blocks written: 69122772
pg_checksums: syncing data directory
pg_checksums: updating control file
Checksums enabled in cluster
$ time pg_upgrade -b ${OLDBIN}/usr/local/bin/ -d /var/db/postgres/data${OLD_POSTGRES_VERSION}/ \
 -B /usr/local/bin/ -D /var/db/postgres/data${NEW_POSTGRES_VERSION}/ -U postgres
Performing Consistency Checks
-----------------------------
Checking cluster versions                                     ok
Checking database connection settings                         ok
Checking database user is the install user                    ok
Checking for prepared transactions                            ok
Checking for contrib/isn with bigint-passing mismatch         ok
Checking data type usage                                      ok
Checking for not-null constraint inconsistencies              ok
Creating dump of global objects                               ok
Creating dump of database schemas                             
                                                              ok
Checking for presence of required libraries                   ok
Checking database user is the install user                    ok
Checking for prepared transactions                            ok
Checking for new cluster tablespace directories               ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Setting locale and encoding for new cluster                   ok
Analyzing all rows in the new cluster                         ok
Freezing all rows in the new cluster                          ok
Deleting files from new pg_xact                               ok
Copying old pg_xact to new server                             ok
Setting oldest XID for new cluster                            ok
Setting next transaction ID and epoch for new cluster         ok
Deleting files from new pg_multixact/offsets                  ok
Copying old pg_multixact/offsets to new server                ok
Deleting files from new pg_multixact/members                  ok
Copying old pg_multixact/members to new server                ok
Setting next multixact ID and offset for new cluster          ok
Resetting WAL archives                                        ok
Setting frozenxid and minmxid counters in new cluster         ok
Restoring global objects in the new cluster                   ok
Restoring database schemas in the new cluster                 
                                                              ok
Copying user relation files                                   
                                                              ok
Setting next OID for new cluster                              ok
Sync data directory to disk                                   ok
Creating script to delete old cluster                         ok
Checking for extension updates                                notice

Your installation contains extensions that should be updated
with the ALTER EXTENSION command.  The file
    update_extensions.sql
when executed by psql by the database superuser will update
these extensions.

Upgrade Complete
----------------
Some statistics are not transferred by pg_upgrade.
Once you start the new server, consider running these two commands:
    /usr/local/bin/vacuumdb -U postgres --all --analyze-in-stages --missing-stats-only
    /usr/local/bin/vacuumdb -U postgres --all --analyze-only
Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh
     6199.21 real         2.83 user      1289.06 sys
$ 

That’s about 103 minutes. Not bad for the dataset size (78.5G), given the host is reading from and writing to the same ZFS dataset.
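
For reference, converting the ‘real’ value from the time(1) output into minutes:

```shell
# 6199.21 real seconds, as reported by time(1) in the transcript above
mins=$(awk 'BEGIN { printf "%.0f", 6199.21 / 60 }')
echo "${mins} minutes"   # prints: 103 minutes
```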

I’ll do those above recommended actions two sections down.

Dataset size

Here’s a list of snapshots taken on that new dataset. It’s not surprising that the size increases as the new data arrives.

[0:42 r730-01 dvl ~] % zfs list -r -t snapshot data03/pg02/postgres              
NAME                                                           USED  AVAIL  REFER  MOUNTPOINT
data03/pg02/postgres@autosnap_2026-02-21_18:45:08_daily          0B      -   448G  -
data03/pg02/postgres@autosnap_2026-02-21_18:45:08_hourly         0B      -   448G  -
data03/pg02/postgres@autosnap_2026-02-21_19:00:06_daily          0B      -   448G  -
data03/pg02/postgres@autosnap_2026-02-21_19:00:06_hourly         0B      -   448G  -
data03/pg02/postgres@autosnap_2026-02-21_20:00:05_hourly         0B      -   448G  -
data03/pg02/postgres@autosnap_2026-02-21_21:01:53_hourly       944K      -   419G  -
data03/pg02/postgres@autosnap_2026-02-21_22:01:40_hourly       936K      -   367G  -
data03/pg02/postgres@autosnap_2026-02-21_23:00:01_hourly      1.20M      -   355G  -
data03/pg02/postgres@autosnap_2026-02-22_00:01:05_daily        184K      -   389G  -
data03/pg02/postgres@autosnap_2026-02-22_00:01:05_hourly       176K      -   389G  -
data03/pg02/postgres@autosnap_2026-02-22_01:03:33_hourly       856K      -   602G  -
data03/pg02/postgres@autosnap_2026-02-22_02:00:08_hourly         0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_03:00:00_hourly         0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_04:00:01_hourly         0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_05:00:05_hourly         0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_06:00:02_hourly         0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_07:00:05_hourly         0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_08:00:07_hourly         0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_09:00:04_hourly         0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_10:00:05_hourly         0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_11:00:03_hourly         0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_12:00:04_hourly         0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_13:00:01_hourly         0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_13:30:00_frequently     0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_13:45:08_frequently     0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_14:00:06_hourly         0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_14:00:06_frequently     0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_14:15:09_frequently     0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_14:30:01_frequently     0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_14:45:11_frequently     0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_15:00:06_hourly         0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_15:00:06_frequently     0B      -   709G  -

Recommended actions

In this section, I run the commands suggested by the pg_upgrade output.

[root@pg02 ~]# service postgresql start
start postgresql
[root@pg02 ~]# /usr/local/bin/vacuumdb -U postgres --all --analyze-in-stages --missing-stats-only
vacuumdb: processing database "bacula": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "empty": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "fpphorum": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "freebsddiary.org": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "freshports.dev": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "freshports.dvl": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "freshports.stage": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "freshports.test": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "gitea": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "nagiostest": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "samdrucker": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "bacula": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "empty": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "fpphorum": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "freebsddiary.org": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "freshports.dev": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "freshports.dvl": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "freshports.stage": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "freshports.test": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "gitea": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "nagiostest": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "samdrucker": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "bacula": Generating default (full) optimizer statistics
vacuumdb: processing database "empty": Generating default (full) optimizer statistics
vacuumdb: processing database "fpphorum": Generating default (full) optimizer statistics
vacuumdb: processing database "freebsddiary.org": Generating default (full) optimizer statistics
vacuumdb: processing database "freshports.dev": Generating default (full) optimizer statistics
vacuumdb: processing database "freshports.dvl": Generating default (full) optimizer statistics
vacuumdb: processing database "freshports.stage": Generating default (full) optimizer statistics
vacuumdb: processing database "freshports.test": Generating default (full) optimizer statistics
vacuumdb: processing database "gitea": Generating default (full) optimizer statistics
vacuumdb: processing database "nagiostest": Generating default (full) optimizer statistics
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "samdrucker": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics

[root@pg02 ~]# /usr/local/bin/vacuumdb -U postgres --all --analyze-only
vacuumdb: vacuuming database "bacula"
vacuumdb: vacuuming database "empty"
vacuumdb: vacuuming database "fpphorum"
vacuumdb: vacuuming database "freebsddiary.org"
vacuumdb: vacuuming database "freshports.dev"
vacuumdb: vacuuming database "freshports.dvl"
vacuumdb: vacuuming database "freshports.stage"
vacuumdb: vacuuming database "freshports.test"
vacuumdb: vacuuming database "gitea"
vacuumdb: vacuuming database "nagiostest"
vacuumdb: vacuuming database "postgres"
vacuumdb: vacuuming database "samdrucker"
vacuumdb: vacuuming database "template1"
[root@pg02 ~]# 

[root@pg02 ~]# sudo su -l postgres
$ ls -l
total 18
drwx------  19 postgres postgres  25 Feb 21 23:51 data16
drwx------  20 postgres postgres  27 Feb 22 15:28 data18
-rwx------   1 postgres postgres  44 Feb 22 01:34 delete_old_cluster.sh
-rw-------   1 postgres postgres 247 Feb 22 01:35 update_extensions.sql
$ ./delete_old_cluster.sh
$ ls -l
total 10
drwx------  20 postgres postgres  27 Feb 22 15:28 data18
-rwx------   1 postgres postgres  44 Feb 22 01:34 delete_old_cluster.sh
-rw-------   1 postgres postgres 247 Feb 22 01:35 update_extensions.sql
$ 

And more snapshots

After the above processing, the newest snapshots look like this.

data03/pg02/postgres@autosnap_2026-02-22_15:15:09_frequently     0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_15:30:01_frequently  82.4M      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_15:45:11_frequently   761M      -   355G  -

Which makes sense – those numbers reflect the data just deleted by delete_old_cluster.sh.

What’s next

I declare this a decent first attempt. I’m going to try this approach one more time, and if all goes well, target the main server directly instead of taking a copy.


Native FreeBSD Kerberos/LDAP with FreeIPA/IDM

Post by Vermaden via 𝚟𝚎𝚛𝚖𝚊𝚍𝚎𝚗 »

I want to make this clear in the first sentence – because that is where it has the biggest chance of being read – this article is entirely based on work done by Christian Hofstede-Kuhn (Larvitz), who recently wrote Integrating FreeBSD 15 with FreeIPA: Native Kerberos and LDAP Authentication. Credit goes to him. Besides liking to share everything that could be useful – I also treat my blog as a place where I keep and maintain my FreeBSD documentation … and I have seen many blogs and sources of knowledge disappear from the Internet over time … and as I use the free WordPress tier I am sure this blog (and knowledge) should be here long after I am gone.

So as You can see, there are several motivations for this:

  • Keep and maintain a personal version with more code snippets that I can copy/paste quickly.
  • More detailed commands and outputs.
  • Some additional improvements that may be useful – like local console login.

I just hope Christian will not be mad at me for this 🙂

… and I will directly notify him about this article.

Alternatively – if You need to set up an MIT Kerberos server on FreeBSD then use the FreeBSD MIT Kerberos Server article instead.

First of all – this new method is possible because FreeBSD switched from the Heimdal Kerberos implementation to MIT Kerberos in FreeBSD 15.0-RELEASE … and I am really glad that FreeBSD finally did it.

As You know I already messed with that topic several times in the past:

All of these previous attempts had many downsides:

  • You needed to (re)compile multiple custom packages from FreeBSD Ports.
  • Sometimes custom code by Mariusz Zaborski (oshogbo) was needed, for example.
  • The complex sssd(8) daemon came with many dependencies/requirements, including D-Bus, Python, and more.
  • The setup was complicated/fragile and prone to errors – especially during upgrades.

This new way uses MIT Kerberos from FreeBSD 15.0-RELEASE and the small lightweight nslcd(8) daemon from the net/nss-pam-ldapd package. The only (non technical) downside is that it uses the LGPL21/LGPL3 license … but as we connect to an entire Linux domain with FreeIPA/IDM it does not matter much, does it? 🙂

Now – we first need a FreeIPA/IDM server … use the instructions from the older Connect FreeBSD 14.0-STABLE to FreeIPA/IDM article.

Now for the new way … let’s start by switching the pkg(8) repository from quarterly to latest.

FreeBSD # mkdir -p /usr/local/etc/pkg/repos

FreeBSD # sed s/quarterly/latest/g /etc/pkg/FreeBSD.conf > /usr/local/etc/pkg/repos/FreeBSD.conf
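That sed(1) call just rewrites the repository URL. A stand-in demonstration – the sample line below approximates the url entry in /etc/pkg/FreeBSD.conf and is an assumption, not the file’s verbatim contents:

```shell
# Stand-in for the url line found in /etc/pkg/FreeBSD.conf
line='url: "pkg+https://pkg.FreeBSD.org/${ABI}/quarterly"'
switched=$(printf '%s\n' "$line" | sed s/quarterly/latest/g)
echo "$switched"   # the branch part now reads .../latest
```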

Next we will install needed packages.

FreeBSD # pkg install -y nss-pam-ldapd pam_mkhomedir sudo doas

If the DNS configured in /etc/resolv.conf does not resolve the FreeIPA/IDM hostnames, use /etc/hosts instead.

FreeBSD # cat << __EOF >> /etc/hosts
172.27.33.200  rhidm.lab.org   rhidm
172.27.33.215  fbsd15.lab.org  fbsd15
__EOF

Add our new FreeBSD host and its IP address on the FreeIPA/IDM server.

[root@idm ~]# kinit admin
Password for admin@LAB.ORG: 

[root@idm ~]# ipa dnsrecord-add lab.org fbsd15 --a-rec=172.27.33.215 --a-create-reverse
  Record name: fbsd15
  A record: 172.27.33.215

[root@idm ~]# ipa host-add fbsd15.lab.org
---------------------------
Added host "fbsd15.lab.org"
---------------------------
  Host name: fbsd15.lab.org
  Principal name: host/fbsd15.lab.org@LAB.ORG
  Principal alias: host/fbsd15.lab.org@LAB.ORG
  Password: False
  Keytab: False
  Managed by: fbsd15.lab.org

[root@idm ~]# ipa-getkeytab -s rhidm.lab.org -p host/fbsd15.lab.org@LAB.ORG -k /root/fbsd15.keytab
Keytab successfully retrieved and stored in: /root/fbsd15.keytab

[root@idm ~]# scp /root/fbsd15.keytab fbsd15:

On the FreeBSD host take the keytab copied from the FreeIPA/IDM server and put it into the right place with proper permissions.

FreeBSD # cp /root/fbsd15.keytab /etc/krb5.keytab

FreeBSD # chmod 640 /etc/krb5.keytab

Verify FreeBSD keytab.

FreeBSD # klist -k
Keytab name: FILE:/etc/krb5.keytab
KVNO Principal
---- --------------------------------------------------------------------------
   1 host/fbsd15.lab.org@LAB.ORG
   1 host/fbsd15.lab.org@LAB.ORG
   1 host/fbsd15.lab.org@LAB.ORG
   1 host/fbsd15.lab.org@LAB.ORG

The nslcd(8) daemon will need read access to the /etc/krb5.keytab keytab to work – to achieve that we will add the sshd user to the nslcd group.

FreeBSD # groups sshd
sshd

FreeBSD # pw groupmod nslcd -m sshd

FreeBSD # groups sshd
sshd nslcd

Prepare /etc/krb5.conf config.

FreeBSD # cat << __EOF > /etc/krb5.conf
[libdefaults]
  default_realm    = LAB.ORG
  dns_lookup_kdc   = false
  dns_lookup_realm = false

[realms]
  LAB.ORG = {
    kdc          = rhidm.lab.org
    admin_server = rhidm.lab.org
  }

[domain_realm]
  .lab.org = LAB.ORG
   lab.org = LAB.ORG
__EOF
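Before going further it may be worth checking that this Kerberos client config actually reaches the KDC – a quick test, assuming the admin principal used earlier on the FreeIPA/IDM side:

```shell
# Obtain and list a ticket to prove /etc/krb5.conf points
# at a working KDC, then throw the ticket away again.
kinit admin
klist
kdestroy
```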

Create /usr/local/etc/nslcd.conf config for nslcd(8) daemon.

FreeBSD # cat << __EOF > /usr/local/etc/nslcd.conf
# RUN AS nslcd USER 
uid nslcd
gid nslcd

# LDAP CONNECTION DETAILS
uri  ldap://rhidm.lab.org
base dc=lab,dc=org

# USE SYSTEM KEYTAB FOR AUTH
sasl_mech  GSSAPI
sasl_realm LAB.ORG

# FORCE /bin/sh SHELL
map passwd loginShell "/bin/sh"
__EOF

Enable and start the nslcd(8) daemon.

FreeBSD # service nslcd enable
nslcd enabled in /etc/rc.conf

FreeBSD # service nslcd start
Starting nslcd.
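A quick check that the daemon really came up – the socket path below is the nss-pam-ldapd default, so adjust it if your build differs:

```shell
# nslcd must be running and its UNIX socket present,
# otherwise the ldap entries in nsswitch.conf will do nothing.
service nslcd status
ls -l /var/run/nslcd/socket
```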

Modify the /etc/nsswitch.conf config the following way with a simple sed(1) one-liner.

FreeBSD # sed -i '.OLD' -E \
              -e 's/^group:.*/group: files ldap/g'   \
              -e 's/^passwd:.*/passwd: files ldap/g' \
              /etc/nsswitch.conf

This is what we changed.

FreeBSD # diff -u /etc/nsswitch.conf.OLD /etc/nsswitch.conf
--- /etc/nsswitch.conf.OLD 2026-02-18 04:54:41.487608000 +0000
+++ /etc/nsswitch.conf     2026-02-18 04:59:00.234662000 +0000
@@ -1,9 +1,9 @@
-group: compat
+group: files ldap
 group_compat: nis
 hosts: files dns
 netgroup: compat
 networks: files
-passwd: compat
+passwd: files ldap
 passwd_compat: nis
 shells: files
 services: compat

One can use an even more compact /etc/nsswitch.conf as shown by Christian Hofstede-Kuhn (Larvitz) below.

FreeBSD # cat << __EOF > /etc/nsswitch.conf
group: files ldap
passwd: files ldap
hosts: files dns
networks: files
shells: files
services: compat
protocols: files
rpc: files
__EOF

Now let's test how it works.

FreeBSD # id vermaden
uid=854800003(vermaden) gid=854800003(vermaden) groups=854800003(vermaden)
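getent(1) can confirm the same thing through the NSS layer – the username below is obviously just the one from this lab:

```shell
# Resolve a FreeIPA user and group via the nsswitch ldap path.
getent passwd vermaden
getent group vermaden
```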

Now the sshd(8) part.

FreeBSD # cat << __EOF >> /etc/ssh/sshd_config
# KRB5/GSSAPI AUTH
GSSAPIAuthentication      yes
GSSAPICleanupCredentials  yes
GSSAPIStrictAcceptorCheck no
__EOF
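Before restarting it does not hurt to let sshd(8) validate its own config first – it prints nothing and exits zero on success:

```shell
# Test-mode parse of /etc/ssh/sshd_config.
sshd -t && echo OK
```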

Time to restart the sshd(8) daemon.

FreeBSD # service sshd restart
Performing sanity check on sshd configuration.
Stopping sshd.
Waiting for PIDS: 1089.
Performing sanity check on sshd configuration.
Starting sshd.

Now let's test how it works over SSH.

[root@rhidm ~]# kinit vermaden
Password for vermaden@LAB.ORG: 

[root@rhidm ~]# ssh fbsd15 -l vermaden
FreeBSD 15.0-RELEASE-p2 (GENERIC) releng/15.0-n281005-5fb0f8e9e61d

Welcome to FreeBSD!

Release Notes, Errata: https://www.FreeBSD.org/releases/
Security Advisories:   https://www.FreeBSD.org/security/
FreeBSD Handbook:      https://www.FreeBSD.org/handbook/
FreeBSD FAQ:           https://www.FreeBSD.org/faq/
Questions List:        https://www.FreeBSD.org/lists/questions/
FreeBSD Forums:        https://forums.FreeBSD.org/

Documents installed with the system are in the /usr/local/share/doc/freebsd/
directory, or can be installed later with:  pkg install en-freebsd-doc
For other languages, replace "en" with a language code like de or fr.

Show the version of FreeBSD installed:  freebsd-version ; uname -a
Please include that output and any error messages when posting questions.
Introduction to manual pages:  man man
FreeBSD directory layout:      man hier

To change this login announcement, see motd(5).

Could not chdir to home directory /home/vermaden: No such file or directory

vermaden@fbsd15:/ $ id admin
uid=854800000(admin) gid=854800000(admins) groups=854800000(admins)

vermaden@fbsd15:/ $ id
uid=854800003(vermaden) gid=854800003(vermaden) groups=0(wheel),854800003(vermaden)

Works but … the ${HOME} directory is not automatically created because we did not configure it yet.

Let's use sed(1) again … and yes, it has to be spread over two lines.

FreeBSD # sed -i '.OLD' '/^session.*/i\
session optional pam_mkhomedir.so mode=0700' /etc/pam.d/sshd

FreeBSD # ls -l /etc/pam.d/sshd*
-rw-r--r--  1 root wheel 608 Feb 18 05:18 /etc/pam.d/sshd
-rw-r--r--  1 root wheel 564 Feb 18 05:18 /etc/pam.d/sshd.OLD

FreeBSD # diff -u /etc/pam.d/sshd.OLD /etc/pam.d/sshd
--- /etc/pam.d/sshd.OLD 2026-02-18 05:20:50.344139000 +0000
+++ /etc/pam.d/sshd     2026-02-18 05:20:53.552277000 +0000
@@ -16,6 +16,7 @@
 
 # session
 #session       optional        pam_ssh.so              want_agent
+session        optional        pam_mkhomedir.so        mode=0700
 session        required        pam_permit.so
 
 # password

We use optional instead of required so that the login still succeeds if for some reason pam_mkhomedir.so fails or is not available.

For the record, the entire /etc/pam.d/sshd PAM config looks like this.

FreeBSD # cat /etc/pam.d/sshd
# auth
auth            required        pam_unix.so             no_warn try_first_pass

# account
account         required        pam_nologin.so
account         required        pam_login_access.so
account         required        pam_unix.so

# session
session         optional        pam_mkhomedir.so        mode=0700
session         required        pam_permit.so

# password
password        required        pam_unix.so             no_warn try_first_pass

We will now configure sudo(8) to grant additional permissions.

FreeBSD # pw groupmod wheel -m vermaden

FreeBSD # cat << __EOF >> /usr/local/etc/sudoers
%wheel ALL=(ALL:ALL) NOPASSWD: ALL
__EOF
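A broken sudoers file can lock you out of sudo(8) entirely, so a syntax check after appending is cheap insurance:

```shell
# Parse the sudoers file without opening an editor.
visudo -c -f /usr/local/etc/sudoers
```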

We will also set up doas(1) here as it's simpler and more secure.

FreeBSD # cat << __EOF > /usr/local/etc/doas.conf
permit nopass keepenv root   as root
permit nopass keepenv :wheel as root
__EOF
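A one-liner run as a wheel member is enough to confirm the doas(1) rule works:

```shell
# Should print uid=0(root) without asking for a password.
doas id
```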

Now let's try to log in again.

[root@rhidm ~]# kinit vermaden
Password for vermaden@LAB.ORG: 

[root@rhidm ~]# ssh fbsd15 -l vermaden

vermaden@fbsd15:~ $ pwd
/home/vermaden

vermaden@fbsd15:~ $ sudo -i
root@fbsd15:~ #

Better.

I also ‘silenced’ the login a little by creating empty ~/.hushlogin file and by removing /usr/bin/fortune from the ~/.profile file.

vermaden@fbsd15~/ $ :> ~/.hushlogin

vermaden@fbsd15:~ $ sed -i '.OLD' '/fortune/d' ~/.profile

… and this is the part I added – using a FreeIPA/IDM user for local console access – because right now it does not work.

FreeBSD/amd64 (fbsd15.lab.org) (ttyu0)

login: vermaden
Password:
Login incorrect

To allow that we will uncomment all lines matching the pam_krb5.so module within the /etc/pam.d/system config.

FreeBSD # sed -i '.OLD' '/pam_krb5.so/s/^#//g' /etc/pam.d/system

FreeBSD # ls -l /etc/pam.d/system*
-rw-r--r--  1 root wheel 568 Feb 18 05:34 /etc/pam.d/system
-rw-r--r--  1 root wheel 571 Feb 18 05:33 /etc/pam.d/system.OLD

FreeBSD # diff -u /etc/pam.d/system.OLD /etc/pam.d/system
--- /etc/pam.d/system.OLD 2026-02-18 05:33:48.171585000 +0000
+++ /etc/pam.d/system     2026-02-18 05:34:24.444767000 +0000
@@ -4,12 +4,12 @@
 #
 
 # auth
-#auth          sufficient      pam_krb5.so             no_warn try_first_pass
+auth           sufficient      pam_krb5.so             no_warn try_first_pass
 #auth          sufficient      pam_ssh.so              no_warn try_first_pass
 auth           required        pam_unix.so             no_warn try_first_pass nullok
 
 # account
-#account       required        pam_krb5.so
+account        required        pam_krb5.so
 account                required        pam_login_access.so
 account                required        pam_unix.so
 
@@ -19,5 +19,5 @@
 session         required        pam_xdg.so
 
 # password
-#password      sufficient      pam_krb5.so             no_warn try_first_pass
+password       sufficient      pam_krb5.so             no_warn try_first_pass
 password       required        pam_unix.so             no_warn try_first_pass

Let's try again.

FreeBSD/amd64 (fbsd15.lab.org) (ttyu0)

login: vermaden
Password: 

vermaden@fbsd15:~ $ klist
klist: No credentials cache found (filename: /tmp/krb5cc_854800003_AeF9er)

vermaden@fbsd15:~ $ kinit 
Password for vermaden@LAB.ORG: 

vermaden@fbsd15:~ $ klist
Ticket cache: FILE:/tmp/krb5cc_854800003_AeF9er
Default principal: vermaden@LAB.ORG

Valid starting     Expires            Service principal
02/18/26 05:27:47  02/19/26 05:07:21  krbtgt/LAB.ORG@LAB.ORG

You have reached the end of this article – see you in the next one 🙂

EOF

October-December 2025 Status Report

Post by FreeBSD Newsflash via FreeBSD News Flash »

The October to December Status Report is now available with 28 entries.

FreeBSD 14.4-BETA3 Available

Post by FreeBSD Newsflash via FreeBSD News Flash »

The third BETA build for the FreeBSD 14.4 release cycle is now available. ISO images for the amd64, i386, powerpc, powerpc64, powerpc64le, powerpcspe, armv7, aarch64, and riscv64 architectures are available on the FreeBSD mirror sites.

Cleaning up old snapshots: snapshot has dependent clones

Post by Dan Langille via Dan Langille's Other Diary »

I was getting messages like this from sanoid:

cannot destroy 'data02/jails/mysql02.bad@autosnap_2026-02-10_12:00:07_hourly': snapshot has dependent clones
use '-R' to destroy the following datasets:
data02/jails/mysql02@mkjail-202602102333
data02/jails/mysql02@autosnap_2026-02-11_00:00:06_daily
data02/jails/mysql02@autosnap_2026-02-12_00:00:09_daily
data02/jails/mysql02@autosnap_2026-02-13_00:00:01_daily
data02/jails/mysql02@autosnap_2026-02-14_00:00:11_daily
data02/jails/mysql02@autosnap_2026-02-15_00:00:01_daily
data02/jails/mysql02@autosnap_2026-02-16_00:00:02_daily
data02/jails/mysql02@autosnap_2026-02-16_18:00:02_hourly
data02/jails/mysql02@autosnap_2026-02-16_19:00:30_hourly
data02/jails/mysql02@autosnap_2026-02-16_20:00:02_hourly
data02/jails/mysql02@autosnap_2026-02-16_20:00:02_frequently
data02/jails/mysql02@autosnap_2026-02-16_20:15:08_frequently
data02/jails/mysql02@autosnap_2026-02-16_20:30:02_frequently
data02/jails/mysql02@autosnap_2026-02-16_20:45:10_frequently
data02/jails/mysql02@autosnap_2026-02-16_21:00:01_hourly
data02/jails/mysql02@autosnap_2026-02-16_21:00:01_frequently
data02/jails/mysql02@autosnap_2026-02-16_21:15:08_frequently
data02/jails/mysql02@autosnap_2026-02-16_21:30:01_frequently
data02/jails/mysql02@autosnap_2026-02-16_21:45:09_frequently
data02/jails/mysql02@autosnap_2026-02-16_22:00:03_hourly
data02/jails/mysql02@autosnap_2026-02-16_22:00:03_frequently
data02/jails/mysql02@autosnap_2026-02-16_22:15:08_frequently
data02/jails/mysql02@autosnap_2026-02-16_22:30:01_frequently
data02/jails/mysql02@autosnap_2026-02-16_22:45:09_frequently
data02/jails/mysql02@autosnap_2026-02-16_23:00:02_hourly
data02/jails/mysql02@autosnap_2026-02-16_23:00:02_frequently
data02/jails/mysql02@autosnap_2026-02-16_23:15:08_frequently
data02/jails/mysql02@autosnap_2026-02-16_23:30:01_frequently
data02/jails/mysql02@autosnap_2026-02-16_23:45:09_frequently
data02/jails/mysql02@autosnap_2026-02-17_00:00:00_daily
data02/jails/mysql02@autosnap_2026-02-17_00:00:00_hourly
data02/jails/mysql02@autosnap_2026-02-17_00:00:00_frequently
data02/jails/mysql02@autosnap_2026-02-17_00:15:10_frequently
data02/jails/mysql02@autosnap_2026-02-17_00:30:00_frequently
data02/jails/mysql02@autosnap_2026-02-17_00:45:10_frequently
data02/jails/mysql02
could not remove data02/jails/mysql02.bad@autosnap_2026-02-10_12:00:07_hourly : 256 at /usr/local/bin/sanoid line 413.
cannot destroy 'data02/jails/mysql02.bad@autosnap_2026-02-10_13:00:15_hourly': snapshot has dependent clones
use '-R' to destroy the following datasets:
data02/jails/mysql02.bad.part3@autosnap_2026-02-11_00:00:06_daily
data02/jails/mysql02.bad.part3@autosnap_2026-02-12_00:00:09_daily
data02/jails/mysql02.bad.part3@autosnap_2026-02-13_00:00:01_daily
data02/jails/mysql02.bad.part3@autosnap_2026-02-14_00:00:11_daily
data02/jails/mysql02.bad.part3@autosnap_2026-02-15_00:00:01_daily
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_00:00:02_daily
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_19:00:30_hourly
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_20:00:02_hourly
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_21:00:01_hourly
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_21:00:01_frequently
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_21:15:08_frequently
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_21:30:01_frequently
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_21:45:09_frequently
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_22:00:03_hourly
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_22:00:03_frequently
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_22:15:08_frequently
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_22:30:01_frequently
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_22:45:09_frequently
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_23:00:02_hourly
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_23:00:02_frequently
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_23:15:08_frequently
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_23:30:01_frequently
data02/jails/mysql02.bad.part3@autosnap_2026-02-16_23:45:09_frequently
data02/jails/mysql02.bad.part3@autosnap_2026-02-17_00:00:00_daily
data02/jails/mysql02.bad.part3@autosnap_2026-02-17_00:00:00_hourly
data02/jails/mysql02.bad.part3@autosnap_2026-02-17_00:00:00_frequently
data02/jails/mysql02.bad.part3@autosnap_2026-02-17_00:15:10_frequently
data02/jails/mysql02.bad.part3@autosnap_2026-02-17_00:30:00_frequently
data02/jails/mysql02.bad.part3@autosnap_2026-02-17_00:45:10_frequently
data02/jails/mysql02.bad.part3
could not remove data02/jails/mysql02.bad@autosnap_2026-02-10_13:00:15_hourly : 256 at /usr/local/bin/sanoid line 413.

I was sure it was related to my recent zfs clone from when I broke my FreeBSD MySQL jail; I got it working again by using a snapshot.

This is the raw set of commands I ran, presented without commentary.

[1:11 r730-01 dvl ~] % sudo zfs destroy -r data02/jails/mysql02.bad.part3
[1:11 r730-01 dvl ~] % zfs destroy -nrv data02/jails/mysql02.bad.part2
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-11_00:00:06_daily
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-12_00:00:09_daily
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-13_00:00:01_daily
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-14_00:00:11_daily
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-15_00:00:01_daily
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_00:00:02_daily
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_19:00:30_hourly
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_20:00:02_hourly
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_21:00:01_hourly
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_21:00:01_frequently
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_21:15:08_frequently
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_21:30:01_frequently
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_21:45:09_frequently
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_22:00:03_hourly
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_22:00:03_frequently
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_22:15:08_frequently
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_22:30:01_frequently
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_22:45:09_frequently
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_23:00:02_hourly
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_23:00:02_frequently
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_23:15:08_frequently
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_23:30:01_frequently
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_23:45:09_frequently
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-17_00:00:00_daily
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-17_00:00:00_hourly
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-17_00:00:00_frequently
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-17_00:15:10_frequently
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-17_00:30:00_frequently
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-17_00:45:10_frequently
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-17_01:00:02_hourly
would destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-17_01:00:02_frequently
would destroy data02/jails/mysql02.bad.part2
[1:11 r730-01 dvl ~] % sudo zfs destroy -rv data02/jails/mysql02.bad.part2 
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-11_00:00:06_daily
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-12_00:00:09_daily
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-13_00:00:01_daily
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-14_00:00:11_daily
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-15_00:00:01_daily
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_00:00:02_daily
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_19:00:30_hourly
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_20:00:02_hourly
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_21:00:01_hourly
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_21:00:01_frequently
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_21:15:08_frequently
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_21:30:01_frequently
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_21:45:09_frequently
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_22:00:03_hourly
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_22:00:03_frequently
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_22:15:08_frequently
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_22:30:01_frequently
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_22:45:09_frequently
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_23:00:02_hourly
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_23:00:02_frequently
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_23:15:08_frequently
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_23:30:01_frequently
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-16_23:45:09_frequently
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-17_00:00:00_daily
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-17_00:00:00_hourly
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-17_00:00:00_frequently
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-17_00:15:10_frequently
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-17_00:30:00_frequently
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-17_00:45:10_frequently
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-17_01:00:02_hourly
will destroy data02/jails/mysql02.bad.part2@autosnap_2026-02-17_01:00:02_frequently
will destroy data02/jails/mysql02.bad.part2
[1:11 r730-01 dvl ~] % zfs list | grep mysql02
data02/jails/mysql02                                                2.48G  22.3G  7.85G  /jails/mysql02
data02/jails/mysql02.bad                                            3.88G  22.3G  7.70G  /jails/mysql02.bad
[1:11 r730-01 dvl ~] % zfs destroy data02/jails/mysql02.bad
cannot destroy 'data02/jails/mysql02.bad': filesystem has children
use '-r' to destroy the following datasets:
data02/jails/mysql02.bad@autosnap_2026-02-10_12:00:07_hourly
data02/jails/mysql02.bad@autosnap_2026-02-10_13:00:15_hourly
data02/jails/mysql02.bad@mkjail-202602101312
data02/jails/mysql02.bad@autosnap_2026-02-11_00:00:06_daily
data02/jails/mysql02.bad@autosnap_2026-02-12_00:00:09_daily
data02/jails/mysql02.bad@autosnap_2026-02-13_00:00:01_daily
data02/jails/mysql02.bad@autosnap_2026-02-14_00:00:11_daily
data02/jails/mysql02.bad@autosnap_2026-02-15_00:00:01_daily
data02/jails/mysql02.bad@autosnap_2026-02-16_00:00:02_daily
data02/jails/mysql02.bad@autosnap_2026-02-16_19:00:30_hourly
data02/jails/mysql02.bad@autosnap_2026-02-16_20:00:02_hourly
data02/jails/mysql02.bad@autosnap_2026-02-16_21:00:01_hourly
data02/jails/mysql02.bad@autosnap_2026-02-16_21:00:01_frequently
data02/jails/mysql02.bad@autosnap_2026-02-16_21:15:08_frequently
data02/jails/mysql02.bad@autosnap_2026-02-16_21:30:01_frequently
data02/jails/mysql02.bad@autosnap_2026-02-16_21:45:09_frequently
data02/jails/mysql02.bad@autosnap_2026-02-16_22:00:03_hourly
data02/jails/mysql02.bad@autosnap_2026-02-16_22:00:03_frequently
data02/jails/mysql02.bad@autosnap_2026-02-16_22:15:08_frequently
data02/jails/mysql02.bad@autosnap_2026-02-16_22:30:01_frequently
data02/jails/mysql02.bad@autosnap_2026-02-16_22:45:09_frequently
data02/jails/mysql02.bad@autosnap_2026-02-16_23:00:02_hourly
data02/jails/mysql02.bad@autosnap_2026-02-16_23:00:02_frequently
data02/jails/mysql02.bad@autosnap_2026-02-16_23:15:08_frequently
data02/jails/mysql02.bad@autosnap_2026-02-16_23:30:01_frequently
data02/jails/mysql02.bad@autosnap_2026-02-16_23:45:09_frequently
data02/jails/mysql02.bad@autosnap_2026-02-17_00:00:00_daily
data02/jails/mysql02.bad@autosnap_2026-02-17_00:00:00_hourly
data02/jails/mysql02.bad@autosnap_2026-02-17_00:00:00_frequently
data02/jails/mysql02.bad@autosnap_2026-02-17_00:15:10_frequently
data02/jails/mysql02.bad@autosnap_2026-02-17_00:30:00_frequently
data02/jails/mysql02.bad@autosnap_2026-02-17_00:45:10_frequently
data02/jails/mysql02.bad@autosnap_2026-02-17_01:00:02_hourly
data02/jails/mysql02.bad@autosnap_2026-02-17_01:00:02_frequently
[1:11 r730-01 dvl ~] % sudo zpool checkpoint data02
[1:12 r730-01 dvl ~] % zfs destroy -nrv data02/jails/mysql02.bad
cannot destroy 'data02/jails/mysql02.bad': filesystem has dependent clones
use '-R' to destroy the following datasets:
data02/jails/mysql02@mkjail-202602102333
data02/jails/mysql02@autosnap_2026-02-11_00:00:06_daily
data02/jails/mysql02@autosnap_2026-02-12_00:00:09_daily
data02/jails/mysql02@autosnap_2026-02-13_00:00:01_daily
data02/jails/mysql02@autosnap_2026-02-14_00:00:11_daily
data02/jails/mysql02@autosnap_2026-02-15_00:00:01_daily
data02/jails/mysql02@autosnap_2026-02-16_00:00:02_daily
data02/jails/mysql02@autosnap_2026-02-16_19:00:30_hourly
data02/jails/mysql02@autosnap_2026-02-16_20:00:02_hourly
data02/jails/mysql02@autosnap_2026-02-16_21:00:01_hourly
data02/jails/mysql02@autosnap_2026-02-16_21:00:01_frequently
data02/jails/mysql02@autosnap_2026-02-16_21:15:08_frequently
data02/jails/mysql02@autosnap_2026-02-16_21:30:01_frequently
data02/jails/mysql02@autosnap_2026-02-16_21:45:09_frequently
data02/jails/mysql02@autosnap_2026-02-16_22:00:03_hourly
data02/jails/mysql02@autosnap_2026-02-16_22:00:03_frequently
data02/jails/mysql02@autosnap_2026-02-16_22:15:08_frequently
data02/jails/mysql02@autosnap_2026-02-16_22:30:01_frequently
data02/jails/mysql02@autosnap_2026-02-16_22:45:09_frequently
data02/jails/mysql02@autosnap_2026-02-16_23:00:02_hourly
data02/jails/mysql02@autosnap_2026-02-16_23:00:02_frequently
data02/jails/mysql02@autosnap_2026-02-16_23:15:08_frequently
data02/jails/mysql02@autosnap_2026-02-16_23:30:01_frequently
data02/jails/mysql02@autosnap_2026-02-16_23:45:09_frequently
data02/jails/mysql02@autosnap_2026-02-17_00:00:00_daily
data02/jails/mysql02@autosnap_2026-02-17_00:00:00_hourly
data02/jails/mysql02@autosnap_2026-02-17_00:00:00_frequently
data02/jails/mysql02@autosnap_2026-02-17_00:15:10_frequently
data02/jails/mysql02@autosnap_2026-02-17_00:30:00_frequently
data02/jails/mysql02@autosnap_2026-02-17_00:45:10_frequently
data02/jails/mysql02@autosnap_2026-02-17_01:00:02_hourly
data02/jails/mysql02@autosnap_2026-02-17_01:00:02_frequently
data02/jails/mysql02
[1:12 r730-01 dvl ~] % sudo zpool checkpoint -d data02
[1:13 r730-01 dvl ~] % sudo zfs promote data02/jails/mysql02
[1:13 r730-01 dvl ~] % zfs destroy -nrv data02/jails/mysql02.bad
would destroy data02/jails/mysql02.bad@autosnap_2026-02-10_13:00:15_hourly
would destroy data02/jails/mysql02.bad@mkjail-202602101312
would destroy data02/jails/mysql02.bad@autosnap_2026-02-11_00:00:06_daily
would destroy data02/jails/mysql02.bad@autosnap_2026-02-12_00:00:09_daily
would destroy data02/jails/mysql02.bad@autosnap_2026-02-13_00:00:01_daily
would destroy data02/jails/mysql02.bad@autosnap_2026-02-14_00:00:11_daily
would destroy data02/jails/mysql02.bad@autosnap_2026-02-15_00:00:01_daily
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_00:00:02_daily
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_19:00:30_hourly
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_20:00:02_hourly
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_21:00:01_hourly
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_21:00:01_frequently
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_21:15:08_frequently
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_21:30:01_frequently
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_21:45:09_frequently
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_22:00:03_hourly
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_22:00:03_frequently
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_22:15:08_frequently
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_22:30:01_frequently
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_22:45:09_frequently
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_23:00:02_hourly
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_23:00:02_frequently
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_23:15:08_frequently
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_23:30:01_frequently
would destroy data02/jails/mysql02.bad@autosnap_2026-02-16_23:45:09_frequently
would destroy data02/jails/mysql02.bad@autosnap_2026-02-17_00:00:00_daily
would destroy data02/jails/mysql02.bad@autosnap_2026-02-17_00:00:00_hourly
would destroy data02/jails/mysql02.bad@autosnap_2026-02-17_00:00:00_frequently
would destroy data02/jails/mysql02.bad@autosnap_2026-02-17_00:15:10_frequently
would destroy data02/jails/mysql02.bad@autosnap_2026-02-17_00:30:00_frequently
would destroy data02/jails/mysql02.bad@autosnap_2026-02-17_00:45:10_frequently
would destroy data02/jails/mysql02.bad@autosnap_2026-02-17_01:00:02_hourly
would destroy data02/jails/mysql02.bad@autosnap_2026-02-17_01:00:02_frequently
would destroy data02/jails/mysql02.bad
[1:13 r730-01 dvl ~] % sudo zpool checkpoint data02   
[1:13 r730-01 dvl ~] % sudo zfs destroy -rv data02/jails/mysql02.bad 
will destroy data02/jails/mysql02.bad@autosnap_2026-02-10_13:00:15_hourly
will destroy data02/jails/mysql02.bad@mkjail-202602101312
will destroy data02/jails/mysql02.bad@autosnap_2026-02-11_00:00:06_daily
will destroy data02/jails/mysql02.bad@autosnap_2026-02-12_00:00:09_daily
will destroy data02/jails/mysql02.bad@autosnap_2026-02-13_00:00:01_daily
will destroy data02/jails/mysql02.bad@autosnap_2026-02-14_00:00:11_daily
will destroy data02/jails/mysql02.bad@autosnap_2026-02-15_00:00:01_daily
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_00:00:02_daily
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_19:00:30_hourly
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_20:00:02_hourly
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_21:00:01_hourly
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_21:00:01_frequently
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_21:15:08_frequently
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_21:30:01_frequently
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_21:45:09_frequently
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_22:00:03_hourly
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_22:00:03_frequently
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_22:15:08_frequently
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_22:30:01_frequently
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_22:45:09_frequently
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_23:00:02_hourly
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_23:00:02_frequently
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_23:15:08_frequently
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_23:30:01_frequently
will destroy data02/jails/mysql02.bad@autosnap_2026-02-16_23:45:09_frequently
will destroy data02/jails/mysql02.bad@autosnap_2026-02-17_00:00:00_daily
will destroy data02/jails/mysql02.bad@autosnap_2026-02-17_00:00:00_hourly
will destroy data02/jails/mysql02.bad@autosnap_2026-02-17_00:00:00_frequently
will destroy data02/jails/mysql02.bad@autosnap_2026-02-17_00:15:10_frequently
will destroy data02/jails/mysql02.bad@autosnap_2026-02-17_00:30:00_frequently
will destroy data02/jails/mysql02.bad@autosnap_2026-02-17_00:45:10_frequently
will destroy data02/jails/mysql02.bad@autosnap_2026-02-17_01:00:02_hourly
will destroy data02/jails/mysql02.bad@autosnap_2026-02-17_01:00:02_frequently
will destroy data02/jails/mysql02.bad
[1:14 r730-01 dvl ~] % zfs list | grep mysql02
data02/jails/mysql02                                                5.39G  22.3G  7.85G  /jails/mysql02
[1:14 r730-01 dvl ~] % sudo zpool checkpoint -d data02              
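The session above boils down to a reusable pattern: when zfs destroy complains about dependent clones, promote the clone you want to keep so it takes ownership of the shared snapshots, then destroy the old parent. A condensed sketch with hypothetical dataset names:

```shell
# Checkpoint first so the whole operation can be rewound.
zpool checkpoint tank

# Promote the clone we keep; the shared snapshots move to it.
zfs promote tank/jails/keep

# Dry-run the destroy of the old parent, then run it for real.
zfs destroy -nrv tank/jails/bad
zfs destroy -rv  tank/jails/bad

# Discard the checkpoint once everything looks right.
zpool checkpoint -d tank
```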

zpool upgrade blocked by gpart: /dev/da0p1: not enough space

Post by Dan Langille via Dan Langille's Other Diary »

This seems to be inconvenient. Now I have to roll back to that checkpoint.

[20:52 nagios03 dvl ~] % sudo zpool checkpoint zroot
[20:52 nagios03 dvl ~] % sudo zpool upgrade zroot
This system supports ZFS pool feature flags.

Enabled the following features on 'zroot':
  redaction_list_spill
  raidz_expansion
  fast_dedup
  longname
  large_microzap
  block_cloning_endian
  physical_rewrite

Pool 'zroot' has the bootfs property set, you might need to update
the boot code. See gptzfsboot(8) and loader.efi(8) for details.
[20:52 nagios03 dvl ~] % gpart show
=>      34  62984125  da0  GPT  (30G)
        34      2014       - free -  (1.0M)
      2048       345    1  freebsd-boot  (173K)
      2393     66584    2  efi  (33M)
     68977  62914560    3  freebsd-zfs  (30G)
  62983537       622       - free -  (311K)

[20:52 nagios03 dvl ~] % sudo gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
gpart: /dev/da0p1: not enough space
[20:53 nagios03 dvl ~] % 

It seems the solution is to do:

zpool import --rewind-to-checkpoint zroot

The complicating factor: the host in question is an Azure VM.

I’ll come back to this. It’s more than I want to work on today. If the host reboots, it won’t come back: the boot code is wrong.

I think the easiest solution is to attach this drive to another VM and do the above import.
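That plan can be sketched as follows, run from the rescue VM after attaching the disk. The altroot, the -f flag, and the assumption that the rescue VM does not have its own pool named zroot are mine, not from the post:

```shell
# Hypothetical recovery from a second VM: rewind the pool to the checkpoint
# taken before 'zpool upgrade', undoing the feature upgrade so the old
# 173K gptzfsboot can still boot it.
zpool import -f -R /mnt --rewind-to-checkpoint zroot
zpool export zroot    # then reattach the disk to the original VM
```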

But this one worked

This host was upgraded earlier. It has a 512K freebsd-boot partition, and that seems to be big enough.

[21:03 r720-02 dvl ~] % gpart show ada0 ada1
=>       40  234441568  ada0  GPT  (112G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048    4194304     2  freebsd-swap  (2.0G)
    4196352  230244352     3  freebsd-zfs  (110G)
  234440704        904        - free -  (452K)

=>       40  234441568  ada1  GPT  (112G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048    4194304     2  freebsd-swap  (2.0G)
    4196352  230244352     3  freebsd-zfs  (110G)
  234440704        904        - free -  (452K)

[21:03 r720-02 dvl ~] % 
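A hedged reading of why one host failed and the other worked: the two gpart listings above differ in the size of the freebsd-boot partition, and the newer gptzfsboot apparently no longer fits in the smaller one.

```shell
# Sector counts from the gpart output above; these GPT partitions use
# 512-byte sectors.
echo "nagios03: $((345 * 512)) bytes"    # 173K freebsd-boot: too small
echo "r720-02:  $((1024 * 512)) bytes"   # 512K freebsd-boot: fits
```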

Similar hosts

These hosts will probably be OK:

[21:00 aws-1 dvl ~] % gpart show nda0
=>       40  524287920  nda0  GPT  (250G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048  524285912     2  freebsd-zfs  (250G)


[21:05 gw01 dvl ~] % gpart show
=>        40  2000409184  nda0  GPT  (954G)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048   104857600     2  freebsd-swap  (50G)
   104859648  1895548928     3  freebsd-zfs  (904G)
  2000408576         648        - free -  (324K)

=>        40  2000409184  nda1  GPT  (954G)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048   104857600     2  freebsd-swap  (50G)
   104859648  1895548928     3  freebsd-zfs  (904G)
  2000408576         648        - free -  (324K)

[21:07 r730-01 dvl ~] % gpart show ada0 ada1
=>       40  242255584  ada0  GPT  (116G)
         40       2008        - free -  (1.0M)
       2048     409600     1  efi  (200M)
     411648   16777216     2  freebsd-swap  (8.0G)
   17188864  225066760     3  freebsd-zfs  (107G)

=>       40  242255584  ada1  GPT  (116G)
         40       2008        - free -  (1.0M)
       2048     409600     1  efi  (200M)
     411648   16777216     2  freebsd-swap  (8.0G)
   17188864  225066760     3  freebsd-zfs  (107G)


[21:08 r730-03 dvl ~] % gpart show ada0 ada1     
=>       40  937703008  ada0  GPT  (447G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048   67108864     2  freebsd-swap  (32G)
   67110912  870590464     3  freebsd-zfs  (415G)
  937701376       1672        - free -  (836K)

=>       40  937703008  ada1  GPT  (447G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048   67108864     2  freebsd-swap  (32G)
   67110912  870590464     3  freebsd-zfs  (415G)
  937701376       1672        - free -  (836K)


[21:01 tallboy dvl ~] % gpart show ada0 ada1
=>       34  976773101  ada0  GPT  (466G)
         34          6        - free -  (3.0K)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048   16777216     2  freebsd-swap  (8.0G)
   16779264  954204160     3  freebsd-zfs  (455G)
  970983424    5789711        - free -  (2.8G)

=>       34  976773101  ada1  GPT  (466G)
         34          6        - free -  (3.0K)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048   16777216     2  freebsd-swap  (8.0G)
   16779264  954204160     3  freebsd-zfs  (455G)
  970983424    5789711        - free -  (2.8G)


[20:42 zuul dvl ~] % gpart show
=>       40  976773088  ada0  GPT  (466G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048   16777216     2  freebsd-swap  (8.0G)
   16779264  954204160     3  freebsd-zfs  (455G)
  970983424    5789704        - free -  (2.8G)

=>       40  976773088  ada1  GPT  (466G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048   16777216     2  freebsd-swap  (8.0G)
   16779264  954204160     3  freebsd-zfs  (455G)
  970983424    5789704        - free -  (2.8G)



[21:06 x8dtu dvl ~] % gpart show ada0 ada1
=>       40  488397088  ada0  GPT  (233G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048   41943040     2  freebsd-swap  (20G)
   41945088  446451712     3  freebsd-zfs  (213G)
  488396800        328        - free -  (164K)

=>       40  488397088  ada1  GPT  (233G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048   41943040     2  freebsd-swap  (20G)
   41945088  446451712     3  freebsd-zfs  (213G)
  488396800        328        - free -  (164K)


452 4.3.1 Insufficient system storage

Post by Dan Langille via Dan Langille's Other Diary »

This is a long post. There’s a lot of stuff in here. There’s no quick and dirty how-to. It’s a diagnostic record. Hope it helps.

This morning I saw log entries I’ve never noticed before. They seem to have started 9 hours ago. First, this email arrived.

cliff2 is one of two hosts behind cliff:

[7:25 pro05 dvl ~] % host cliff
cliff.int.unixathome.org has address 10.55.0.44
cliff.int.unixathome.org has address 10.55.0.14

In this post:

  • FreeBSD 15.0
  • Jan 18 18:34:48 cliff2 pkg[70008]: postfix upgraded: 3.10.3,1 -> 3.10.6,1
  • the host is r730-01 (that post was created before this host was updated to FreeBSD 15.0)
  • Of the 16 emails received, all were about cliff2, never cliff1.

    Date: Wed, 18 Feb 2026 12:22:30 +0000 (UTC)
    From: Mail Delivery System <MAILER-DAEMON@cliff2.int.unixathome.org>
    To: Postmaster <postmaster@cliff2.int.unixathome.org>
    Subject: Postfix SMTP server: errors from webserver.int.unixathome.org[10.55.0.3]
    Message-Id: <20260218122230.767072C399@cliff2.int.unixathome.org>
    
    Transcript of session follows.
    
     Out: 220 cliff2.int.unixathome.org ESMTP Postfix
     In:  EHLO webserver.int.unixathome.org
     Out: 250-cliff2.int.unixathome.org
     Out: 250-PIPELINING
     Out: 250-SIZE 10485760000
     Out: 250-ETRN
     Out: 250-STARTTLS
     Out: 250-ENHANCEDSTATUSCODES
     Out: 250-8BITMIME
     Out: 250-DSN
     Out: 250-SMTPUTF8
     Out: 250 CHUNKING
     In:  STARTTLS
     Out: 220 2.0.0 Ready to start TLS
     In:  EHLO webserver.int.unixathome.org
     Out: 250-cliff2.int.unixathome.org
     Out: 250-PIPELINING
     Out: 250-SIZE 10485760000
     Out: 250-ETRN
     Out: 250-ENHANCEDSTATUSCODES
     Out: 250-8BITMIME
     Out: 250-DSN
     Out: 250-SMTPUTF8
     Out: 250 CHUNKING
     In:  MAIL FROM:<nagios@webserver.int.unixathome.org>
     Out: 452 4.3.1 Insufficient system storage
    
    Session aborted, reason: lost connection
    
    For other details, see the local mail logfile
    

    There were also a few emails with a different MAIL FROM:, but most were like the above.

    Log entries

    The log entries for the sending host are:

    Feb 18 12:22:30 webserver dma[15199]: new mail from user=nagios uid=181 envelope_from=<nagios@webserver.int.unixathome.org>
    Feb 18 12:22:30 webserver dma[15199]: mail to=<dan@langille.org> queued as 15199.3f8617a420a0
    Feb 18 12:22:30 webserver dma[15199.3f8617a420a0][98813]: <dan@langille.org> trying delivery
    Feb 18 12:22:30 webserver dma[15199.3f8617a420a0][98813]: using smarthost (cliff.int.unixathome.org:25)
    Feb 18 12:22:30 webserver dma[15199.3f8617a420a0][98813]: trying remote delivery to cliff.int.unixathome.org [10.55.0.44] pref 0
    Feb 18 12:22:30 webserver dma[15199.3f8617a420a0][98813]: remote delivery deferred: cliff.int.unixathome.org [10.55.0.44] failed after MAIL FROM: 452 4.3.1 Insufficient system storage
    Feb 18 12:22:30 webserver dma[15199.3f8617a420a0][98813]: trying remote delivery to cliff.int.unixathome.org [10.55.0.14] pref 0
    Feb 18 12:22:30 webserver dma[15199.3f8617a420a0][98813]: <dan@langille.org> delivery successful
    

    On the receiving host (cliff2), I found:

    Feb 18 12:22:02 cliff2 postfix/smtpd[95569]: disconnect from webserver.int.unixathome.org[10.55.0.3] helo=1 quit=1 commands=2
    Feb 18 12:22:30 cliff2 postfix/smtpd[95569]: connect from webserver.int.unixathome.org[10.55.0.3]
    Feb 18 12:22:30 cliff2 postfix/smtpd[95569]: Anonymous TLS connection established from webserver.int.unixathome.org[10.55.0.3]: TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange x25519 server-signature RSA-PSS (2048 bits) server-digest SHA256
    Feb 18 12:22:30 cliff2 postfix/smtpd[95569]: NOQUEUE: reject: MAIL from webserver.int.unixathome.org[10.55.0.3]: 452 4.3.1 Insufficient system storage; proto=ESMTP helo=<webserver.int.unixathome.org>
    Feb 18 12:22:30 cliff2 postfix/smtpd[95569]: warning: not enough free space in mail queue: 15472168960 bytes < 1.5*message size limit
    Feb 18 12:22:30 cliff2 postfix/smtpd[95569]: NOQUEUE: lost connection after MAIL from webserver.int.unixathome.org[10.55.0.3]
    Feb 18 12:22:30 cliff2 postfix/cleanup[98873]: 767072C399: message-id=<20260218122230.767072C399@cliff2.int.unixathome.org>
    Feb 18 12:22:30 cliff2 postfix/smtpd[95569]: disconnect from webserver.int.unixathome.org[10.55.0.3] ehlo=2 starttls=1 mail=0/1 commands=3/4
    Feb 18 12:22:30 cliff2 postfix/qmgr[7445]: 767072C399: from=<double-bounce@cliff2.int.unixathome.org>, size=1277, nrcpt=1 (queue active)
    Feb 18 12:22:30 cliff2 postfix/cleanup[98873]: 79B732C911: message-id=<20260218122230.767072C399@cliff2.int.unixathome.org>
    Feb 18 12:22:30 cliff2 postfix/local[98877]: 767072C399: to=<postmaster@cliff2.int.unixathome.org>, orig_to=<postmaster>, relay=local, delay=0.02, delays=0.01/0.01/0/0, dsn=2.0.0, status=sent (forwarded as 79B732C911)
    Feb 18 12:22:30 cliff2 postfix/qmgr[7445]: 79B732C911: from=<double-bounce@cliff2.int.unixathome.org>, size=1434, nrcpt=1 (queue active)
    Feb 18 12:22:30 cliff2 postfix/qmgr[7445]: 767072C399: removed
    Feb 18 12:22:30 cliff2 postfix/smtp[98880]: Untrusted TLS connection established to smtp.fastmail.com[103.168.172.60]:587: TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange x25519
    Feb 18 12:22:30 cliff2 postfix/smtp[98880]: 79B732C911: to=<dan@langille.org>, orig_to=<postmaster>, relay=smtp.fastmail.com[103.168.172.60]:587, delay=0.36, delays=0/0.01/0.11/0.23, dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as CF51A800C2 ti_phl-compute-08_3001043_1771417350_11 via phl-compute-08)
    Feb 18 12:22:30 cliff2 postfix/qmgr[7445]: 79B732C911: removed
    

    The mail queues on both hosts are empty.

    Is it really a space issue?

    No, it’s a mail configuration issue. See line 5 of the log above: the “not enough free space in mail queue” warning.

    That message repeats:

    [12:35 cliff2 dvl ~] % sudo grep 'warning: not enough free space in mail queue'  /var/log/maillog
    Feb 18 04:46:54 cliff2 postfix/smtpd[11796]: warning: not enough free space in mail queue: 15723220992 bytes < 1.5*message size limit
    Feb 18 04:47:35 cliff2 postfix/smtpd[11796]: warning: not enough free space in mail queue: 15720923136 bytes < 1.5*message size limit
    Feb 18 05:47:10 cliff2 postfix/smtpd[84061]: warning: not enough free space in mail queue: 15652507648 bytes < 1.5*message size limit
    Feb 18 06:31:40 cliff2 postfix/smtpd[3824]: warning: not enough free space in mail queue: 15704023040 bytes < 1.5*message size limit
    Feb 18 06:31:50 cliff2 postfix/smtpd[3824]: warning: not enough free space in mail queue: 15688192000 bytes < 1.5*message size limit
    Feb 18 06:32:20 cliff2 postfix/smtpd[3824]: warning: not enough free space in mail queue: 15684222976 bytes < 1.5*message size limit
    Feb 18 07:35:50 cliff2 postfix/smtpd[90728]: warning: not enough free space in mail queue: 15561146368 bytes < 1.5*message size limit
    Feb 18 07:40:51 cliff2 postfix/smtpd[10376]: warning: not enough free space in mail queue: 15549181952 bytes < 1.5*message size limit
    Feb 18 08:27:51 cliff2 postfix/smtpd[32105]: warning: not enough free space in mail queue: 15599861760 bytes < 1.5*message size limit
    Feb 18 08:27:51 cliff2 postfix/smtpd[38446]: warning: not enough free space in mail queue: 15599861760 bytes < 1.5*message size limit
    Feb 18 08:31:10 cliff2 postfix/smtpd[49465]: warning: not enough free space in mail queue: 15538733056 bytes < 1.5*message size limit
    Feb 18 08:31:41 cliff2 postfix/smtpd[49465]: warning: not enough free space in mail queue: 15529779200 bytes < 1.5*message size limit
    Feb 18 10:22:30 cliff2 postfix/smtpd[2402]: warning: not enough free space in mail queue: 15584747520 bytes < 1.5*message size limit
    Feb 18 11:22:30 cliff2 postfix/smtpd[6039]: warning: not enough free space in mail queue: 15494496256 bytes < 1.5*message size limit
    Feb 18 12:02:02 cliff2 postfix/smtpd[6761]: warning: not enough free space in mail queue: 15571550208 bytes < 1.5*message size limit
    Feb 18 12:22:30 cliff2 postfix/smtpd[95569]: warning: not enough free space in mail queue: 15472168960 bytes < 1.5*message size limit
    [12:35 cliff2 dvl ~] % 
    

    It seems someone wants to send a bigger message. 15GB email? I don’t think so. You’re not sending that. At first, I thought, I’ll just boost the limit. In this case, no. Don’t send me that email.

    I suspect the email is from Nagios. I suspect the email is generated by a notification and for some reason, it has exploded.
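Before chasing the sender, it is worth decoding the warning itself: the 452 is not about the size of any one message. Postfix’s smtpd refuses MAIL FROM when the queue filesystem’s free space falls below 1.5 × message_size_limit, and with the 10485760000-byte limit advertised in the SIZE line above, that floor works out to roughly 15.7 GB. A sketch of the arithmetic:

```shell
# "not enough free space in mail queue: X bytes < 1.5*message size limit"
limit=10485760000                # message_size_limit on cliff2
floor=$((limit * 3 / 2))         # Postfix wants 1.5x the limit free
free=15472168960                 # from the most recent warning line above
echo "floor=${floor} free=${free}"
[ "$free" -lt "$floor" ] && echo "452 4.3.1 Insufficient system storage"
```

So any MAIL FROM, no matter how small the message, is rejected while free space hovers below that floor.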

    Nagios restart

    I’ve discovered that restarting Nagios reproduces the problem.

    Let’s deliver that email locally by commenting out this line from /etc/mail/aliases:

    #root:  dan@langille.org
    

    … and restarting Nagios did not reproduce the issue.

    What is also interesting: I’m seeing similar reports, such as Postfix SMTP server: errors from unifi01.int.unixathome.org[10.55.0.131] – yet the mail log on unifi01 shows nothing corresponding. Logging itself is working:

    [16:38 unifi01 dvl ~] % sudo cat /var/log/maillog    
    Feb 18 00:00:00 unifi01 newsyslog[98880]: logfile turned over
    Feb 18 16:38:45 unifi01 dma[4d136][84888]: new mail from user=dvl uid=1002 envelope_from=<dvl@unifi01.int.unixathome.org>
    Feb 18 16:38:45 unifi01 dma[4d136][84888]: mail to=<dan@langille.org> queued as 4d136.372c11442000
    Feb 18 16:38:45 unifi01 dma[4d136.372c11442000][84890]: <dan@langille.org> trying delivery
    Feb 18 16:38:45 unifi01 dma[4d136.372c11442000][84890]: using smarthost (cliff.int.unixathome.org:25)
    Feb 18 16:38:45 unifi01 dma[4d136.372c11442000][84890]: trying remote delivery to cliff.int.unixathome.org [10.55.0.14] pref 0
    Feb 18 16:38:45 unifi01 dma[4d136.372c11442000][84890]: <dan@langille.org> delivery successful
    

    Yet, cliff2 says mail came from there:

    Feb 18 16:23:41 cliff2 postfix/smtpd[28059]: connect from unifi01.int.unixathome.org[10.55.0.131]
    Feb 18 16:23:41 cliff2 postfix/smtpd[28059]: Anonymous TLS connection established from unifi01.int.unixathome.org[10.55.0.131]: TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange x25519 server-signature RSA-PSS (2048 bits) server-digest SHA256
    Feb 18 16:23:41 cliff2 postfix/smtpd[28059]: NOQUEUE: reject: MAIL from unifi01.int.unixathome.org[10.55.0.131]: 452 4.3.1 Insufficient system storage; proto=ESMTP helo=<unifi01.int.unixathome.org>
    Feb 18 16:23:41 cliff2 postfix/smtpd[28059]: warning: not enough free space in mail queue: 15225958400 bytes < 1.5*message size limit
    Feb 18 16:23:41 cliff2 postfix/cleanup[28068]: EF82B2C5A0: message-id=<20260218162341.EF82B2C5A0@cliff2.int.unixathome.org>
    Feb 18 16:23:41 cliff2 postfix/smtpd[28059]: disconnect from unifi01.int.unixathome.org[10.55.0.131] ehlo=2 starttls=1 mail=0/1 rset=1 quit=1 commands=5/6
    Feb 18 16:23:41 cliff2 postfix/qmgr[7445]: EF82B2C5A0: from=<double-bounce@cliff2.int.unixathome.org>, size=1278, nrcpt=1 (queue active)
    Feb 18 16:23:41 cliff2 postfix/cleanup[28068]: F2A122BB68: message-id=<20260218162341.EF82B2C5A0@cliff2.int.unixathome.org>
    Feb 18 16:23:41 cliff2 postfix/local[28069]: EF82B2C5A0: to=<postmaster@cliff2.int.unixathome.org>, orig_to=<postmaster>, relay=local, delay=0.02, delays=0.01/0.01/0/0, dsn=2.0.0, status=sent (forwarded as F2A122BB68)
    Feb 18 16:23:41 cliff2 postfix/qmgr[7445]: F2A122BB68: from=<double-bounce@cliff2.int.unixathome.org>, size=1435, nrcpt=1 (queue active)
    Feb 18 16:23:41 cliff2 postfix/qmgr[7445]: EF82B2C5A0: removed
    Feb 18 16:23:42 cliff2 postfix/smtp[28070]: Untrusted TLS connection established to smtp.fastmail.com[103.168.172.60]:587: TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange x25519 server-signature RSA-PSS (2048 bits) server-digest SHA256
    Feb 18 16:23:42 cliff2 postfix/smtp[28070]: F2A122BB68: to=<dan@langille.org>, orig_to=<postmaster>, relay=smtp.fastmail.com[103.168.172.60]:587, delay=0.65, delays=0/0.01/0.21/0.42, dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as 8FF3E80104 ti_phl-compute-08_3096974_1771431822_1 via phl-compute-08)
    Feb 18 16:23:42 cliff2 postfix/qmgr[7445]: F2A122BB68: removed
    

    Now I’m wondering: is this a Postfix configuration issue?

    I diffed the postconf -n output of the two hosts:

    [12:00 pro05 dvl ~/tmp] % diff -ruN  cliff1 cliff2
    --- cliff1	2026-02-18 12:00:00
    +++ cliff2	2026-02-18 11:59:20
    @@ -1,4 +1,4 @@
    -[16:59 cliff1 dvl ~] % postconf -n
    +[16:58 cliff2 dvl ~] % postconf -n
     alias_maps = hash:/etc/mail/aliases
     command_directory = /usr/local/sbin
     compatibility_level = 3.6
    @@ -17,7 +17,7 @@
     message_size_limit = 10485760000
     meta_directory = /usr/local/libexec/postfix
     mydestination = $myhostname, localhost.$mydomain, localhost
    -myhostname = cliff1.int.unixathome.org
    +myhostname = cliff2.int.unixathome.org
     mynetworks_style = class
     newaliases_path = /usr/local/bin/newaliases
     queue_directory = /var/spool/postfix
    [12:00 pro05 dvl ~/tmp] % 
    

    Nothing.

    Another source

    Feb 18 18:02:02 dns-hidden-master dma[3ad8b][35058]: new mail from user=logcheck uid=915 envelope_from=<logcheck@dns-hidden-master.int.unixathome.org>
    Feb 18 18:02:02 dns-hidden-master dma[3ad8b][35058]: mail to=<dan@langille.org> queued as 3ad8b.51b887042000
    Feb 18 18:02:02 dns-hidden-master dma[3ad8b.51b887042000][35110]: <dan@langille.org> trying delivery
    Feb 18 18:02:02 dns-hidden-master dma[3ad8b.51b887042000][35110]: using smarthost (cliff.int.unixathome.org:25)
    Feb 18 18:02:02 dns-hidden-master dma[3ad8b.51b887042000][35110]: trying remote delivery to cliff.int.unixathome.org [10.55.0.44] pref 0
    Feb 18 18:02:02 dns-hidden-master dma[3ad8b.51b887042000][35110]: remote delivery deferred: cliff.int.unixathome.org [10.55.0.44] failed after MAIL FROM: 452 4.3.1 Insufficient system storage
    Feb 18 18:02:02 dns-hidden-master dma[3ad8b.51b887042000][35110]: trying remote delivery to cliff.int.unixathome.org [10.55.0.14] pref 0
    

    Then I tried manually from that same source:

    [18:03 dns-hidden-master dvl ~] % echo testing | mail dan@langille.org
    

    The log entries:

    Feb 18 18:03:51 dns-hidden-master dma[39652]: new mail from user=dvl uid=1002 envelope_from=<dvl@dns-hidden-master.int.unixathome.org>
    Feb 18 18:03:51 dns-hidden-master dma[39652]: mail to=<dan@langille.org> queued as 39652.3ee88ac42000
    Feb 18 18:03:51 dns-hidden-master dma[39652.3ee88ac42000][45462]: <dan@langille.org> trying delivery
    Feb 18 18:03:51 dns-hidden-master dma[39652.3ee88ac42000][45462]: using smarthost (cliff.int.unixathome.org:25)
    Feb 18 18:03:51 dns-hidden-master dma[39652.3ee88ac42000][45462]: trying remote delivery to cliff.int.unixathome.org [10.55.0.44] pref 0
    Feb 18 18:03:51 dns-hidden-master dma[39652.3ee88ac42000][45462]: remote delivery deferred: cliff.int.unixathome.org [10.55.0.44] failed after MAIL FROM: 452 4.3.1 Insufficient system storage
    Feb 18 18:03:51 dns-hidden-master dma[39652.3ee88ac42000][45462]: trying remote delivery to cliff.int.unixathome.org [10.55.0.14] pref 0
    Feb 18 18:03:51 dns-hidden-master dma[39652.3ee88ac42000][45462]: <dan@langille.org> delivery successful
    

    So, a very small message is triggering the event. I tried again. It is repeatable.

    Simple telnet

    telnet is a time-honored tool. In the transcript below, the lines I typed are the EHLO and MAIL FROM commands.

    [18:24 mydev dvl ~] % telnet cliff2 25
    Trying 10.55.0.44...
    Connected to cliff2.int.unixathome.org.
    Escape character is '^]'.
    220 cliff2.int.unixathome.org ESMTP Postfix
    EHLO mydev                          
    250-cliff2.int.unixathome.org
    250-PIPELINING
    250-SIZE 10485760000
    250-ETRN
    250-STARTTLS
    250-ENHANCEDSTATUSCODES
    250-8BITMIME
    250-DSN
    250-SMTPUTF8
    250 CHUNKING
    MAIL FROM:dan@langille.org
    452 4.3.1 Insufficient system storage
    

    So, that’s the same as the stuff at the top of this post, apart from STARTTLS, which I was not prepared to do from within telnet.

    I tried this with cliff1 – flawless. All good there.

    Restarting postfix, then the jail

    I’m sure it’s not space:

    [16:54 r730-01 dvl ~] % zpool list
    NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    data01  5.81T  6.36G  5.81T        -         -     2%     0%  1.00x    ONLINE  -
    data02   928G   705G   223G        -         -    67%    75%  1.00x    ONLINE  -
    data03  7.25T  1.30T  5.95T        -         -    49%    17%  1.00x    ONLINE  -
    data04  29.1T  6.11T  23.0T        -         -     0%    21%  1.00x    ONLINE  -
    zroot    107G  47.9G  59.1G        -         -    54%    44%  1.00x    ONLINE  -
    [18:30 r730-01 dvl ~] % zfs list | grep cliff2
    data02/jails/cliff2                                                 4.00G  13.6G  2.46G  /jails/cliff2
    

    It’s all just one filesystem in there:

    [18:36 cliff2 dvl ~] % zfs list
    no datasets available
    [18:36 cliff2 dvl ~] % df -h
    Filesystem             Size    Used   Avail Capacity  Mounted on
    data02/jails/cliff2     16G    2.5G     14G    15%    /
    

    Let’s try:

    [18:30 cliff2 dvl ~] % sudo service postfix restart
    postfix/postfix-script: stopping the Postfix mail system
    postfix/postfix-script: starting the Postfix mail system
    [18:30 cliff2 dvl ~] % 
    

    That didn’t help.

    I tried a jail restart, did not help:

    [18:31 r730-01 dvl ~] % sudo service jail restart cliff2
    Stopping jails: cliff2.
    Starting jails: cliff2.
    

    Let’s try snapshots:

    [18:37 r730-01 dvl ~] % zfs list -r -t snapshot data02/jails/cliff2 | grep -v @autosnap
    NAME                                                          USED  AVAIL  REFER  MOUNTPOINT
    data02/jails/cliff2@mkjail-202509051453                       725M      -  2.24G  -
    data02/jails/cliff2@mkjail-202602101242                       716M      -  2.38G  -
    

    Let’s destroy those two:

    [18:37 r730-01 dvl ~] % sudo zfs destroy data02/jails/cliff2@mkjail-202509051453
    [18:37 r730-01 dvl ~] % sudo zfs destroy data02/jails/cliff2@mkjail-202602101242
    

    Umm, that seems to have fixed it:

    [18:39 mydev dvl ~] % telnet cliff2 25
    Trying 10.55.0.44...
    Connected to cliff2.int.unixathome.org.
    Escape character is '^]'.
    220 cliff2.int.unixathome.org ESMTP Postfix
    EHLO mydev
    250-cliff2.int.unixathome.org
    250-PIPELINING
    250-SIZE 10485760000
    250-ETRN
    250-STARTTLS
    250-ENHANCEDSTATUSCODES
    250-8BITMIME
    250-DSN
    250-SMTPUTF8
    250 CHUNKING
    MAIL FROM:dan@langille.org
    250 2.1.0 Ok
    

    WTF?

    [18:39 r730-01 dvl ~] % zfs list data02/jails/cliff2
    NAME                  USED  AVAIL  REFER  MOUNTPOINT
    data02/jails/cliff2  2.53G  15.0G  2.46G  /jails/cliff2
    [18:39 r730-01 dvl ~] % zpool list data02
    NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    data02   928G   704G   224G        -         -    66%    75%  1.00x    ONLINE  -
    [18:40 r730-01 dvl ~] % 
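For what it’s worth, the before-and-after numbers bracket the 1.5 × message_size_limit floor that Postfix enforces, which would explain why destroying the two snapshots “fixed” it. A hedged back-of-the-envelope, treating zfs list’s G figures as GiB:

```shell
floor=$((10485760000 * 3 / 2))      # 1.5 x message_size_limit, in bytes
before=$((136 * 1073741824 / 10))   # 13.6 GiB AVAIL before the destroys
after=$((150 * 1073741824 / 10))    # 15.0 GiB AVAIL after
echo "floor=${floor} before=${before} after=${after}"
[ "$before" -lt "$floor" ] && [ "$floor" -lt "$after" ] && echo "only the after figure clears the floor"
```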
    

    No quota or reservation

    The pool had 223G free at the start of this.

    Let’s look around…

    [18:40 r730-01 dvl ~] % zfs get all data02/jails/cliff2 | grep quota
    data02/jails/cliff2  quota                   none                      default
    data02/jails/cliff2  refquota                none                      default
    data02/jails/cliff2  defaultuserquota        0                         -
    data02/jails/cliff2  defaultgroupquota       0                         -
    data02/jails/cliff2  defaultprojectquota     0                         -
    data02/jails/cliff2  defaultuserobjquota     0                         -
    data02/jails/cliff2  defaultgroupobjquota    0                         -
    data02/jails/cliff2  defaultprojectobjquota  0                         -
    [18:41 r730-01 dvl ~] % zfs get all data02/jails | grep quota 
    data02/jails  quota                   none                      default
    data02/jails  refquota                none                      default
    data02/jails  defaultuserquota        0                         -
    data02/jails  defaultgroupquota       0                         -
    data02/jails  defaultprojectquota     0                         -
    data02/jails  defaultuserobjquota     0                         -
    data02/jails  defaultgroupobjquota    0                         -
    data02/jails  defaultprojectobjquota  0                         -
    [18:41 r730-01 dvl ~] % zfs get all data02 | grep quota 
    data02  quota                   none                      default
    data02  refquota                none                      default
    data02  defaultuserquota        0                         -
    data02  defaultgroupquota       0                         -
    data02  defaultprojectquota     0                         -
    data02  defaultuserobjquota     0                         -
    data02  defaultgroupobjquota    0                         -
    data02  defaultprojectobjquota  0                         -
    [18:41 r730-01 dvl ~] % zfs get all data02 | grep reserve
    [18:41 r730-01 dvl ~] % zfs get all data02 | grep res    
    data02  compressratio           1.79x                     -
    data02  reservation             none                      default
    data02  compression             zstd                      received
    data02  aclinherit              restricted                default
    data02  sharesmb                off                       default
    data02  refreservation          none                      default
    data02  usedbyrefreservation    0B                        -
    data02  refcompressratio        1.00x                     -
    [18:41 r730-01 dvl ~] % zfs get all data02/jails/cliff2 | grep reservation
    data02/jails/cliff2  reservation             none                      default
    data02/jails/cliff2  refreservation          none                      default
    data02/jails/cliff2  usedbyrefreservation    0B                        -
    [18:42 r730-01 dvl ~] % zfs get all data02/jails | grep reservation 
    data02/jails  reservation             none                      default
    data02/jails  refreservation          none                      default
    data02/jails  usedbyrefreservation    0B                        -
    [18:42 r730-01 dvl ~] % zfs get all data02 | grep reservation 
    data02  reservation             none                      default
    data02  refreservation          none                      default
    data02  usedbyrefreservation    0B                        -
    [18:42 r730-01 dvl ~] % 
    

    So, what was blocking this?

    zpool status

    As an afterthought:

    [18:42 r730-01 dvl ~] % zpool status
      pool: data01
     state: ONLINE
    status: Some supported and requested features are not enabled on the pool.
    	The pool can still be used, but some features are unavailable.
    action: Enable all features using 'zpool upgrade'. Once this is done,
    	the pool may no longer be accessible by software that does not support
    	the features. See zpool-features(7) for details.
      scan: scrub repaired 0B in 00:00:06 with 0 errors on Thu Feb 12 03:53:00 2026
    config:
    
    	NAME                  STATE     READ WRITE CKSUM
    	data01                ONLINE       0     0     0
    	  raidz2-0            ONLINE       0     0     0
    	    gpt/Y7P0A022TEVE  ONLINE       0     0     0
    	    gpt/Y7P0A02ATEVE  ONLINE       0     0     0
    	    gpt/Y7P0A02DTEVE  ONLINE       0     0     0
    	    gpt/Y7P0A02GTEVE  ONLINE       0     0     0
    	    gpt/Y7P0A02LTEVE  ONLINE       0     0     0
    	    gpt/Y7P0A02MTEVE  ONLINE       0     0     0
    	    gpt/Y7P0A02QTEVE  ONLINE       0     0     0
    	    gpt/Y7P0A033TEVE  ONLINE       0     0     0
    
    errors: No known data errors
    
      pool: data02
     state: ONLINE
    status: Some supported and requested features are not enabled on the pool.
    	The pool can still be used, but some features are unavailable.
    action: Enable all features using 'zpool upgrade'. Once this is done,
    	the pool may no longer be accessible by software that does not support
    	the features. See zpool-features(7) for details.
      scan: scrub repaired 0B in 00:08:44 with 0 errors on Wed Feb 18 04:03:38 2026
    config:
    
    	NAME                     STATE     READ WRITE CKSUM
    	data02                   ONLINE       0     0     0
    	  mirror-0               ONLINE       0     0     0
    	    gpt/S6WSNJ0T208743F  ONLINE       0     0     0
    	    gpt/S6WSNJ0T207774T  ONLINE       0     0     0
    
    errors: No known data errors
    
      pool: data03
     state: ONLINE
    status: Some supported and requested features are not enabled on the pool.
    	The pool can still be used, but some features are unavailable.
    action: Enable all features using 'zpool upgrade'. Once this is done,
    	the pool may no longer be accessible by software that does not support
    	the features. See zpool-features(7) for details.
      scan: scrub repaired 0B in 00:49:32 with 0 errors on Thu Feb 12 04:42:48 2026
    config:
    
    	NAME                     STATE     READ WRITE CKSUM
    	data03                   ONLINE       0     0     0
    	  mirror-0               ONLINE       0     0     0
    	    gpt/WD_22492H800867  ONLINE       0     0     0
    	    gpt/WD_230151801284  ONLINE       0     0     0
    	  mirror-1               ONLINE       0     0     0
    	    gpt/WD_230151801478  ONLINE       0     0     0
    	    gpt/WD_230151800473  ONLINE       0     0     0
    
    errors: No known data errors
    
      pool: data04
     state: ONLINE
    status: Some supported and requested features are not enabled on the pool.
    	The pool can still be used, but some features are unavailable.
    action: Enable all features using 'zpool upgrade'. Once this is done,
    	the pool may no longer be accessible by software that does not support
    	the features. See zpool-features(7) for details.
      scan: scrub repaired 0B in 00:48:58 with 0 errors on Wed Feb 18 04:44:04 2026
    config:
    
    	NAME                     STATE     READ WRITE CKSUM
    	data04                   ONLINE       0     0     0
    	  raidz2-0               ONLINE       0     0     0
    	    gpt/S7KGNU0Y722875X  ONLINE       0     0     0
    	    gpt/S7KGNU0Y915666E  ONLINE       0     0     0
    	    gpt/S7KGNU0Y912937J  ONLINE       0     0     0
    	    gpt/S7KGNU0Y912955D  ONLINE       0     0     0
    	    gpt/S7U8NJ0Y716854P  ONLINE       0     0     0
    	    gpt/S7U8NJ0Y716801F  ONLINE       0     0     0
    	    gpt/S757NS0Y700758M  ONLINE       0     0     0
    	    gpt/S757NS0Y700760R  ONLINE       0     0     0
    
    errors: No known data errors
    
      pool: zroot
     state: ONLINE
    status: Some supported and requested features are not enabled on the pool.
    	The pool can still be used, but some features are unavailable.
    action: Enable all features using 'zpool upgrade'. Once this is done,
    	the pool may no longer be accessible by software that does not support
    	the features. See zpool-features(7) for details.
      scan: scrub repaired 0B in 00:02:22 with 0 errors on Tue Feb 17 04:03:50 2026
    config:
    
    	NAME                               STATE     READ WRITE CKSUM
    	zroot                              ONLINE       0     0     0
    	  mirror-0                         ONLINE       0     0     0
    	    gpt/zfs0_20170718AA0000185556  ONLINE       0     0     0
    	    gpt/zfs1_20170719AA1178164201  ONLINE       0     0     0
    
    errors: No known data errors
    

    Cause found

    The cause has been found: another dataset, with a reservation. I raised the problem during the ZFS production users call, where they asked me a few questions and had me try a few things.

    First, write some data.

    [22:25 cliff2 dvl ~] % sudo dd if=/dev/random of=/tmp/delete=me bs=1M count=20000 status=progress
    dd: /tmp/delete=me: No space left on device
    
    16172+0 records in
    16171+1 records out
    16956915712 bytes transferred in 79.099811 secs (214373656 bytes/sec)
    

    OK, it filled up. Next, we looked at this.

    [22:31 r730-01 dvl ~] % zpool status data02
      pool: data02
     state: ONLINE
    status: Some supported and requested features are not enabled on the pool.
    	The pool can still be used, but some features are unavailable.
    action: Enable all features using 'zpool upgrade'. Once this is done,
    	the pool may no longer be accessible by software that does not support
    	the features. See zpool-features(7) for details.
      scan: scrub repaired 0B in 00:08:44 with 0 errors on Wed Feb 18 04:03:38 2026
    config:
    
    	NAME                     STATE     READ WRITE CKSUM
    	data02                   ONLINE       0     0     0
    	  mirror-0               ONLINE       0     0     0
    	    gpt/S6WSNJ0T208743F  ONLINE       0     0     0
    	    gpt/S6WSNJ0T207774T  ONLINE       0     0     0
    
    errors: No known data errors
    [22:31 r730-01 dvl ~] % zfs list data02/jails
    NAME           USED  AVAIL  REFER  MOUNTPOINT
    data02/jails   363G     0B  9.54G  /jails
    [22:32 r730-01 dvl ~] % zfs list -r data02
    NAME                                                                 USED  AVAIL  REFER  MOUNTPOINT
    data02                                                               900G     0B    96K  none
    data02/freshports                                                    294G     0B    88K  none
    data02/freshports/dev-ingress01                                      228G     0B    88K  none
    data02/freshports/dev-ingress01/dvl-src                              197G     0B   197G  /jails/dev-ingress01/usr/home/dvl/src
    data02/freshports/dev-ingress01/freshports                          22.5G     0B  2.06G  /jails/dev-ingress01/var/db/freshports
    data02/freshports/dev-ingress01/freshports/cache                    2.11M     0B   132K  /jails/dev-ingress01/var/db/freshports/cache
    data02/freshports/dev-ingress01/freshports/cache/html               1.88M     0B  1.88M  /jails/dev-ingress01/var/db/freshports/cache/html
    data02/freshports/dev-ingress01/freshports/cache/spooling            104K     0B   104K  /jails/dev-ingress01/var/db/freshports/cache/spooling
    data02/freshports/dev-ingress01/freshports/message-queues           20.5G     0B  11.7M  /jails/dev-ingress01/var/db/freshports/message-queues
    data02/freshports/dev-ingress01/freshports/message-queues/archive   20.4G     0B  11.3G  /jails/dev-ingress01/var/db/freshports/message-queues/archive
    data02/freshports/dev-ingress01/ingress                             5.46G     0B   132K  /jails/dev-ingress01/var/db/ingress
    data02/freshports/dev-ingress01/ingress/latest_commits               528K     0B   108K  /jails/dev-ingress01/var/db/ingress/latest_commits
    data02/freshports/dev-ingress01/ingress/message-queues              1.44M     0B   628K  /jails/dev-ingress01/var/db/ingress/message-queues
    data02/freshports/dev-ingress01/ingress/repos                       5.45G     0B   120K  /jails/dev-ingress01/var/db/ingress/repos
    data02/freshports/dev-ingress01/ingress/repos/doc                    525M     0B   522M  /jails/dev-ingress01/var/db/ingress/repos/doc
    data02/freshports/dev-ingress01/ingress/repos/ports                 2.28G     0B  2.27G  /jails/dev-ingress01/var/db/ingress/repos/ports
    data02/freshports/dev-ingress01/ingress/repos/src                   2.66G     0B  2.65G  /jails/dev-ingress01/var/db/ingress/repos/src
    data02/freshports/dev-ingress01/jails                               3.00G     0B   104K  /jails/dev-ingress01/jails
    data02/freshports/dev-ingress01/jails/freshports                    3.00G     0B   405M  /jails/dev-ingress01/jails/freshports
    data02/freshports/dev-ingress01/jails/freshports/ports              2.61G     0B  2.61G  /jails/dev-ingress01/jails/freshports/usr/ports
    data02/freshports/dev-ingress01/modules                             4.38M     0B  4.38M  /jails/dev-ingress01/usr/local/lib/perl5/site_perl/FreshPorts
    data02/freshports/dev-ingress01/scripts                             3.30M     0B  3.30M  /jails/dev-ingress01/usr/local/libexec/freshports
    data02/freshports/dev-nginx01                                       54.6M     0B    96K  none
    data02/freshports/dev-nginx01/www                                   54.5M     0B    96K  /jails/dev-nginx01/usr/local/www
    data02/freshports/dev-nginx01/www/freshports                        51.7M     0B  51.7M  /jails/dev-nginx01/usr/local/www/freshports
    data02/freshports/dev-nginx01/www/freshsource                       2.71M     0B  2.71M  /jails/dev-nginx01/usr/local/www/freshsource
    data02/freshports/dvl-ingress01                                     18.9G     0B    96K  none
    data02/freshports/dvl-ingress01/dvl-src                             80.3M     0B  80.3M  /jails/dvl-ingress01/usr/home/dvl/src
    data02/freshports/dvl-ingress01/freshports                          4.38G     0B    96K  /jails/dvl-ingress01/var/db/freshports
    data02/freshports/dvl-ingress01/freshports/cache                    2.31M     0B    96K  /jails/dvl-ingress01/var/db/freshports/cache
    data02/freshports/dvl-ingress01/freshports/cache/html               2.01M     0B  1.93M  /jails/dvl-ingress01/var/db/freshports/cache/html
    data02/freshports/dvl-ingress01/freshports/cache/spooling            208K     0B   208K  /jails/dvl-ingress01/var/db/freshports/cache/spooling
    data02/freshports/dvl-ingress01/freshports/message-queues           4.38G     0B  16.8M  /jails/dvl-ingress01/var/db/freshports/message-queues
    data02/freshports/dvl-ingress01/freshports/message-queues/archive   4.37G     0B  4.37G  /jails/dvl-ingress01/var/db/freshports/message-queues/archive
    data02/freshports/dvl-ingress01/ingress                             8.60G     0B   140K  /jails/dvl-ingress01/var/db/ingress
    data02/freshports/dvl-ingress01/ingress/latest_commits               100K     0B   100K  /jails/dvl-ingress01/var/db/ingress/latest_commits
    data02/freshports/dvl-ingress01/ingress/message-queues               160K     0B   160K  /jails/dvl-ingress01/var/db/ingress/message-queues
    data02/freshports/dvl-ingress01/ingress/repos                       8.60G     0B   112K  /jails/dvl-ingress01/var/db/ingress/repos
    data02/freshports/dvl-ingress01/ingress/repos/doc                    954M     0B   520M  /jails/dvl-ingress01/var/db/ingress/repos/doc
    data02/freshports/dvl-ingress01/ingress/repos/ports                 3.43G     0B  2.22G  /jails/dvl-ingress01/var/db/ingress/repos/ports
    data02/freshports/dvl-ingress01/ingress/repos/src                   4.24G     0B  2.56G  /jails/dvl-ingress01/var/db/ingress/repos/src
    data02/freshports/dvl-ingress01/jails                               5.83G     0B   104K  /jails/dvl-ingress01/jails
    data02/freshports/dvl-ingress01/jails/freshports                    5.83G     0B   404M  /jails/dvl-ingress01/jails/freshports
    data02/freshports/dvl-ingress01/jails/freshports/ports              5.43G     0B  2.64G  /jails/dvl-ingress01/jails/freshports/usr/ports
    data02/freshports/dvl-ingress01/modules                             2.67M     0B  2.67M  /jails/dvl-ingress01/usr/local/lib/perl5/site_perl/FreshPorts
    data02/freshports/dvl-ingress01/scripts                             2.34M     0B  2.34M  /jails/dvl-ingress01/usr/local/libexec/freshports
    data02/freshports/dvl-nginx01                                       22.2M     0B    96K  none
    data02/freshports/dvl-nginx01/www                                   22.1M     0B    96K  none
    data02/freshports/dvl-nginx01/www/freshports                        20.2M     0B  20.2M  /jails/dvl-nginx01/usr/local/www/freshports
    data02/freshports/dvl-nginx01/www/freshsource                       1.78M     0B  1.78M  /jails/dvl-nginx01/usr/local/www/freshsource
    data02/freshports/jailed                                            3.97G     0B    96K  none
    data02/freshports/jailed/dev-ingress01                                96K     0B    96K  none
    data02/freshports/jailed/dev-nginx01                                1.36G     0B    96K  none
    data02/freshports/jailed/dev-nginx01/cache                          1.36G     0B    96K  /var/db/freshports/cache
    data02/freshports/jailed/dev-nginx01/cache/categories               1.28M     0B  1.20M  /var/db/freshports/cache/categories
    data02/freshports/jailed/dev-nginx01/cache/commits                    96K     0B    96K  /var/db/freshports/cache/commits
    data02/freshports/jailed/dev-nginx01/cache/daily                    12.0M     0B  11.9M  /var/db/freshports/cache/daily
    data02/freshports/jailed/dev-nginx01/cache/general                  4.38M     0B  4.30M  /var/db/freshports/cache/general
    data02/freshports/jailed/dev-nginx01/cache/news                      184K     0B    96K  /var/db/freshports/cache/news
    data02/freshports/jailed/dev-nginx01/cache/packages                 5.19M     0B  5.10M  /var/db/freshports/cache/packages
    data02/freshports/jailed/dev-nginx01/cache/pages                      96K     0B    96K  /var/db/freshports/cache/pages
    data02/freshports/jailed/dev-nginx01/cache/ports                    1.34G     0B  1.34G  /var/db/freshports/cache/ports
    data02/freshports/jailed/dev-nginx01/cache/spooling                  224K     0B   120K  /var/db/freshports/cache/spooling
    data02/freshports/jailed/dvl-ingress01                               192K     0B    96K  none
    data02/freshports/jailed/dvl-ingress01/distfiles                      96K     0B    96K  none
    data02/freshports/jailed/dvl-nginx01                                1.56M     0B    96K  none
    data02/freshports/jailed/dvl-nginx01/cache                          1.37M     0B   148K  /var/db/freshports/cache
    data02/freshports/jailed/dvl-nginx01/cache/categories                 96K     0B    96K  /var/db/freshports/cache/categories
    data02/freshports/jailed/dvl-nginx01/cache/commits                    96K     0B    96K  /var/db/freshports/cache/commits
    data02/freshports/jailed/dvl-nginx01/cache/daily                      96K     0B    96K  /var/db/freshports/cache/daily
    data02/freshports/jailed/dvl-nginx01/cache/general                    96K     0B    96K  /var/db/freshports/cache/general
    data02/freshports/jailed/dvl-nginx01/cache/news                      176K     0B    96K  /var/db/freshports/cache/news
    data02/freshports/jailed/dvl-nginx01/cache/packages                   96K     0B    96K  /var/db/freshports/cache/packages
    data02/freshports/jailed/dvl-nginx01/cache/pages                      96K     0B    96K  /var/db/freshports/cache/pages
    data02/freshports/jailed/dvl-nginx01/cache/ports                     208K     0B   128K  /var/db/freshports/cache/ports
    data02/freshports/jailed/dvl-nginx01/cache/spooling                  200K     0B   120K  /var/db/freshports/cache/spooling
    data02/freshports/jailed/dvl-nginx01/freshports                       96K     0B    96K  none
    data02/freshports/jailed/stage-ingress01                             192K     0B    96K  none
    data02/freshports/jailed/stage-ingress01/data                         96K     0B    96K  none
    data02/freshports/jailed/stage-nginx01                              1.62G     0B    96K  none
    data02/freshports/jailed/stage-nginx01/cache                        1.62G     0B   248K  /var/db/freshports/cache
    data02/freshports/jailed/stage-nginx01/cache/categories             2.53M     0B  2.44M  /var/db/freshports/cache/categories
    data02/freshports/jailed/stage-nginx01/cache/commits                  96K     0B    96K  /var/db/freshports/cache/commits
    data02/freshports/jailed/stage-nginx01/cache/daily                  8.57M     0B  8.48M  /var/db/freshports/cache/daily
    data02/freshports/jailed/stage-nginx01/cache/general                5.54M     0B  5.45M  /var/db/freshports/cache/general
    data02/freshports/jailed/stage-nginx01/cache/news                    184K     0B    96K  /var/db/freshports/cache/news
    data02/freshports/jailed/stage-nginx01/cache/packages               13.4M     0B  13.3M  /var/db/freshports/cache/packages
    data02/freshports/jailed/stage-nginx01/cache/pages                    96K     0B    96K  /var/db/freshports/cache/pages
    data02/freshports/jailed/stage-nginx01/cache/ports                  1.59G     0B  1.59G  /var/db/freshports/cache/ports
    data02/freshports/jailed/stage-nginx01/cache/spooling                232K     0B   120K  /var/db/freshports/cache/spooling
    data02/freshports/jailed/test-ingress01                              192K     0B    96K  none
    data02/freshports/jailed/test-ingress01/data                          96K     0B    96K  none
    data02/freshports/jailed/test-nginx01                               1008M     0B    96K  none
    data02/freshports/jailed/test-nginx01/cache                         1008M     0B   236K  /var/db/freshports/cache
    data02/freshports/jailed/test-nginx01/cache/categories              2.32M     0B  2.24M  /var/db/freshports/cache/categories
    data02/freshports/jailed/test-nginx01/cache/commits                   96K     0B    96K  /var/db/freshports/cache/commits
    data02/freshports/jailed/test-nginx01/cache/daily                   12.4M     0B  12.3M  /var/db/freshports/cache/daily
    data02/freshports/jailed/test-nginx01/cache/general                 3.47M     0B  3.36M  /var/db/freshports/cache/general
    data02/freshports/jailed/test-nginx01/cache/news                     184K     0B    96K  /var/db/freshports/cache/news
    data02/freshports/jailed/test-nginx01/cache/packages                4.51M     0B  4.43M  /var/db/freshports/cache/packages
    data02/freshports/jailed/test-nginx01/cache/pages                     96K     0B    96K  /var/db/freshports/cache/pages
    data02/freshports/jailed/test-nginx01/cache/ports                    985M     0B   985M  /var/db/freshports/cache/ports
    data02/freshports/jailed/test-nginx01/cache/spooling                 232K     0B   120K  /var/db/freshports/cache/spooling
    data02/freshports/stage-ingress01                                   19.1G     0B    96K  none
    data02/freshports/stage-ingress01/cache                             2.14M     0B    96K  /jails/stage-ingress01/var/db/freshports/cache
    data02/freshports/stage-ingress01/cache/html                        1.94M     0B  1.86M  /jails/stage-ingress01/var/db/freshports/cache/html
    data02/freshports/stage-ingress01/cache/spooling                     104K     0B   104K  /jails/stage-ingress01/var/db/freshports/cache/spooling
    data02/freshports/stage-ingress01/freshports                        10.7G     0B    96K  none
    data02/freshports/stage-ingress01/freshports/archive                10.7G     0B  10.7G  /jails/stage-ingress01/var/db/freshports/message-queues/archive
    data02/freshports/stage-ingress01/freshports/message-queues         9.96M     0B  7.89M  /jails/stage-ingress01/var/db/freshports/message-queues
    data02/freshports/stage-ingress01/ingress                           5.33G     0B    96K  /jails/stage-ingress01/var/db/ingress
    data02/freshports/stage-ingress01/ingress/latest_commits             404K     0B   100K  /jails/stage-ingress01/var/db/ingress/latest_commits
    data02/freshports/stage-ingress01/ingress/message-queues            1012K     0B   180K  /jails/stage-ingress01/var/db/ingress/message-queues
    data02/freshports/stage-ingress01/ingress/repos                     5.33G     0B  5.31G  /jails/stage-ingress01/var/db/ingress/repos
    data02/freshports/stage-ingress01/jails                              405M     0B   104K  /jails/stage-ingress01/jails
    data02/freshports/stage-ingress01/jails/freshports                   404M     0B   404M  /jails/stage-ingress01/jails/freshports
    data02/freshports/stage-ingress01/ports                             2.63G     0B  2.63G  /jails/stage-ingress01/jails/freshports/usr/ports
    data02/freshports/test-ingress01                                    23.9G     0B    96K  none
    data02/freshports/test-ingress01/freshports                         12.8G     0B  2.05G  /jails/test-ingress01/var/db/freshports
    data02/freshports/test-ingress01/freshports/cache                   2.09M     0B    96K  /jails/test-ingress01/var/db/freshports/cache
    data02/freshports/test-ingress01/freshports/cache/html              1.89M     0B  1.89M  /jails/test-ingress01/var/db/freshports/cache/html
    data02/freshports/test-ingress01/freshports/cache/spooling           104K     0B   104K  /jails/test-ingress01/var/db/freshports/cache/spooling
    data02/freshports/test-ingress01/freshports/message-queues          10.8G     0B  9.07M  /jails/test-ingress01/var/db/freshports/message-queues
    data02/freshports/test-ingress01/freshports/message-queues/archive  10.8G     0B  10.8G  /jails/test-ingress01/var/db/freshports/message-queues/archive
    data02/freshports/test-ingress01/ingress                            8.10G     0B   128K  /jails/test-ingress01/var/db/ingress
    data02/freshports/test-ingress01/ingress/latest_commits              344K     0B   100K  /jails/test-ingress01/var/db/ingress/latest_commits
    data02/freshports/test-ingress01/ingress/message-queues              932K     0B   164K  /jails/test-ingress01/var/db/ingress/message-queues
    data02/freshports/test-ingress01/ingress/repos                      8.10G     0B  5.33G  /jails/test-ingress01/var/db/ingress/repos
    data02/freshports/test-ingress01/jails                              3.00G     0B    96K  /jails/test-ingress01/jails
    data02/freshports/test-ingress01/jails/freshports                   3.00G     0B   405M  /jails/test-ingress01/jails/freshports
    data02/freshports/test-ingress01/jails/freshports/ports             2.60G     0B  2.60G  /jails/test-ingress01/jails/freshports/usr/ports
    data02/jails                                                         363G     0B  9.54G  /jails
    data02/jails/bacula                                                 17.8G     0B  16.3G  /jails/bacula
    data02/jails/bacula-sd-02                                           4.49G     0B  2.91G  /jails/bacula-sd-02
    data02/jails/bacula-sd-03                                           5.62G     0B  4.11G  /jails/bacula-sd-03
    data02/jails/besser                                                 9.29G     0B  5.72G  /jails/besser
    data02/jails/certs                                                  3.93G     0B  2.41G  /jails/certs
    data02/jails/certs-rsync                                            3.91G     0B  2.42G  /jails/certs-rsync
    data02/jails/cliff2                                                 18.3G     0B  18.3G  /jails/cliff2
    data02/jails/dev-ingress01                                          6.07G     0B  3.97G  /jails/dev-ingress01
    data02/jails/dev-nginx01                                            5.02G     0B  3.36G  /jails/dev-nginx01
    data02/jails/dns-hidden-master                                      4.36G     0B  2.64G  /jails/dns-hidden-master
    data02/jails/dns1                                                   12.0G     0B  4.90G  /jails/dns1
    data02/jails/dvl-ingress01                                          10.1G     0B  6.29G  /jails/dvl-ingress01
    data02/jails/dvl-nginx01                                            2.89G     0B  1.28G  /jails/dvl-nginx01
    data02/jails/git                                                    5.94G     0B  4.26G  /jails/git
    data02/jails/jail_within_jail                                       1.44G     0B   585M  /jails/jail_within_jail
    data02/jails/mqtt01                                                 4.68G     0B  3.03G  /jails/mqtt01
    data02/jails/mydev                                                  26.7G     0B  19.8G  /jails/mydev
    data02/jails/mysql01                                                14.7G     0B  5.40G  /jails/mysql01
    data02/jails/mysql02                                                5.75G     0B  7.79G  /jails/mysql02
    data02/jails/nsnotify                                               4.98G     0B  2.60G  /jails/nsnotify
    data02/jails/pg01                                                   50.9G     0B  10.8G  /jails/pg01
    data02/jails/pg02                                                   13.7G     0B  11.0G  /jails/pg02
    data02/jails/pg03                                                   14.5G     0B  10.8G  /jails/pg03
    data02/jails/pkg01                                                  17.9G     0B  13.7G  /jails/pkg01
    data02/jails/samdrucker                                             5.74G     0B  4.16G  /jails/samdrucker
    data02/jails/serpico                                                4.06G     0B  2.50G  /jails/serpico
    data02/jails/stage-ingress01                                        7.25G     0B  3.63G  /jails/stage-ingress01
    data02/jails/stage-nginx01                                          3.02G     0B  1.44G  /jails/stage-nginx01
    data02/jails/svn                                                    11.6G     0B  9.77G  /jails/svn
    data02/jails/talos                                                  3.82G     0B  2.38G  /jails/talos
    data02/jails/test-ingress01                                         3.89G     0B  1.89G  /jails/test-ingress01
    data02/jails/test-nginx01                                           2.99G     0B  1.38G  /jails/test-nginx01
    data02/jails/unifi01                                                32.4G     0B  12.5G  /jails/unifi01
    data02/jails/webserver                                              13.3G     0B  11.3G  /jails/webserver
    data02/reserved                                                      180G   180G    96K  none
    data02/vm                                                           62.0G     0B  6.41G  /usr/local/vm
    data02/vm/freebsd-test                                               701M     0B   112K  /usr/local/vm/freebsd-test
    data02/vm/freebsd-test/disk0                                         700M     0B   700M  -
    data02/vm/hass                                                      52.0G     0B  13.4G  /usr/local/vm/hass
    data02/vm/home-assistant                                             351M     0B   351M  /usr/local/vm/home-assistant
    data02/vm/myguest                                                   2.55G     0B  2.55G  /usr/local/vm/myguest
    

    After putting that into a gist and pasting it to the channel, I noticed the data02/reserved line above, with its 180G reservation.

    Then I remembered: some time ago I’d created this dataset to stop the zpool from filling up. I had read about this approach and tried it.
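The reserve trick works like this: a dataset carrying a refreservation holds back space that ZFS counts as allocated, so everything else hits “No space left on device” while the pool still has slack you can release in an emergency. A minimal sketch of the approach (the size here matches the 180G seen above; this is illustrative, not the exact command history):

```shell
# Hold back 180G: the dataset stores nothing, but ZFS treats the
# reserved space as in use, so other datasets run out of room first.
zfs create -o refreservation=180G -o mountpoint=none data02/reserved

# When the pool fills, release the reserve to recover breathing room:
zfs set refreservation=0 data02/reserved
```

Note that refreservation (unlike reservation) does not count descendants’ snapshots, which is why it suits a placeholder dataset like this.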

    I freed up space by deleting that /tmp/delete=me file (the = in the filename was a typo; I didn’t intend it).

    The next day, I ran this command to clear the reservation and free up that “free space”.

    [13:23 r730-01 dvl ~] % sudo zfs set refreservation=0 data02/reserved
    [13:24 r730-01 dvl ~] % 
    

    Monitoring

    What failed me here is my monitoring. The datasets were out of space, but the zpool itself didn’t look full, at least not in this sense:

    [14:20 r730-01 dvl ~] % zpool list data02
    NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    data02   928G   705G   223G        -         -    67%    75%  1.00x    ONLINE  -
    

    I’m sure the default monitoring on this zpool checks CAP (capacity) at the pool level. I may adjust the monitoring to also watch dataset-level AVAIL.
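A dataset-level check could be as simple as flagging any dataset whose AVAIL is zero, which pool-level CAP monitoring misses when a reservation is in play. On a real host the input would come from `zfs list -Hp -o name,avail -r data02`; here, so the logic stands alone, the sketch uses captured sample values (illustrative, not real output):

```shell
# Sample of "zfs list -Hp -o name,avail": dataset name, bytes available.
sample='data02 0
data02/jails 0
data02/reserved 193273528320'

# Alert on any dataset with 0 bytes available.
alerts=$(printf '%s\n' "$sample" | awk '$2 == 0 { print "ALERT: " $1 " has 0 bytes available" }')
printf '%s\n' "$alerts"
```

Wired into cron or a Nagios-style plugin, this would have caught the 0B AVAIL condition while zpool list still reported 223G free.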

    Cleaning up old snapshots

    I need to move most of data02 into data04.

    [14:24 r730-01 dvl ~] % zpool list
    NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    data01  5.81T  6.36G  5.81T        -         -     2%     0%  1.00x    ONLINE  -
    data02   928G   618G   310G        -         -    56%    66%  1.00x    ONLINE  -
    data03  7.25T  1.27T  5.98T        -         -    48%    17%  1.00x    ONLINE  -
    data04  29.1T  6.11T  23.0T        -         -     0%    21%  1.00x    ONLINE  -
    zroot    107G  48.2G  58.8G        -         -    53%    45%  1.00x    ONLINE  -
    

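The move itself would be a replication stream from one pool to the other. A sketch of the approach, assuming a recursive snapshot named @migrate (illustrative; not yet run against this host):

```shell
# Snapshot the tree, then replicate it with properties and descendants.
zfs snapshot -r data02/jails@migrate
# -R sends the whole tree below the snapshot; -u receives without mounting,
# so the copy can be verified before any mountpoints change hands.
zfs send -R data02/jails@migrate | zfs receive -u data04/jails
```

An incremental send (-i) against a later snapshot could then catch up any changes before the final cutover.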
    In the short term, I’ll buy some time by deleting old snapshots. Look at all this old stuff:

    [14:31 r730-01 dvl ~] % zfs list -o name -r -t snapshot data02 | grep  @mkjail 
    data02/jails/bacula@mkjail-202509051453
    data02/jails/bacula@mkjail-202602101338
    data02/jails/bacula-sd-02@mkjail-202509051453
    data02/jails/bacula-sd-02@mkjail-202602101338
    data02/jails/bacula-sd-03@mkjail-202509051453
    data02/jails/bacula-sd-03@mkjail-202602101338
    data02/jails/besser@mkjail-202602101619
    data02/jails/certs@mkjail-202509051453
    data02/jails/certs@mkjail-202602101338
    data02/jails/certs-rsync@mkjail-202509051453
    data02/jails/certs-rsync@mkjail-202602101338
    data02/jails/dev-ingress01@mkjail-202509051453
    data02/jails/dev-ingress01@mkjail-202602101338
    data02/jails/dev-nginx01@mkjail-202509051453
    data02/jails/dev-nginx01@mkjail-202602101338
    data02/jails/dns-hidden-master@mkjail-202509051453
    data02/jails/dns-hidden-master@mkjail-202602101513
    data02/jails/dns1@mkjail-202509051453
    data02/jails/dns1@mkjail-202602101242
    data02/jails/dvl-ingress01@mkjail-202509051453
    data02/jails/dvl-ingress01@mkjail-202602101338
    data02/jails/dvl-nginx01@mkjail-202509051453
    data02/jails/dvl-nginx01@mkjail-202602101338
    data02/jails/git@mkjail-202509051453
    data02/jails/git@mkjail-202602101338
    data02/jails/jail_within_jail@mkjail-202509051453
    data02/jails/jail_within_jail@mkjail-202602101609
    data02/jails/mqtt01@mkjail-202509051453
    data02/jails/mqtt01@mkjail-202602101338
    data02/jails/mydev@mkjail-202509051453
    data02/jails/mydev@mkjail-202602101529
    data02/jails/mysql01@mkjail-202509051453
    data02/jails/mysql01@mkjail-202602101242
    data02/jails/mysql02@mkjail-202602102333
    data02/jails/nsnotify@mkjail-202509051453
    data02/jails/nsnotify@mkjail-202602101551
    data02/jails/nsnotify@mkjail-202602101558
    data02/jails/pg01@mkjail-202509051453
    data02/jails/pg01@mkjail-202602101338
    data02/jails/pg02@mkjail-202509051453
    data02/jails/pg02@mkjail-202602101338
    data02/jails/pg03@mkjail-202509051453
    data02/jails/pg03@mkjail-202602101338
    data02/jails/pkg01@mkjail-202509051453
    data02/jails/pkg01@mkjail-202602082233
    data02/jails/samdrucker@mkjail-202509051453
    data02/jails/samdrucker@mkjail-202602101338
    data02/jails/serpico@mkjail-202509051453
    data02/jails/serpico@mkjail-202602101536
    data02/jails/stage-ingress01@mkjail-202509051453
    data02/jails/stage-ingress01@mkjail-202602101338
    data02/jails/stage-nginx01@mkjail-202509051453
    data02/jails/stage-nginx01@mkjail-202602101338
    data02/jails/svn@mkjail-202509051453
    data02/jails/svn@mkjail-202602101338
    data02/jails/talos@mkjail-202509051453
    data02/jails/talos@mkjail-202602101338
    data02/jails/test-ingress01@mkjail-202509051453
    data02/jails/test-ingress01@mkjail-202602101338
    data02/jails/test-nginx01@mkjail-202509051453
    data02/jails/test-nginx01@mkjail-202602101338
    data02/jails/unifi01@mkjail-202509051453
    data02/jails/unifi01@mkjail-202602101605
    data02/jails/webserver@mkjail-202509051453
    data02/jails/webserver@mkjail-202602101338
    

    Quickly scripted:

    [14:32 r730-01 dvl ~] % zfs list -o name -r -t snapshot data02 | grep  @mkjail | xargs -n 1 sudo zfs destroy 
    [14:32 r730-01 dvl ~] % zpool list data02
    NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    data02   928G   618G   310G        -         -    56%    66%  1.00x    ONLINE  -
    

    But there’s more; those were just the mkjail-related snapshots.

    [14:42 r730-01 dvl ~] % zfs list -o name -r -t snapshot data02 | grep -v @autosnap | grep -v @empty
    NAME
    data02/freshports/dvl-ingress01/ingress/repos/doc@for-dvl-ingress01
    data02/freshports/dvl-ingress01/ingress/repos/ports@for-dvl-ingress01
    data02/freshports/dvl-ingress01/ingress/repos/src@for-dvl-ingress01
    data02/jails/besser@before.26.1.0
    data02/jails/besser@before.26.1.1
    data02/jails/besser@before.26.1.1_1
    data02/jails/besser@before.26.1.1_2
    data02/jails/besser@before.26.2.0
    data02/jails/dns1@before.bind918
    data02/jails/dns1@bind.9.18
    data02/jails/mysql01@MySQL-8.0
    data02/jails/mysql01@mysql80-part2
    data02/jails/mysql01@mysql80-part3
    data02/jails/pkg01@before15.0
    data02/jails/unifi01@before.mongodb60-6.0.24
    

    @autosnap is sanoid-related. @empty is on some special FreshPorts filesystems. The rest look good to go.

    Let’s try a dry run; note that we don’t need sudo for that.

    [14:49 r730-01 dvl ~] % zfs list -o name -r -t snapshot data02 | grep -v @autosnap | grep -v @empty | xargs -n 1 zfs destroy -n 
    cannot open 'NAME': dataset does not exist
    cannot destroy 'data02/jails/mysql01@mysql80-part3': snapshot has dependent clones
    use '-R' to destroy the following datasets:
    data02/jails/mysql02@autosnap_2026-02-13_00:00:01_daily
    data02/jails/mysql02@autosnap_2026-02-14_00:00:11_daily
    data02/jails/mysql02@autosnap_2026-02-15_00:00:01_daily
    data02/jails/mysql02@autosnap_2026-02-16_00:00:02_daily
    data02/jails/mysql02@autosnap_2026-02-17_00:00:00_daily
    data02/jails/mysql02@autosnap_2026-02-18_00:00:06_daily
    data02/jails/mysql02@autosnap_2026-02-19_00:00:03_daily
    data02/jails/mysql02@autosnap_2026-02-19_08:00:04_hourly
    data02/jails/mysql02@autosnap_2026-02-19_09:00:01_hourly
    data02/jails/mysql02@autosnap_2026-02-19_10:00:02_hourly
    data02/jails/mysql02@autosnap_2026-02-19_10:00:02_frequently
    data02/jails/mysql02@autosnap_2026-02-19_10:15:09_frequently
    data02/jails/mysql02@autosnap_2026-02-19_10:30:02_frequently
    data02/jails/mysql02@autosnap_2026-02-19_10:45:09_frequently
    data02/jails/mysql02@autosnap_2026-02-19_11:00:01_hourly
    data02/jails/mysql02@autosnap_2026-02-19_11:00:01_frequently
    data02/jails/mysql02@autosnap_2026-02-19_11:15:08_frequently
    data02/jails/mysql02@autosnap_2026-02-19_11:30:00_frequently
    data02/jails/mysql02@autosnap_2026-02-19_11:45:08_frequently
    data02/jails/mysql02@autosnap_2026-02-19_12:00:01_hourly
    data02/jails/mysql02@autosnap_2026-02-19_12:00:01_frequently
    data02/jails/mysql02@autosnap_2026-02-19_12:15:08_frequently
    data02/jails/mysql02@autosnap_2026-02-19_12:30:01_frequently
    data02/jails/mysql02@autosnap_2026-02-19_12:45:09_frequently
    data02/jails/mysql02@autosnap_2026-02-19_13:00:02_hourly
    data02/jails/mysql02@autosnap_2026-02-19_13:00:02_frequently
    data02/jails/mysql02@autosnap_2026-02-19_13:15:09_frequently
    data02/jails/mysql02@autosnap_2026-02-19_13:30:01_frequently
    data02/jails/mysql02@autosnap_2026-02-19_13:45:08_frequently
    data02/jails/mysql02@autosnap_2026-02-19_14:00:03_hourly
    data02/jails/mysql02@autosnap_2026-02-19_14:00:03_frequently
    data02/jails/mysql02@autosnap_2026-02-19_14:15:09_frequently
    data02/jails/mysql02@autosnap_2026-02-19_14:30:00_frequently
    data02/jails/mysql02@autosnap_2026-02-19_14:45:10_frequently
    data02/jails/mysql02
    

    Two things:

    • Add -H to remove that NAME header
    • Don’t touch data02/jails/mysql01@mysql80-part3 yet
    [14:50 r730-01 dvl ~] % zfs list -Ho name -r -t snapshot data02 | grep -v @autosnap | grep -v @empty | grep -v mysql80-part3 | xargs -n 1 zfs destroy -n 
    [14:51 r730-01 dvl ~] % 
    

    That looks good. This time, for real (with sudo and without -n):

    [14:51 r730-01 dvl ~] % zfs list -Ho name -r -t snapshot data02 | grep -v @autosnap | grep -v @empty | grep -v mysql80-part3 | xargs -n 1 sudo zfs destroy
    [14:54 r730-01 dvl ~] % zpool list data02                                                                                                                 
    NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    data02   928G   591G   337G        -         -    53%    63%  1.00x    ONLINE  -
    

    So what?

    Yeah, this was my fault. As most things computing-related are. It’s never the system’s fault.

    I am grateful that although some things stopped working, nothing seems corrupted. I don’t mean from a bitrot/checksum point of view (with respect to ZFS). I mean the MySQL database is not corrupted, the PostgreSQL databases are fine (as expected), none of the FreshPorts nodes were affected at all (they kept processing incoming commits).

    The Postfix issue seems to have been the canary. I suspect it won’t accept an incoming message unless it can allocate a certain amount of space. It could not. It coughed. If I had not had a reservation on data02/reserved, usage would have grown to 80%, at which point monitoring would have triggered.

    I think my reservation has to be less than 20% of the pool size. Let’s use 15%: 0.15 * 928 = 139.2 GB.

    Let’s test this idea.

    Get the pool to 80%

    Let’s trigger the monitoring alert.

    [17:58 r730-01 dvl ~] % zpool list data02
    NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    data02   928G   591G   337G        -         -    53%    63%  1.00x    ONLINE  -
    

    How full would 80% be? 0.8 * 928 = 742.4, which is 151.4 more than the 591 now in use (that’s all in GB).
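    As a quick sanity check, here is that arithmetic as a one-liner (the GB values come from the zpool list output above):

```shell
# how much data to write to push a 928G pool from 591G used to 80% capacity
awk 'BEGIN { size=928; used=591; target=0.80*size; printf "%.1f GB to go\n", target-used }'
```

    That prints "151.4 GB to go".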

    Let’s write to a file and take up some space.

    [18:17 cliff2 dvl ~] % sudo dd if=/dev/random of=/tmp/delete-me bs=1M count=151400 status=progress
      158554128384 bytes (159 GB, 148 GiB) transferred 622.025s, 255 MB/s
    151400+0 records in
    151400+0 records out
    158754406400 bytes transferred in 622.731688 secs (254932276 bytes/sec)
    

    Close, oh, so close!

    [18:25 r730-01 dvl ~] % zpool list data02
    NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    data02   928G   739G   189G        -         -    67%    79%  1.00x    ONLINE  -
    

    Let’s write another 5G to another file:

    [18:27 cliff2 dvl ~] % sudo dd if=/dev/random of=/tmp/delete-me-2 bs=1M count=5000 status=progress
      5109710848 bytes (5110 MB, 4873 MiB) transferred 17.001s, 301 MB/s
    5000+0 records in
    5000+0 records out
    5242880000 bytes transferred in 17.443215 secs (300568443 bytes/sec)
    [18:31 r730-01 dvl ~] % zpool list data02
    NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    data02   928G   743G   185G        -         -    68%    80%  1.00x    ONLINE  -
    

    There we go! Target acquired.

    Good, and the monitoring is triggered (after I manually scheduled a check): ZFS POOL ALARM: POOL data02 usage is WARNING (80%)

    So let’s go with a reservation of 130G, and take note of the AVAIL value before and after.

    [18:35 r730-01 dvl ~] % zfs list data02                              
    NAME     USED  AVAIL  REFER  MOUNTPOINT
    data02   744G   155G    96K  none
    [18:35 r730-01 dvl ~] % sudo zfs set refreservation=130G data02/reserved
    [18:35 r730-01 dvl ~] % zfs list data02                                 
    NAME     USED  AVAIL  REFER  MOUNTPOINT
    data02   874G  25.1G    96K  none
    [18:36 r730-01 dvl ~] % 
    

    Let’s delete my test files:

    [18:34 r730-01 dvl ~] % zfs list data02
    NAME     USED  AVAIL  REFER  MOUNTPOINT
    data02   744G   155G    96K  none
    

    It says 155G available.

    [18:37 cliff2 dvl ~] % sudo rm -rf /tmp/delete-me*
    [18:37 cliff2 dvl ~] % 
    

    And now we have:

    [18:37 r730-01 dvl ~] % zfs list data02    
    NAME     USED  AVAIL  REFER  MOUNTPOINT
    data02   869G  30.0G    96K  none
    

    Pretty much the same. Guess why?

    Snapshots.

    Or in this case, one snapshot:

    [18:39 r730-01 dvl ~] % zfs list -r -t snapshot data02/jails/cliff2 | tail -4
    data02/jails/cliff2@autosnap_2026-02-19_18:00:06_hourly         0B      -  2.46G  -
    data02/jails/cliff2@autosnap_2026-02-19_18:00:06_frequently     0B      -  2.46G  -
    data02/jails/cliff2@autosnap_2026-02-19_18:15:30_frequently   376K      -  2.46G  -
    data02/jails/cliff2@autosnap_2026-02-19_18:30:02_frequently   148G      -   150G  -
    

    Let’s sacrifice that one:

    [18:39 r730-01 dvl ~] % sudo zfs destroy data02/jails/cliff2@autosnap_2026-02-19_18:30:02_frequently
    [18:40 r730-01 dvl ~] % 
    
    [18:40 r730-01 dvl ~] % zfs list data02
    NAME     USED  AVAIL  REFER  MOUNTPOINT
    data02   721G   178G    96K  none
    
    [18:40 r730-01 dvl ~] % zpool list data02
    NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    data02   928G   591G   337G        -         -    52%    63%  1.00x    ONLINE  -
    

    There, that’s better.

    Overview

    My monitoring didn’t account for the reservation. My zpool would (and did) fill up before triggering any monitoring.

    I adjusted my reservation size based on existing monitoring.

    If I recall correctly, the post I read about reservations was based on this: if the zpool fills up, the reservation saves you some space, so you notice the problem and solve it, allowing the system to continue running. The theory is that the reservation just stops some rogue process/jail from completely filling up the zpool – it’s easier to work with a mostly full zpool than one at 100%.

    Yes, you could set a quota to restrict certain jails from exploding, but that seems more like a user-based tool (don’t let user foo use more than 100GB in their home dir).

    Reading elsewhere, I think the primary use of reservations is to reserve future space not yet used. Or “A ZFS reservation is an allocation of disk space from the pool that is guaranteed to be available to a dataset”. Or, in other words: always ensure that user foo has 150GB available from the pool.
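    As a sketch of the two approaches (hypothetical dataset name; not commands I ran here):

```shell
# quota: an upper bound - the dataset may never grow past this
zfs set quota=100G data02/jails/foo

# reservation: a guaranteed minimum - this much pool space is set aside for it
zfs set reservation=150G data02/jails/foo
```

    Note these require a live pool, so they are illustration only.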

    In short: I’m not convinced reservation or quota is useful to me in this situation. However, I’ll keep the reservation on data02/reserved for now.

    Thanks.

Top

FreeBSD AMI ID pages

Post by Colin Percival via Daemonic Dispatches »

In 2021 I started publishing FreeBSD AMI IDs in the SSM Parameter Store; this made it much easier for scripts to launch EC2 instances without needing to have hard-coded AMI IDs. (Indeed, I use this extensively in my regression testing, to launch e.g. the most recent 14.4-STABLE image.) While this is useful for scripts, it's not so useful for humans.

Top

Post by FreeBSD Newsflash via FreeBSD News Flash »

New committer: Yusuf Yaman (ports)
Top

Post by FreeBSD Newsflash via FreeBSD News Flash »

New committer: Kousuke Kannagi (ports)
Top

FreeBSD/Poudriere in High Security Proxy Environments

Post by Vermaden via 𝚟𝚎𝚛𝚖𝚊𝚍𝚎𝚗 »

Most of the time, FreeBSD systems are used with a wide-open connection to the Internet and fully working DNS that resolves anything the Root Servers resolve … but FreeBSD is more than the base of SONY PlayStation gaming systems or the Netflix storage layer.

It is also used in high-security environments without any external DNS access or direct Internet connection to the outside world … yet security patches are fetched and applied, and custom PKGBASE and/or Poudriere systems build the base system and packages while fetching them from the Internet over a dedicated proxy.

Many people will not read the entire article, so I will point this out at the beginning: I am really grateful to Mariusz Zaborski (oshogbo) for his help with this one – without his help, it just would not have happened.

By default, FreeBSD does not work well in such environments … in this article we will configure FreeBSD to make everything work as needed.

The Table of Contents is as follows.

  • FreeBSD and Poudriere in High Security Environments
  • Example Proxy Configuration
  • Physical (or Virtual) FreeBSD Host
    • pkg(8)
    • FreeBSD Ports Tree
    • Proxy on the Fly
    • Back to the PKGBASE
  • Poudriere in Proxy World
    • Basic Poudriere Setup
    • Important Poudriere Config Part

Example Proxy Configuration

For completeness I will add Squid configuration used here – so that all information will be available.

proxy # grep '^[^#]' /usr/local/etc/squid/squid.conf
http_port 127.0.0.1:3128
http_port 10.0.0.41:3128
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
acl localnet src 127.0.0.1
acl localnet src 10.0.0.0/8     # RFC 1918 local private network (LAN)
acl localnet src 172.16.0.0/12  # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)
http_access allow localnet
http_access allow localhost
http_access deny all
visible_hostname proxy.xyz
acl custom-local dstdomain .custom.xyz
cache_peer 10.0.0.42 parent 3128 0 no-query default name=weathertop
cache_peer_domain 10.0.0.43 !.xyz
never_direct deny custom-local
never_direct allow all
cache_dir ufs /var/local/squid/cache 50 16 256
coredump_dir /var/local/squid/cache
access_log stdio:/var/local/log/squid/access.log
cache_store_log stdio:/var/local/log/squid/store.log
cache_log /var/local/log/squid/cache.log
refresh_pattern ^ftp:          1440 20% 10080
refresh_pattern ^gopher:       1440  0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0  0%     0
refresh_pattern .                 0 20%  4320
email_err_data off

Now we will configure a FreeBSD host to use it properly.

Physical (or Virtual) FreeBSD Host

For a start we will make pkg(8) work with our proxy system.

pkg(8)

I installed this FreeBSD 15.0-RELEASE system the PKGBASE way – Brave New PKGBASE World – described in detail here, with an offline install using PKGBASE packages from the disc1.iso install medium.

That means pkg(8) is already bootstrapped … we will turn that ‘OFF’ for a moment.

test # pkg info
FreeBSD-acct-15.0              System resource accounting
FreeBSD-acpi-15.0              Advanced Configuration and Power Interface (ACPI) utilities
FreeBSD-apm-15.0               Intel / Microsoft APM BIOS utility
(...)
FreeBSD-zlib-lib32-15.0        DEFLATE (gzip) data compression library (32-bit libraries)
FreeBSD-zoneinfo-15.0          Timezone database
pkg-2.4.2                      Package manager

test # mv /var/db/pkg /var/db/pkg.BCK
test # mv /usr/local  /usr/local.BCK

Now, if you try to bootstrap pkg(8), it will fail.

test # pkg
The package management tool is not yet installed on your system.
Do you want to fetch and install it now? [y/N]: y
Bootstrapping pkg from pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/quarterly, please wait...
pkg: Error fetching https://pkg.FreeBSD.org/FreeBSD:15:amd64/quarterly/Latest/pkg.pkg: Transient resolver failure
A pre-built version of pkg could not be found for your system.
Bootstrapping pkg from pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/kmods_quarterly_0, please wait...
pkg: Error fetching https://pkg.FreeBSD.org/FreeBSD:15:amd64/kmods_quarterly_0/Latest/pkg.pkg: Transient resolver failure
A pre-built version of pkg could not be found for your system.

Now we will export(1) the needed proxy settings into the environment.

test # export HTTP_PROXY="http://10.0.0.41:3128" 

test # export HTTPS_PROXY="http://10.0.0.41:3128"

test # export FTP_PROXY="http://10.0.0.41:3128"

test # env | grep -i proxy
HTTP_PROXY=http://10.0.0.41:3128
HTTPS_PROXY=http://10.0.0.41:3128
FTP_PROXY=http://10.0.0.41:3128

If you want to make it permanent for the default sh(1) shell then do this.

test # cat << EOF > /etc/profile.d/proxy.sh
export HTTP_PROXY=http://10.0.0.41:3128
export HTTPS_PROXY=http://10.0.0.41:3128
export FTP_PROXY=http://10.0.0.41:3128
EOF

Now with each new login these proxy settings will be available.
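The /etc/profile.d/ approach covers sh(1)-compatible shells; if your interactive shell is csh(1)/tcsh(1), the equivalent (my assumption, not part of the original setup) would go in /etc/csh.cshrc using setenv:

```shell
# csh(1)/tcsh(1) syntax - note: no '=' sign with setenv
setenv HTTP_PROXY  http://10.0.0.41:3128
setenv HTTPS_PROXY http://10.0.0.41:3128
setenv FTP_PROXY   http://10.0.0.41:3128
```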

Let’s try pkg(8) again.

test # pkg info
The package management tool is not yet installed on your system.
Do you want to fetch and install it now? [y/N]: y
Bootstrapping pkg from pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/quarterly, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
Installing pkg-2.4.2...
Extracting pkg-2.4.2: 100%
pkg-2.4.2                      Package manager

The pkg(8) bootstrap is now complete, so everything will work, right? Right?

test # pkg update
Updating FreeBSD-ports repository catalogue...
pkg: No SRV record found for the repo 'FreeBSD-ports'
Fetching meta.conf: 100%    179 B   0.2kB/s    00:01    
pkg: packagesite URL error for pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/quarterly/data.pkg -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/quarterly/data.tzst -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/quarterly/packagesite.pkg -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/quarterly/packagesite.tzst -- pkg+:// implies SRV mirror type
Unable to update repository FreeBSD-ports
Updating FreeBSD-ports-kmods repository catalogue...
pkg: No SRV record found for the repo 'FreeBSD-ports-kmods'
Fetching meta.conf: 100%    179 B   0.2kB/s    00:01    
pkg: packagesite URL error for pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/kmods_quarterly_0/data.pkg -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/kmods_quarterly_0/data.tzst -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/kmods_quarterly_0/packagesite.pkg -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/kmods_quarterly_0/packagesite.tzst -- pkg+:// implies SRV mirror type
Unable to update repository FreeBSD-ports-kmods
Error updating repositories!

A small modification is needed. One needs to remove the pkg+ prefix from all url: paths and switch mirror_type: from srv to none. Then it will work.

Now … the reason why FreeBSD uses the pkg+ prefix is this:

  • pkg+https:// tells pkg(8) to use the internal libpkg HTTP/HTTPS fetcher.
  • https:// tells pkg(8) to use an external fetcher – usually the FreeBSD fetch(1) tool.

Now … the needed changes.

test # \
  grep '^[^#]' /etc/pkg/FreeBSD.conf \
    | sed -e 's.pkg+..g' \
          -e 's."srv"."none".g' \
          -e 's.enabled: no.enabled: yes.g' \
    > /root/FreeBSD.conf 

The diff(1) for that change is below.

root # diff -u /etc/pkg/FreeBSD.conf /root/FreeBSD.conf 
--- /etc/pkg/FreeBSD.conf  2025-11-28 00:00:00.000000000 +0000
+++ /root/FreeBSD.conf     2026-01-07 00:11:41.534051000 +0000
@@ -10,23 +10,23 @@
 #
 
 FreeBSD-ports: {
-  url: "pkg+https://pkg.FreeBSD.org/${ABI}/quarterly",
-  mirror_type: "srv",
+  url: "https://pkg.FreeBSD.org/${ABI}/quarterly",
+  mirror_type: "none",
   signature_type: "fingerprints",
   fingerprints: "/usr/share/keys/pkg",
   enabled: yes
 }
 FreeBSD-ports-kmods: {
-  url: "pkg+https://pkg.FreeBSD.org/${ABI}/kmods_quarterly_${VERSION_MINOR}",
-  mirror_type: "srv",
+  url: "https://pkg.FreeBSD.org/${ABI}/kmods_quarterly_${VERSION_MINOR}",
+  mirror_type: "none",
   signature_type: "fingerprints",
   fingerprints: "/usr/share/keys/pkg",
   enabled: yes
 }
 FreeBSD-base: {
-  url: "pkg+https://pkg.FreeBSD.org/${ABI}/base_release_${VERSION_MINOR}",
-  mirror_type: "srv",
+  url: "https://pkg.FreeBSD.org/${ABI}/base_release_${VERSION_MINOR}",
+  mirror_type: "none",
   signature_type: "fingerprints",
   fingerprints: "/usr/share/keys/pkgbase-${VERSION_MAJOR}",
-  enabled: no
+  enabled: yes
 }
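To double-check those substitutions in isolation, the same sed(1) expressions can be run against a single sample line (a throwaway demo, not part of the setup – the second expression only applies to mirror_type: lines):

```shell
# strip the pkg+ prefix and flip the mirror type on one example line
echo '  url: "pkg+https://pkg.FreeBSD.org/${ABI}/quarterly",' \
  | sed -e 's.pkg+..g' -e 's."srv"."none".g'
```

That prints:   url: "https://pkg.FreeBSD.org/${ABI}/quarterly",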

Now – let’s leave the original /etc/pkg/FreeBSD.conf unmodified and create /usr/local/etc/pkg/repos/FreeBSD.conf, which will override the defaults.

test # mkdir -pv /usr/local/etc/pkg/repos
/usr/local/etc/pkg
/usr/local/etc/pkg/repos

test # cp /root/FreeBSD.conf /usr/local/etc/pkg/repos/

Now pkg(8) should work well over the proxy.

test # pkg update 
Updating FreeBSD-ports repository catalogue...
Fetching meta.conf: 100%    179 B   0.2kB/s    00:01    
Fetching data.pkg: 100%   10 MiB   3.6MB/s    00:03    
Processing entries: 100%
FreeBSD-ports repository update completed. 36390 packages processed.
Updating FreeBSD-ports-kmods repository catalogue...
Fetching meta.conf: 100%    179 B   0.2kB/s    00:01    
Fetching data.pkg: 100%   31 KiB  31.3kB/s    00:01    
Processing entries: 100%
FreeBSD-ports-kmods repository update completed. 204 packages processed.
Updating FreeBSD-base repository catalogue...
Fetching meta.conf: 100%    179 B   0.2kB/s    00:01    
Fetching data.pkg: 100%   80 KiB  81.5kB/s    00:01    
Processing entries: 100%
FreeBSD-base repository update completed. 496 packages processed.

Let’s try to actually install some software.

test # pkg install lsblk beadm 
Updating FreeBSD-ports repository catalogue...
FreeBSD-ports repository is up to date.
Updating FreeBSD-ports-kmods repository catalogue...
FreeBSD-ports-kmods repository is up to date.
Updating FreeBSD-base repository catalogue...
FreeBSD-base repository is up to date.
All repositories are up to date.
The following 2 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        beadm: 1.3.5_1 [FreeBSD-ports]
        lsblk: 4.0 [FreeBSD-ports]
        gitup: 1.0 [FreeBSD-ports]

Number of packages to be installed: 3

18 KiB to be downloaded.

Proceed with this action? [y/N]: y
[1/3] Fetching lsblk-4.0~3110a4bb46.pkg: 100%    7 KiB   7.2kB/s    00:01    
[2/3] Fetching beadm-1.3.5_1~53f06720d4.pkg: 100%   11 KiB  11.0kB/s    00:01    
[3/3] Fetching gitup-1.0~2c88a1f1f1.pkg: 100%   36 KiB  37.0kB/s    00:01    
Checking integrity... done (0 conflicting)
[1/3] Installing beadm-1.3.5_1...
[1/3] Extracting beadm-1.3.5_1: 100%
[2/3] Installing lsblk-4.0...
[2/3] Extracting lsblk-4.0: 100%
[3/3] Installing gitup-1.0...
[3/3] Extracting gitup-1.0: 100%

test # lsblk -d
DEVICE SIZE MODEL
nda0    10G bhyve-NVMe
-       10G TOTAL SYSTEM STORAGE

Works.

If for some reason the above method does not work, you may also configure the proxy server within the pkg(8) config file, /usr/local/etc/pkg.conf.

test # tail -5 /usr/local/etc/pkg.conf
PKG_ENV {
  HTTP_PROXY: "http://10.0.0.41:3128"
  HTTPS_PROXY: "https://10.0.0.41:3128"
  FTP_PROXY: "http://10.0.0.41:3128"
}

These settings will also override any ‘system’ settings we set previously in the /etc/profile.d/proxy.sh file.

FreeBSD Ports Tree

We already installed the gitup tool, which allows us to update the FreeBSD Ports tree easily.

Its default config is more than enough.

test # grep -m 1 -A 6 ports /usr/local/etc/gitup.conf
        "ports" : {
                "repository_path"  : "/ports.git",
                "branch"           : "main",
                "target_directory" : "/usr/ports",
                "ignores"          : [],
        },

Let’s try fetching the FreeBSD Ports tree now.

test # gitup ports
# Host: git.freebsd.org
# Port: 443
# Proxy Host: 10.0.0.41
# Proxy Port: 3128
# Repository Path: /ports.git
# Target Directory: /usr/ports
# Want: 284813ec0382a2bfe5b2e74a3081a67599d3155d
# Branch: main
# Action: clone
  75 MB in 0m25s, 4614 kB/s now 
 + /usr/ports/.arcconfig
 + /usr/ports/.gitignore
 + /usr/ports/.hooks/pre-commit
(...)
 + /usr/ports/x11/zutty/Makefile
 + /usr/ports/x11/zutty/distinfo
 + /usr/ports/x11/zutty/pkg-descr
#
# Please review the following file(s) for important changes.
#       /usr/ports/UPDATING
#       /usr/ports/mail/dspam/files/UPDATING
#
# Done.

Seems to work.

To update the FreeBSD Ports tree later, run the gitup(1) command again.

test # gitup ports
# Scanning local repository...
# Host: git.freebsd.org
# Port: 443
# Proxy Host: 10.0.0.41
# Proxy Port: 3128
# Repository Path: /ports.git
# Target Directory: /usr/ports
# Have: 284813ec0382a2bfe5b2e74a3081a67599d3155d
# Want: 7b2f3c4f484b1634066997a91836554608c72c48
# Branch: main
# Action: pull
 * /usr/ports/lang/spidermonkey115/Makefile
 * /usr/ports/lang/spidermonkey115/distinfo
 * /usr/ports/lang/spidermonkey140/Makefile
 * /usr/ports/lang/spidermonkey140/distinfo
# Done.

If you find that gitup(1) or git(1) does not work, you may always configure a system-wide proxy as follows.

test # git config --system http.proxy http://10.0.0.41:3128

That would help.

You may also find yourself in a position where yarn(1) or npm(1) require a proxy – here is the syntax for them.

test # yarn config set https-proxy http://10.0.0.41:3128

test # npm --https-proxy=http://10.0.0.41:3128 install package

Proxy on the Fly

If for some reason you need to force the proxy settings for a single command, then use something like the example below.

test # \
  env HTTP_PROXY="http://10.0.0.41:3128"  \
      HTTPS_PROXY="http://10.0.0.41:3128" \
      FTP_PROXY="http://10.0.0.41:3128"   \
    command ...

Back to the PKGBASE

Now – let’s bring back our original pkg(8) config. We may keep the current ‘test’ bootstrap, if needed, with a .CUSTOM suffix.

test # mv /usr/local  /usr/local.CUSTOM
test # mv /var/db/pkg /var/db/pkg.CUSTOM

test # mv /usr/local.BCK  /usr/local
test # mv /var/db/pkg.BCK /var/db/pkg

Now the ‘original’ pkg(8) config that keeps PKGBASE information works again.

test # pkg info | (head -3 ;echo '(...)'; tail -3)
FreeBSD-acct-15.0              System resource accounting
FreeBSD-acpi-15.0              Advanced Configuration and Power Interface (ACPI) utilities
FreeBSD-apm-15.0               Intel / Microsoft APM BIOS utility
(...)
FreeBSD-zlib-lib32-15.0        DEFLATE (gzip) data compression library (32-bit libraries)
FreeBSD-zoneinfo-15.0          Timezone database
pkg-2.4.2                      Package manager

But when we now try to update the pkg(8) repositories, it will fail … why?

test # pkg update
Updating FreeBSD-ports repository catalogue...
pkg: No SRV record found for the repo 'FreeBSD-ports'
Fetching meta.conf: 100%    179 B   0.2kB/s    00:01    
pkg: packagesite URL error for pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/quarterly/data.pkg -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/quarterly/data.tzst -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/quarterly/packagesite.pkg -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/quarterly/packagesite.tzst -- pkg+:// implies SRV mirror type
Unable to update repository FreeBSD-ports
Updating FreeBSD-ports-kmods repository catalogue...
pkg: No SRV record found for the repo 'FreeBSD-ports-kmods'
Fetching meta.conf: 100%    179 B   0.2kB/s    00:01    
pkg: packagesite URL error for pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/kmods_quarterly_0/data.pkg -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/kmods_quarterly_0/data.tzst -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/kmods_quarterly_0/packagesite.pkg -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+https://pkg.FreeBSD.org/FreeBSD:15:amd64/kmods_quarterly_0/packagesite.tzst -- pkg+:// implies SRV mirror type
Unable to update repository FreeBSD-ports-kmods
Error updating repositories!

It’s because our config override was placed under /usr/local … and we just moved that away.

Then copy the working proxy config again.

test # mkdir -pv /usr/local/etc/pkg/repos
/usr/local/etc/pkg
/usr/local/etc/pkg/repos

test # cp /root/FreeBSD.conf /usr/local/etc/pkg/repos/

Now the update will work again.

test # pkg update
Updating FreeBSD-ports repository catalogue...
Fetching meta.conf: 100%    179 B   0.2kB/s    00:01    
Fetching data.pkg: 100%   10 MiB   5.4MB/s    00:02    
Processing entries: 100%
FreeBSD-ports repository update completed. 36390 packages processed.
Updating FreeBSD-ports-kmods repository catalogue...
Fetching meta.conf: 100%    179 B   0.2kB/s    00:01    
Fetching data.pkg: 100%   31 KiB  31.3kB/s    00:01    
Processing entries: 100%
FreeBSD-ports-kmods repository update completed. 204 packages processed.
Updating FreeBSD-base repository catalogue...
pkg: Repository FreeBSD-base has a wrong packagesite, need to re-create database
Fetching meta.conf: 100%    179 B   0.2kB/s    00:01    
Fetching data.pkg: 100%   80 KiB  81.5kB/s    00:01    
Processing entries: 100%
FreeBSD-base repository update completed. 496 packages processed.
All repositories are up to date.

You can even do a PKGBASE upgrade if desired.

test # pkg upgrade
Updating FreeBSD-ports repository catalogue...
FreeBSD-ports repository is up to date.
Updating FreeBSD-ports-kmods repository catalogue...
FreeBSD-ports-kmods repository is up to date.
Updating FreeBSD-base repository catalogue...
FreeBSD-base repository is up to date.
All repositories are up to date.
Checking for upgrades (4 candidates): 100%
Processing candidates (4 candidates): 100%
The following 4 package(s) will be affected (of 0 checked):

Installed packages to be UPGRADED:
        FreeBSD-kernel-generic: 15.0 -> 15.0p1 [FreeBSD-base]
        FreeBSD-rescue: 15.0 -> 15.0p1 [FreeBSD-base]
        FreeBSD-runtime: 15.0 -> 15.0p1 [FreeBSD-base]
        FreeBSD-utilities: 15.0 -> 15.0p1 [FreeBSD-base]

Number of packages to be upgraded: 4

62 MiB to be downloaded.

Proceed with this action? [y/N]: 

Poudriere in Proxy World

Now to another, deeper level – like in the movie Inception (2010) – the Poudriere package-building harvester.

If you want to learn more about Poudriere itself, you can check these:

Now – first we need to install the poudriere-devel package, as it has the latest features.

test # pkg install poudriere-devel ccache4 git nginx
Updating FreeBSD-ports repository catalogue...
FreeBSD-ports repository is up to date.
Updating FreeBSD-ports-kmods repository catalogue...
FreeBSD-ports-kmods repository is up to date.
Updating FreeBSD-base repository catalogue...
FreeBSD-base repository is up to date.
All repositories are up to date.
The following 28 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        brotli: 1.1.0,1 [FreeBSD-ports]
        ccache4: 4.10.2_1 [FreeBSD-ports]
        curl: 8.16.0 [FreeBSD-ports]
        expat: 2.7.3 [FreeBSD-ports]
        gettext-runtime: 0.23.1 [FreeBSD-ports]
        git: 2.51.0 [FreeBSD-ports]
        indexinfo: 0.3.1_1 [FreeBSD-ports]
        libffi: 3.5.1 [FreeBSD-ports]
        libidn2: 2.3.8 [FreeBSD-ports]
        libnghttp2: 1.67.0 [FreeBSD-ports]
        libpsl: 0.21.5_2 [FreeBSD-ports]
        libssh2: 1.11.1,3 [FreeBSD-ports]
        libunistring: 1.4.1 [FreeBSD-ports]
        mpdecimal: 4.0.1 [FreeBSD-ports]
        nginx: 1.28.0_3,3 [FreeBSD-ports]
        p5-Authen-SASL: 2.1900 [FreeBSD-ports]
        p5-Crypt-URandom: 0.54 [FreeBSD-ports]
        p5-Digest-HMAC: 1.05 [FreeBSD-ports]
        p5-Error: 0.17030 [FreeBSD-ports]
        p5-IO-Socket-SSL: 2.095 [FreeBSD-ports]
        p5-MIME-Base32: 1.303 [FreeBSD-ports]
        p5-MIME-Base64: 3.16 [FreeBSD-ports]
        p5-Mozilla-CA: 20250602 [FreeBSD-ports]
        p5-Net-SSLeay: 1.94 [FreeBSD-ports]
        p5-URI: 5.32_1 [FreeBSD-ports]
        poudriere-devel: 3.4.99.20251213 [FreeBSD-ports]
        python311: 3.11.14 [FreeBSD-ports]
        readline: 8.2.13_2 [FreeBSD-ports]

Number of packages to be installed: 28

The process will require 283 MiB more space.
41 MiB to be downloaded.

Proceed with this action? [y/N]: y
[1/25] Fetching mpdecimal-4.0.1~f774e949d8.pkg: 100%  157 KiB 160.5kB/s    00:01    
(...)
[28/28] Installing git-2.51.0...
===> Creating groups
Creating group 'git_daemon' with gid '964'
===> Creating users
Creating user 'git_daemon' with uid '964'
[28/28] Extracting git-2.51.0: 100%

Basic Poudriere Setup

We will now walk through a basic Poudriere setup.

test # export SSL=/usr/local/etc/ssl

test # mkdir -p \
              /usr/ports/distfiles \
              ${SSL}/keys \
              ${SSL}/certs

test # chmod 0600 ${SSL}/keys

test # openssl genrsa -out ${SSL}/keys/poudriere.key 4096

test # openssl rsa \
              -in  ${SSL}/keys/poudriere.key -pubout \
              -out ${SSL}/certs/poudriere.cert

test # mkdir /var/ccache

test # cat << EOF > /usr/local/etc/poudriere.conf
ZPOOL=zroot
FREEBSD_HOST=ftp://ftp.freebsd.org
BASEFS=/usr/local/poudriere
POUDRIERE_DATA=/usr/local/poudriere/data
DISTFILES_CACHE=/usr/ports/distfiles
CCACHE_DIR=/var/ccache
CHECK_CHANGED_OPTIONS=verbose
CHECK_CHANGED_DEPS=yes
PKG_REPO_SIGNING_KEY=/usr/local/etc/ssl/keys/poudriere.key
URL_BASE=http://0.0.0.0/
USE_TMPFS=no
TMPFS_LIMIT=12
MAX_MEMORY=12
PARALLEL_JOBS=4
PREPARE_PARALLEL_JOBS=4
MAX_FILES=4096
KEEP_OLD_PACKAGES=yes
KEEP_OLD_PACKAGES_COUNT=3
CHECK_CHANGED_OPTIONS=verbose
CHECK_CHANGED_DEPS=yes
RESTRICT_NETWORKING=no
PACKAGE_FETCH_URL="http://pkg.FreeBSD.org/\${ABI}"
PACKAGE_FETCH_BRANCH="latest"
export HTTP_PROXY="http://10.0.0.41:3128"
export HTTPS_PROXY="http://10.0.0.41:3128"
export FTP_PROXY="http://10.0.0.41:3128"
EOF

test # mkdir -p /usr/local/poudriere/data/logs/bulk

test # ln -s \
              /usr/local/etc/ssl/certs/poudriere.cert \
              /usr/local/poudriere/data/logs/bulk/poudriere.cert

test # cat << EOF > /usr/local/etc/poudriere.d/make.conf
# general
ALLOW_UNSUPPORTED_SYSTEM=yes
DISABLE_LICENSES=yes

# ccache(1)
WITH_CCACHE_BUILD=yes

# ports options
FORCE_MAKE_JOBS=yes
MAKE_JOBS_UNSAFE=yes
MAKE_JOBS_NUMBER=8
EOF

test # sed -i '' -E 's|text/plain[\t\ ]*txt|text/plain txt log|g' /usr/local/etc/nginx/mime.types

test # cat << EOF > /usr/local/etc/nginx/nginx.conf
events {
  worker_connections 1024;
}

http {
  include      mime.types;
  default_type application/octet-stream;

  server {
    listen 80 default;
    server_name 0.0.0.0;
    root /usr/local/share/poudriere/html;

    location /data {
      alias /usr/local/poudriere/data/logs/bulk;
      autoindex on;
    }

    location /packages {
      root /usr/local/poudriere/data;
      autoindex on;
    }
  }
}
EOF

test # mkdir /root/.cache

test # ln -sf /var/ccache /root/.cache/ccache

test # cat > /var/ccache/ccache.conf << EOF
max_size = 0
cache_dir = /var/ccache
base_dir = /var/ccache
hash_dir = false
EOF

Important Poudriere Config Part

The IMPORTANT settings here – the ones that allow Poudriere to function properly within a proxy environment – are these five lines in the /usr/local/etc/poudriere.conf file.

export HTTP_PROXY="http://10.0.0.41:3128"
export HTTPS_PROXY="http://10.0.0.41:3128"
export FTP_PROXY="http://10.0.0.41:3128"
PACKAGE_FETCH_URL="http://pkg.FreeBSD.org/\${ABI}"
PACKAGE_FETCH_BRANCH="latest"

The first three are obvious – and YES – they need the export prefix to work.
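
A minimal demonstration of why (using a made-up DEMO_PROXY variable so the real environment stays untouched): poudriere.conf is sourced by sh(1), but fetch(1), pkg(8) and friends run as child processes – and children only inherit variables that were exported.

```shell
# A plain assignment is visible in the current shell only:
DEMO_PROXY="http://10.0.0.41:3128"
in_child_plain=$(sh -c 'echo "${DEMO_PROXY:-unset}"')

# After export, child processes inherit it:
export DEMO_PROXY
in_child_exported=$(sh -c 'echo "${DEMO_PROXY:-unset}"')

echo "plain:    ${in_child_plain}"
echo "exported: ${in_child_exported}"
```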

The other two are less obvious … and I will show You why in a moment.

We need to create some FreeBSD jail – we will use 14.3-RELEASE as an example. Use whatever version you will be building packages for.

test # poudriere jail -c -j 14-3-R-amd64 -v 14.3-RELEASE
[00:00:00] Creating 14-3-R-amd64 fs at /usr/local/poudriere/jails/14-3-R-amd64... done
[00:00:00] Using pre-distributed MANIFEST for FreeBSD 14.3-RELEASE amd64
[00:00:00] Fetching base for FreeBSD 14.3-RELEASE amd64
base.txz                                               200 MB 5128 kBps    41s
[00:00:41] Extracting base... done
[00:00:49] Fetching src for FreeBSD 14.3-RELEASE amd64
src.txz                                                206 MB 4828 kBps    44s
[00:01:34] Extracting src... done
[00:01:46] Fetching lib32 for FreeBSD 14.3-RELEASE amd64
lib32.txz                                               60 MB   11 MBps    05s
[00:01:52] Extracting lib32... done
[00:01:54] Cleaning up... done
[00:01:54] Recording filesystem state for clean... done
[00:01:54] Upgrading using http
Looking up update.FreeBSD.org mirrors... none found.
Fetching public key from update.FreeBSD.org... done.
Fetching metadata signature for 14.3-RELEASE from update.FreeBSD.org... done.
Fetching metadata index... done.
Fetching 2 metadata files... done.
Inspecting system... done.
Preparing to download files... done.
Fetching 196 patches.....10....20....30....40....50....60....70....80....90....100....110....120....130....140....150....160....170....180....190... done.
Applying patches... done.
Fetching 40 files... ....10....20....30....40 done.
The following files will be removed as part of updating to
14.3-RELEASE-p7:
/usr/src/contrib/libarchive/libarchive/archive_getdate.c
/usr/src/contrib/libarchive/libarchive/archive_getdate.h
/usr/src/contrib/libarchive/libarchive/test/test_archive_getdate.c
(...)
/usr/src/usr.bin/tar/tests/Makefile
/usr/src/usr.sbin/freebsd-update/freebsd-update.sh
/usr/src/usr.sbin/rtsold/rtsol.c
Installing updates... done.
14.3-RELEASE-p7
[00:03:25] Recording filesystem state for clean... done
[00:03:25] Jail 14-3-R-amd64 14.3-RELEASE-p7 amd64 is ready to be used

test # poudriere jail -l
JAILNAME     VERSION         OSVERSION ARCH  METHOD TIMESTAMP           PATH
14-3-R-amd64 14.3-RELEASE-p7 1403000   amd64 http   2026-01-07 01:33:45 /usr/local/poudriere/jails/14-3-R-amd64

We also need the FreeBSD Ports tree … just in a Poudriere way.

test # poudriere ports -c
[00:00:00] Creating default fs at /usr/local/poudriere/ports/default... done
[00:00:00] Cloning the ports tree...
fatal: unable to access 'https://git.FreeBSD.org/ports.git/': Could not resolve host: git.FreeBSD.org
[00:00:45] Error: /usr/local/share/poudriere/ports.sh:303: fail
[00:00:45] Error while creating ports tree, cleaning up.

This is where the dedicated git(1) config is needed as it's a bitch and ignores the *_PROXY variables 🙂

test # git config --system http.proxy http://10.0.0.41:3128

test # poudriere ports -c
[00:00:00] Creating default fs at /usr/local/poudriere/ports/default... done
[00:00:00] Cloning the ports tree... done

test # poudriere ports -l
PORTSTREE METHOD    TIMESTAMP           PATH
default   git+https 2026-01-07 01:49:46 /usr/local/poudriere/ports/default

Works.

Now – let's try to actually build something with Poudriere.

We will try two ports – one that actually needs to be built (dosunix) and one that is written in POSIX sh(1) and does not need building (lsblk).

I will intentionally run the first build with the proxy variables disabled – commented out like this in the /usr/local/etc/poudriere.conf file:

# PACKAGE_FETCH_URL="http://pkg.FreeBSD.org/\${ABI}"
# PACKAGE_FETCH_BRANCH="latest"
# export HTTP_PROXY="http://10.0.0.41:3128"
# export HTTPS_PROXY="http://10.0.0.41:3128"
# export FTP_PROXY="http://10.0.0.41:3128"

Here.

test # poudriere bulk -c -C -j 14-3-R-amd64 -b latest -p default sysutils/lsblk converters/dosunix

The result is below … and as expected it failed.

test # poudriere bulk -c -C -j 14-3-R-amd64 -b latest -p default sysutils/lsblk converters/dosunix
[00:00:00] Creating the reference jail... done
[00:00:00] Mounting system devices for 14-3-R-amd64-default
[00:00:00] Stashing existing package repository
[00:00:00] Mounting ccache from: /var/ccache
[00:00:00] Mounting ports from: /usr/local/poudriere/ports/default
[00:00:00] Mounting packages from: /usr/local/poudriere/data/packages/14-3-R-amd64-default
[00:00:00] Mounting distfiles from: /usr/ports/distfiles
[00:00:00] Appending to make.conf: /usr/local/etc/poudriere.d/make.conf
/etc/resolv.conf -> /usr/local/poudriere/data/.m/14-3-R-amd64-default/ref/etc/resolv.conf
[00:00:00] Starting jail 14-3-R-amd64-default
Updating /var/run/os-release done.
[00:00:00] Will build as root:wheel (0:0)
[00:00:00] Ports supports: FLAVORS SUBPACKAGES SELECTED_OPTIONS
[00:00:00] Inspecting /usr/local/poudriere/data/.m/14-3-R-amd64-default/ref//usr/ports for modifications to git checkout... no
[00:00:03] Ports top-level git hash: 284813ec0382a2bfe5b2e74a3081a67599d3155d
[00:00:03] Acquiring build logs lock for 14-3-R-amd64-default... done
[00:00:03] Logs: /usr/local/poudriere/data/logs/bulk/14-3-R-amd64-default/2026-01-07_01h58m23s
[00:00:03] WWW: http://0.0.0.0//build.html?mastername=14-3-R-amd64-default&build=2026-01-07_01h58m23s
[00:00:03] Loading MOVED for /usr/local/poudriere/data/.m/14-3-R-amd64-default/ref/usr/ports
[00:00:04] Gathering ports metadata
[00:00:04] Calculating ports order and dependencies
[00:00:04] Sanity checking the repository
[00:00:04] -c specified, cleaning all packages... done
[00:00:04] -C specified, cleaning listed packages
[00:00:04] (-C) Flushing package deletions
[00:00:04] Trimming IGNORED and blacklisted ports
[00:00:04] Package fetch: Looking for missing packages to fetch from pkg+http://pkg.FreeBSD.org/${ABI}/latest
[00:00:04] Package fetch: bootstrapping pkg
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:14:amd64/latest, please wait...
pkg: Attempted to fetch pkg+http://pkg.FreeBSD.org/FreeBSD:14:amd64/latest/Latest/pkg.pkg
pkg: Attempted to fetch pkg+http://pkg.FreeBSD.org/FreeBSD:14:amd64/latest/Latest/pkg.txz
pkg: Error: Address family for host not supported
Address resolution failed for http://pkg.FreeBSD.org/FreeBSD:14:amd64/latest.
[00:08:58] Package fetch: Not fetching as remote repository is unavailable.
[00:08:58] pkg bootstrap missing: unable to inspect existing packages, cleaning all packages... done
[00:08:58] Deleting stale symlinks... done
[00:08:58] Deleting empty directories... done
[00:08:58] Unqueueing existing packages
[00:08:58] Unqueueing orphaned build dependencies
[00:08:58] Sanity checking build queue
[00:08:58] [14-3-R-amd64-default] [2026-01-07_01h58m23s] [pkgqueue_sanity_check] Time: 00:08:54
           Queued: 4 Inspected: 0 Ignored: 0 Built: 0 Failed: 0 Skipped: 0 Fetched: 0 Remaining: 4
[00:08:58] Recording filesystem state for prepkg... done
[00:08:58] Processing PRIORITY_BOOST
[00:08:58] Building 4 packages using up to 4 builders
[00:08:58] Hit CTRL+t at any time to see build progress and stats
[00:08:58] [01] [00:00:00] Builder starting
[00:08:58] [01] [00:00:00] Builder started
[00:08:58] [01] [00:00:00] Building   ports-mgmt/pkg | pkg-2.5.1
[00:11:56] [01] [00:02:58] Finished   ports-mgmt/pkg | pkg-2.5.1: Failed: fetch
[00:11:56] [01] [00:02:58] Skipping   devel/ccache | ccache-3.7.12_8: Dependent port ports-mgmt/pkg | pkg-2.5.1 failed
[00:11:56] [01] [00:02:58] Skipping   converters/dosunix | dosunix-1.0.14: Dependent port ports-mgmt/pkg | pkg-2.5.1 failed
[00:11:56] [01] [00:02:58] Skipping   sysutils/lsblk | lsblk-4.0: Dependent port ports-mgmt/pkg | pkg-2.5.1 failed
[00:11:57] Stopping up to 4 builders
[00:11:57] Creating pkg repository
[00:11:57] No packages present
[00:11:57] Committing packages to repository: /usr/local/poudriere/data/packages/14-3-R-amd64-default/.real_1767751820 via .latest symlink
[00:11:57] Removing old packages
[00:11:57] Failed ports: ports-mgmt/pkg:fetch
[00:11:57] Skipped ports: converters/dosunix devel/ccache sysutils/lsblk
[00:11:57] [14-3-R-amd64-default] [2026-01-07_01h58m23s] [committing] Time: 00:11:53
           Queued: 4 Inspected: 0 Ignored: 0 Built: 0 Failed: 1 Skipped: 3 Fetched: 0 Remaining: 0
[00:11:57] Logs: /usr/local/poudriere/data/logs/bulk/14-3-R-amd64-default/2026-01-07_01h58m23s
[00:11:57] WWW: http://0.0.0.0//build.html?mastername=14-3-R-amd64-default&build=2026-01-07_01h58m23s
[00:11:57] Cleaning up
[00:11:57] Stopping up to 4 builders
[00:11:57] Unmounting file systems
test # 

Now I will run it again – but with the proxy settings enabled in the /usr/local/etc/poudriere.conf file like this.

# PACKAGE_FETCH_URL="http://pkg.FreeBSD.org/\${ABI}"
# PACKAGE_FETCH_BRANCH="latest"
export HTTP_PROXY="http://10.0.0.41:3128"
export HTTPS_PROXY="http://10.0.0.41:3128"
export FTP_PROXY="http://10.0.0.41:3128"

It should work with one small caveat.

test # poudriere bulk -c -C -j 14-3-R-amd64 -b latest -p default sysutils/lsblk converters/dosunix
[00:00:00] Creating the reference jail... done
[00:00:00] Mounting system devices for 14-3-R-amd64-default
[00:00:00] Stashing existing package repository
[00:00:00] Mounting ccache from: /var/ccache
[00:00:00] Mounting ports from: /usr/local/poudriere/ports/default
[00:00:00] Mounting packages from: /usr/local/poudriere/data/packages/14-3-R-amd64-default
[00:00:00] Mounting distfiles from: /usr/ports/distfiles
[00:00:00] Appending to make.conf: /usr/local/etc/poudriere.d/make.conf
/etc/resolv.conf -> /usr/local/poudriere/data/.m/14-3-R-amd64-default/ref/etc/resolv.conf
[00:00:00] Starting jail 14-3-R-amd64-default
Updating /var/run/os-release done.
[00:00:00] Will build as root:wheel (0:0)
[00:00:01] Ports supports: FLAVORS SUBPACKAGES SELECTED_OPTIONS
[00:00:01] Inspecting /usr/local/poudriere/data/.m/14-3-R-amd64-default/ref//usr/ports for modifications to git checkout... no
[00:00:04] Ports top-level git hash: 284813ec0382a2bfe5b2e74a3081a67599d3155d 
[00:00:04] Acquiring build logs lock for 14-3-R-amd64-default... done
[00:00:04] Logs: /usr/local/poudriere/data/logs/bulk/14-3-R-amd64-default/2026-01-07_02h13m56s
[00:00:04] WWW: http://0.0.0.0//build.html?mastername=14-3-R-amd64-default&build=2026-01-07_02h13m56s
[00:00:04] Loading MOVED for /usr/local/poudriere/data/.m/14-3-R-amd64-default/ref/usr/ports
[00:00:04] Gathering ports metadata
[00:00:04] Calculating ports order and dependencies
[00:00:04] Sanity checking the repository
[00:00:04] -c specified, cleaning all packages... done
[00:00:04] -C specified, cleaning listed packages
[00:00:04] (-C) Flushing package deletions
[00:00:04] Trimming IGNORED and blacklisted ports
[00:00:04] Package fetch: Looking for missing packages to fetch from pkg+http://pkg.FreeBSD.org/${ABI}/latest
[00:00:04] Package fetch: bootstrapping pkg
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:14:amd64/latest, please wait...
[14-3-R-amd64-default] Installing pkg-2.5.1...
[14-3-R-amd64-default] Extracting pkg-2.5.1: 100%
Updating Poudriere repository catalogue...
pkg: No SRV record found for the repo 'Poudriere'
[14-3-R-amd64-default] Fetching meta.conf: 100%    179 B   0.2 k/s    00:01    
pkg: packagesite URL error for pkg+http://pkg.FreeBSD.org/FreeBSD:14:amd64/latest/data.pkg -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+http://pkg.FreeBSD.org/FreeBSD:14:amd64/latest/data.tzst -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+http://pkg.FreeBSD.org/FreeBSD:14:amd64/latest/packagesite.pkg -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+http://pkg.FreeBSD.org/FreeBSD:14:amd64/latest/packagesite.tzst -- pkg+:// implies SRV mirror type
Unable to update repository Poudriere
Error updating repositories!
[00:00:29] Package fetch: Not fetching as remote repository is unavailable.
[00:00:29] pkg bootstrap missing: unable to inspect existing packages, cleaning all packages... done
[00:00:29] Deleting stale symlinks... done
[00:00:29] Deleting empty directories... done
[00:00:29] Unqueueing existing packages
[00:00:29] Unqueueing orphaned build dependencies
[00:00:29] Sanity checking build queue
[00:00:29] [14-3-R-amd64-default] [2026-01-07_02h13m56s] [pkgqueue_sanity_check] Time: 00:00:26
           Queued: 4 Inspected: 0 Ignored: 0 Built: 0 Failed: 0 Skipped: 0 Fetched: 0 Remaining: 4
[00:00:29] Recording filesystem state for prepkg... done
[00:00:29] Processing PRIORITY_BOOST
[00:00:30] Building 4 packages using up to 4 builders
[00:00:30] Hit CTRL+t at any time to see build progress and stats
[00:00:30] [01] [00:00:00] Builder starting
[00:00:30] [01] [00:00:00] Builder started
[00:00:30] [01] [00:00:00] Building   ports-mgmt/pkg | pkg-2.5.1
[00:02:34] [01] [00:02:04] Finished   ports-mgmt/pkg | pkg-2.5.1: Success
[00:02:34] [02] [00:00:00] Builder starting
[00:02:34] [01] [00:00:00] Building   devel/ccache | ccache-3.7.12_8
[00:02:35] [02] [00:00:01] Builder started
[00:02:35] [02] [00:00:00] Building   sysutils/lsblk | lsblk-4.0
[00:02:36] [02] [00:00:01] Finished   sysutils/lsblk | lsblk-4.0: Success
[00:02:39] [01] [00:00:05] Finished   devel/ccache | ccache-3.7.12_8: Success
[00:02:39] [01] [00:00:00] Building   converters/dosunix | dosunix-1.0.14
[00:02:42] [01] [00:00:03] Finished   converters/dosunix | dosunix-1.0.14: Success
[00:02:42] Stopping up to 4 builders
[00:02:42] Creating pkg repository
[00:02:42] Signing repository with key: /usr/local/etc/ssl/keys/poudriere.key
Creating repository in /tmp/packages: 100%
Packing files for repository: 100%
[00:02:43] Signing pkg bootstrap with method: pubkey
[00:02:43] Committing packages to repository: /usr/local/poudriere/data/packages/14-3-R-amd64-default/.real_1767752199 via .latest symlink
[00:02:43] Removing old packages
[00:02:43] Built ports: ports-mgmt/pkg sysutils/lsblk devel/ccache converters/dosunix
[00:02:43] [14-3-R-amd64-default] [2026-01-07_02h13m56s] [committing] Time: 00:02:39
           Queued: 4 Inspected: 0 Ignored: 0 Built: 4 Failed: 0 Skipped: 0 Fetched: 0 Remaining: 0
[00:02:43] Logs: /usr/local/poudriere/data/logs/bulk/14-3-R-amd64-default/2026-01-07_02h13m56s
[00:02:43] WWW: http://0.0.0.0//build.html?mastername=14-3-R-amd64-default&build=2026-01-07_02h13m56s
[00:02:43] Cleaning up
[00:02:43] Stopping up to 4 builders
[00:02:43] Unmounting file systems

Also in nice graphical colored form.

The build generally ended successfully – we have our packages available.

test # ls -l /usr/local/poudriere/data/packages/14-3-R-amd64-default/All/
total 6294
-rw-r--r--  1 root wheel  126041 Jan  7 02:16 ccache-3.7.12_8.pkg
-rw-r--r--  1 root wheel    5963 Jan  7 02:16 dosunix-1.0.14.pkg
-rw-r--r--  1 root wheel    6544 Jan  7 02:16 lsblk-4.0.pkg
-rw-r--r--  1 root wheel 6290261 Jan  7 02:16 pkg-2.5.1.pkg

The only errors were the ones below – and they did not break our build:

(...)
Updating Poudriere repository catalogue...
pkg: No SRV record found for the repo 'Poudriere'
[14-3-R-amd64-default] Fetching meta.conf: 100%    179 B   0.2 k/s    00:01    
pkg: packagesite URL error for pkg+http://pkg.FreeBSD.org/FreeBSD:14:amd64/latest/data.pkg -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+http://pkg.FreeBSD.org/FreeBSD:14:amd64/latest/data.tzst -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+http://pkg.FreeBSD.org/FreeBSD:14:amd64/latest/packagesite.pkg -- pkg+:// implies SRV mirror type
pkg: packagesite URL error for pkg+http://pkg.FreeBSD.org/FreeBSD:14:amd64/latest/packagesite.tzst -- pkg+:// implies SRV mirror type
Unable to update repository Poudriere
Error updating repositories!
[00:00:29] Package fetch: Not fetching as remote repository is unavailable.
(...)

To get an idea of what is wrong here we need to level up our debugging skills and get our hands dirty with FreeBSD tools like ktrace(8) and kdump(8) to find out what is missing.

test # ktrace -di poudriere bulk -c -C -j 14-3-R-amd64 -b latest -p default sysutils/lsblk converters/dosunix
(...)
[00:00:32] [01] [00:00:00] Builder starting
[00:00:33] [01] [00:00:01] Builder started
[00:00:33] [01] [00:00:00] Building   ports-mgmt/pkg | pkg-2.5.1

When You reach this place – just hit [CTRL]+[C] to stop it – there is no need to wait for it to finish.

Now check the gathered data with kdump(8).

test # kdump | grep -m 5 -C 2 'pkg+' | tail -4
        Poudriere: {
        url: pkg+http://pkg.FreeBSD.org/${ABI}/latest,
        mirror_type: srv
        }

I wanted to show only the part that was important – of course I did not guess it like that … I found it with the help of a friend.

Poudriere – on the fly – defines an additional repository … and if you did not override it – yes – it will use both the pkg+ URL scheme and the srv mirror type … things that hurt our proxy environment.

This is the part of the Poudriere code that is responsible for generating it.

cat >> "${MASTERMNT:?}/etc/pkg/poudriere.conf" <<-EOF
FreeBSD: {
        enabled: no,
        priority: 100
}
FreeBSD-kmods: {
        enabled: no,
        priority: 100
}
FreeBSD-ports: {
        enabled: no,
        priority: 100
}
FreeBSD-ports-kmods: {
        enabled: no,
        priority: 100
}
FreeBSD-base: {
        enabled: no,
        priority: 100
}

Poudriere: {
        url: ${packagesite},
        mirror_type: $(if [ "${packagesite#pkg+}" = "${packagesite}" ]; then echo "none"; else echo "srv"; fi)
}
EOF

So it will generate the ‘broken for proxy’ config like this:

Poudriere: {
        url: pkg+http://pkg.FreeBSD.org/${ABI}/latest,
        mirror_type: srv
}

Looking at the code above you can see the if that checks how the packagesite is defined.
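
That check relies on POSIX parameter expansion: ${packagesite#pkg+} strips a leading pkg+ if present, so when the stripped result still equals the original, there was no pkg+ prefix. A minimal self-contained sketch of the same logic (the mirror_type_for helper is hypothetical, just for illustration):

```shell
# Hypothetical helper reproducing the check from the Poudriere snippet:
# strip a leading 'pkg+'; if nothing changed, the URL had no such prefix.
mirror_type_for() {
    packagesite="$1"
    if [ "${packagesite#pkg+}" = "${packagesite}" ]; then
        echo "none"
    else
        echo "srv"
    fi
}

mirror_type_for "pkg+http://pkg.FreeBSD.org/FreeBSD:14:amd64/latest"   # prints: srv
mirror_type_for "http://pkg.FreeBSD.org/FreeBSD:14:amd64/latest"       # prints: none
```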

Let's see how Poudriere figures that one out in the code.

test # grep -m 1 packagesite= common.sh
        packagesite="${PACKAGE_FETCH_URL:+${PACKAGE_FETCH_URL}/}${PACKAGE_FETCH_BRANCH}"
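
The ${PACKAGE_FETCH_URL:+...} construct substitutes the URL plus a trailing slash only when PACKAGE_FETCH_URL is set and non-empty – otherwise only the branch remains. A sketch with the values from our config (note the literal ${ABI} surviving, since it is pkg(8) that expands it later, not this shell):

```shell
# With both variables set - the URL (with literal ${ABI}) plus branch:
PACKAGE_FETCH_URL='http://pkg.FreeBSD.org/${ABI}'
PACKAGE_FETCH_BRANCH="latest"
packagesite="${PACKAGE_FETCH_URL:+${PACKAGE_FETCH_URL}/}${PACKAGE_FETCH_BRANCH}"
echo "${packagesite}"   # prints: http://pkg.FreeBSD.org/${ABI}/latest

# With PACKAGE_FETCH_URL unset - only the branch remains:
unset PACKAGE_FETCH_URL
packagesite="${PACKAGE_FETCH_URL:+${PACKAGE_FETCH_URL}/}${PACKAGE_FETCH_BRANCH}"
echo "${packagesite}"   # prints: latest
```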

So the answer is that we need to set PACKAGE_FETCH_URL and PACKAGE_FETCH_BRANCH in the /usr/local/etc/poudriere.conf file … the ones I commented out earlier to explain all that in detail.

So now – with all the needed settings enabled in the /usr/local/etc/poudriere.conf file – we get a fully working Poudriere build in a proxy environment.

PACKAGE_FETCH_URL="http://pkg.FreeBSD.org/\${ABI}"
PACKAGE_FETCH_BRANCH="latest"
export HTTP_PROXY="http://10.0.0.41:3128"
export HTTPS_PROXY="http://10.0.0.41:3128"
export FTP_PROXY="http://10.0.0.41:3128"

Remember to keep the \$ backslashed as Poudriere is written in POSIX sh(1) and would otherwise expand ${ABI} itself.
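
A sketch of why the backslash matters (hypothetical variables): inside a double-quoted string, \$ yields a literal dollar sign, while an unescaped ${ABI} is expanded immediately – to nothing, since ABI is unset when sh(1) reads the config.

```shell
# ABI is deliberately unset here, as it would be when sh(1) reads the config.
unset ABI
escaped="http://pkg.FreeBSD.org/\${ABI}"     # \$ survives as a literal $
unescaped="http://pkg.FreeBSD.org/${ABI}"    # ${ABI} expands to an empty string

echo "${escaped}"     # prints: http://pkg.FreeBSD.org/${ABI}
echo "${unescaped}"   # prints: http://pkg.FreeBSD.org/
```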

Now the fully working run.

test # poudriere bulk -c -C -j 14-3-R-amd64 -b latest -p default sysutils/lsblk converters/dosunix
[00:00:00] Creating the reference jail... done
[00:00:00] Mounting system devices for 14-3-R-amd64-default
[00:00:00] Stashing existing package repository
[00:00:00] Mounting ccache from: /var/ccache
[00:00:00] Mounting ports from: /usr/local/poudriere/ports/default
[00:00:00] Mounting packages from: /usr/local/poudriere/data/packages/14-3-R-amd64-default
[00:00:00] Mounting distfiles from: /usr/ports/distfiles
[00:00:00] Appending to make.conf: /usr/local/etc/poudriere.d/make.conf
/etc/resolv.conf -> /usr/local/poudriere/data/.m/14-3-R-amd64-default/ref/etc/resolv.conf
[00:00:00] Starting jail 14-3-R-amd64-default
Updating /var/run/os-release done.
[00:00:00] Will build as root:wheel (0:0)
[00:00:00] Ports supports: FLAVORS SUBPACKAGES SELECTED_OPTIONS
[00:00:00] Inspecting /usr/local/poudriere/data/.m/14-3-R-amd64-default/ref//usr/ports for modifications to git checkout... no
[00:00:03] Ports top-level git hash: 284813ec0382a2bfe5b2e74a3081a67599d3155d 
[00:00:03] Acquiring build logs lock for 14-3-R-amd64-default... done
[00:00:03] Logs: /usr/local/poudriere/data/logs/bulk/14-3-R-amd64-default/2026-01-07_03h52m21s
[00:00:03] WWW: http://0.0.0.0//build.html?mastername=14-3-R-amd64-default&build=2026-01-07_03h52m21s
[00:00:03] Loading MOVED for /usr/local/poudriere/data/.m/14-3-R-amd64-default/ref/usr/ports
[00:00:04] Gathering ports metadata
[00:00:04] Calculating ports order and dependencies
[00:00:04] Sanity checking the repository
[00:00:04] -c specified, cleaning all packages... done
[00:00:04] -C specified, cleaning listed packages
[00:00:04] (-C) Flushing package deletions
[00:00:04] Trimming IGNORED and blacklisted ports
[00:00:04] Package fetch: Looking for missing packages to fetch from http://pkg.FreeBSD.org/${ABI}/latest
[00:00:04] Package fetch: bootstrapping pkg
Bootstrapping pkg from http://pkg.FreeBSD.org/FreeBSD:14:amd64/latest, please wait...
[14-3-R-amd64-default] Installing pkg-2.5.1...
[14-3-R-amd64-default] Extracting pkg-2.5.1: 100%
Updating Poudriere repository catalogue...
[14-3-R-amd64-default] Fetching meta.conf: 100%    179 B   0.2 k/s    00:01    
[14-3-R-amd64-default] Fetching data: 100%   11 MiB   3.7 M/s    00:03    
Processing entries: 100%
Poudriere repository update completed. 36661 packages processed.
All repositories are up to date.
[00:00:20] Package fetch: Will fetch 3 packages from remote or local pkg cache
Updating database digests format: 100%
The following packages will be fetched:

New packages to be FETCHED:
        ccache: 3.7.12_8 (134 KiB: 91.17% of the 147 KiB to download)
        dosunix: 1.0.14 (6 KiB: 4.04% of the 147 KiB to download)
        lsblk: 4.0 (7 KiB: 4.79% of the 147 KiB to download)

Number of packages to be fetched: 3

147 KiB to be downloaded.
[14-3-R-amd64-default] Fetching ccache-3.7.12_8: 100%  134 KiB 137.3 k/s    00:01    
[14-3-R-amd64-default] Fetching lsblk-4.0: 100%    7 KiB   7.2 k/s    00:01    
[14-3-R-amd64-default] Fetching dosunix-1.0.14: 100%    6 KiB   6.1 k/s    00:01    
[00:00:21] Package fetch: Using cached copy of ccache-3.7.12_8
[00:00:21] Package fetch: Using cached copy of dosunix-1.0.14
[00:00:21] Package fetch: Using cached copy of lsblk-4.0
[00:00:21] Checking packages for incremental rebuild needs
[00:00:21] Deleting stale symlinks... done
[00:00:21] Deleting empty directories... done
[00:00:21] Package fetch: Generating logs for fetched packages
[00:00:21] Unqueueing existing packages
[00:00:21] Unqueueing orphaned build dependencies
[00:00:21] Sanity checking build queue
[00:00:21] [14-3-R-amd64-default] [2026-01-07_03h52m21s] [pkgqueue_sanity_check] Time: 00:00:18
           Queued: 4 Inspected: 0 Ignored: 0 Built: 0 Failed: 0 Skipped: 0 Fetched: 3 Remaining: 1
[00:00:21] Recording filesystem state for prepkg... done
[00:00:21] Processing PRIORITY_BOOST
[00:00:21] Building 1 packages using up to 1 builders
[00:00:21] Hit CTRL+t at any time to see build progress and stats
[00:00:21] [01] [00:00:00] Builder starting
[00:00:21] [01] [00:00:00] Builder started
[00:00:21] [01] [00:00:00] Building   ports-mgmt/pkg | pkg-2.5.1
[00:02:25] [01] [00:02:04] Finished   ports-mgmt/pkg | pkg-2.5.1: Success
[00:02:25] Stopping up to 1 builders
[00:02:25] Creating pkg repository
[00:02:25] Signing repository with key: /usr/local/etc/ssl/keys/poudriere.key
Creating repository in /tmp/packages: 100%
Packing files for repository: 100%
[00:02:25] Signing pkg bootstrap with method: pubkey
[00:02:25] Committing packages to repository: /usr/local/poudriere/data/packages/14-3-R-amd64-default/.real_1767758087 via .latest symlink
[00:02:25] Removing old packages
[00:02:25] Built ports: ports-mgmt/pkg
[00:02:25] Fetched ports: sysutils/lsblk converters/dosunix devel/ccache
[00:02:25] [14-3-R-amd64-default] [2026-01-07_03h52m21s] [committing] Time: 00:02:23
           Queued: 4 Inspected: 0 Ignored: 0 Built: 1 Failed: 0 Skipped: 0 Fetched: 3 Remaining: 0
[00:02:25] Logs: /usr/local/poudriere/data/logs/bulk/14-3-R-amd64-default/2026-01-07_03h52m21s
[00:02:25] WWW: http://0.0.0.0//build.html?mastername=14-3-R-amd64-default&build=2026-01-07_03h52m21s
[00:02:25] Cleaning up
[00:02:25] Stopping up to 4 builders
[00:02:25] Unmounting file systems
test # 

… and in the TECHNICOLOR form 🙂

The part that was broken earlier is now fine.

Updating Poudriere repository catalogue...
[14-3-R-amd64-default] Fetching meta.conf: 100%    179 B   0.2 k/s    00:01    
[14-3-R-amd64-default] Fetching data: 100%   11 MiB   3.7 M/s    00:03    
Processing entries: 100%

I believe that concludes this article – let me know if I missed anything.

EOF

Show us your refcompressratio

Post by Dan Langille via Dan Langille's Other Diary »

Following on from a recent toot, I’ve decided to expose some very personal data.

aws-1

[18:50 aws-1 dvl ~] % zfs get refcompressratio -t filesystem
NAME                                                          PROPERTY          VALUE     SOURCE
data01                                                        refcompressratio  1.00x     -
data01/freshports                                             refcompressratio  1.00x     -
data01/freshports/ingress01                                   refcompressratio  1.00x     -
data01/freshports/ingress01/jails                             refcompressratio  1.00x     -
data01/freshports/ingress01/jails/freshports                  refcompressratio  1.91x     -
data01/freshports/ingress01/jails/freshports.14.3             refcompressratio  1.91x     -
data01/freshports/ingress01/jails/ports                       refcompressratio  1.20x     -
data01/freshports/ingress01/var                               refcompressratio  1.00x     -
data01/freshports/ingress01/var/db                            refcompressratio  1.00x     -
data01/freshports/ingress01/var/db/freshports                 refcompressratio  1.99x     -
data01/freshports/ingress01/var/db/freshports/cache           refcompressratio  1.00x     -
data01/freshports/ingress01/var/db/freshports/cache/html      refcompressratio  1.18x     -
data01/freshports/ingress01/var/db/freshports/cache/spooling  refcompressratio  1.00x     -
data01/freshports/ingress01/var/db/freshports/message-queues  refcompressratio  2.74x     -
data01/freshports/ingress01/var/db/ingress                    refcompressratio  1.00x     -
data01/freshports/ingress01/var/db/ingress/message-queues     refcompressratio  1.00x     -
data01/freshports/ingress01/var/db/ingress/repos              refcompressratio  1.02x     -
data01/freshports/jailed                                      refcompressratio  1.00x     -
data01/freshports/jailed/ingress01                            refcompressratio  1.00x     -
data01/freshports/jailed/ingress01/jails                      refcompressratio  1.00x     -
data01/freshports/jailed/ingress01/mkjail                     refcompressratio  1.00x     -
data01/freshports/jailed/ingress01/mkjail/14.3-RELEASE        refcompressratio  2.10x     -
data01/freshports/jailed/ingress01/mkjail/15.0-RELEASE        refcompressratio  2.02x     -
data01/freshports/nginx01                                     refcompressratio  1.00x     -
data01/freshports/nginx01/var                                 refcompressratio  1.00x     -
data01/freshports/nginx01/var/db                              refcompressratio  1.00x     -
data01/freshports/nginx01/var/db/freshports                   refcompressratio  1.00x     -
data01/freshports/nginx01/var/db/freshports/cache             refcompressratio  1.00x     -
data01/freshports/nginx01/var/db/freshports/cache/categories  refcompressratio  8.26x     -
data01/freshports/nginx01/var/db/freshports/cache/commits     refcompressratio  1.22x     -
data01/freshports/nginx01/var/db/freshports/cache/daily       refcompressratio  6.57x     -
data01/freshports/nginx01/var/db/freshports/cache/general     refcompressratio  2.84x     -
data01/freshports/nginx01/var/db/freshports/cache/news        refcompressratio  4.06x     -
data01/freshports/nginx01/var/db/freshports/cache/packages    refcompressratio  3.31x     -
data01/freshports/nginx01/var/db/freshports/cache/pages       refcompressratio  1.00x     -
data01/freshports/nginx01/var/db/freshports/cache/ports       refcompressratio  2.83x     -
data01/freshports/nginx01/var/db/freshports/cache/spooling    refcompressratio  1.00x     -
data01/jails                                                  refcompressratio  1.00x     -
data01/jails-tmp                                              refcompressratio  1.00x     -
data01/jails/ingress01                                        refcompressratio  1.18x     -
data01/jails/nginx01                                          refcompressratio  1.79x     -
data01/mkjail                                                 refcompressratio  1.00x     -
data01/mkjail/14.2-RELEASE                                    refcompressratio  2.05x     -
data01/mkjail/14.3-RELEASE                                    refcompressratio  2.10x     -
data01/mkjail/15.0-RELEASE                                    refcompressratio  2.02x     -
data01/reserved                                               refcompressratio  1.00x     -
data01/rsyncer                                                refcompressratio  1.00x     -
zroot                                                         refcompressratio  1.00x     -
zroot/ROOT                                                    refcompressratio  1.00x     -
zroot/ROOT/14.3-RELEASE-p7_2026-01-27_225127                  refcompressratio  1.17x     -
zroot/ROOT/14.3-RELEASE-p8_2026-02-15_161308                  refcompressratio  1.17x     -
zroot/ROOT/15.0-RELEASE-p2_2026-02-15_161850                  refcompressratio  1.17x     -
zroot/ROOT/15.0-RELEASE-p3_2026-02-15_163309                  refcompressratio  1.14x     -
zroot/ROOT/default                                            refcompressratio  1.14x     -
zroot/freebsd_releases                                        refcompressratio  1.00x     -
zroot/freshports                                              refcompressratio  1.00x     -
zroot/mkjail                                                  refcompressratio  1.00x     -
zroot/reserved                                                refcompressratio  1.00x     -
zroot/tmp                                                     refcompressratio  1.05x     -
zroot/usr                                                     refcompressratio  1.00x     -
zroot/usr/home                                                refcompressratio  1.03x     -
zroot/usr/ports                                               refcompressratio  1.00x     -
zroot/usr/src                                                 refcompressratio  1.00x     -
zroot/var                                                     refcompressratio  1.00x     -
zroot/var/audit                                               refcompressratio  1.00x     -
zroot/var/crash                                               refcompressratio  1.04x     -
zroot/var/log                                                 refcompressratio  7.10x     -
zroot/var/mail                                                refcompressratio  1.00x     -
zroot/var/tmp                                                 refcompressratio  2.89x     -

gw01

[18:50 gw01 dvl ~] % zfs get refcompressratio -t filesystem
NAME                                          PROPERTY          VALUE     SOURCE
zroot                                         refcompressratio  1.00x     -
zroot/ROOT                                    refcompressratio  1.00x     -
zroot/ROOT/14.3-RELEASE-p8_2026-02-15_190716  refcompressratio  1.29x     -
zroot/ROOT/15.0-RELEASE-p2_2026-02-15_191130  refcompressratio  1.30x     -
zroot/ROOT/15.0-RELEASE-p3_2026-02-15_191659  refcompressratio  1.28x     -
zroot/ROOT/before.15.0                        refcompressratio  1.31x     -
zroot/ROOT/default                            refcompressratio  1.28x     -
zroot/home                                    refcompressratio  1.00x     -
zroot/home/dvl                                refcompressratio  1.00x     -
zroot/jails                                   refcompressratio  1.00x     -
zroot/jails/ex1                               refcompressratio  2.00x     -
zroot/jails/ex2                               refcompressratio  2.00x     -
zroot/mkjail                                  refcompressratio  1.00x     -
zroot/mkjail/14.3-RELEASE                     refcompressratio  2.10x     -
zroot/reserved                                refcompressratio  1.00x     -
zroot/tmp                                     refcompressratio  1.06x     -
zroot/usr                                     refcompressratio  1.00x     -
zroot/usr/ports                               refcompressratio  1.00x     -
zroot/usr/src                                 refcompressratio  1.00x     -
zroot/var                                     refcompressratio  1.00x     -
zroot/var/audit                               refcompressratio  1.00x     -
zroot/var/crash                               refcompressratio  1.01x     -
zroot/var/log                                 refcompressratio  6.93x     -
zroot/var/mail                                refcompressratio  1.00x     -
zroot/var/tmp                                 refcompressratio  2.43x     -

r720-02

[18:50 r720-02 dvl ~] % zfs get refcompressratio -t filesystem
NAME                                                                     PROPERTY          VALUE     SOURCE
data01                                                                   refcompressratio  1.00x     -
data01/backups                                                           refcompressratio  1.00x     -
data01/backups/rscyncer                                                  refcompressratio  1.00x     -
data01/backups/rscyncer/backups                                          refcompressratio  1.00x     -
data01/backups/rscyncer/backups/Bacula                                   refcompressratio  3.06x     -
data01/backups/rscyncer/backups/bacula-database                          refcompressratio  2.27x     -
data01/freebsd_releases                                                  refcompressratio  1.00x     -
data01/freshports                                                        refcompressratio  1.00x     -
data01/freshports/ingress01                                              refcompressratio  1.00x     -
data01/freshports/ingress01/jails                                        refcompressratio  1.00x     -
data01/freshports/ingress01/jails/freshports                             refcompressratio  1.91x     -
data01/freshports/ingress01/jails/freshports.14.3                        refcompressratio  2.00x     -
data01/freshports/ingress01/ports                                        refcompressratio  1.19x     -
data01/freshports/ingress01/var                                          refcompressratio  1.00x     -
data01/freshports/ingress01/var/db                                       refcompressratio  1.00x     -
data01/freshports/ingress01/var/db/freshports                            refcompressratio  1.00x     -
data01/freshports/ingress01/var/db/freshports/cache                      refcompressratio  1.00x     -
data01/freshports/ingress01/var/db/freshports/cache/html                 refcompressratio  1.18x     -
data01/freshports/ingress01/var/db/freshports/cache/spooling             refcompressratio  1.00x     -
data01/freshports/ingress01/var/db/freshports/message-queues             refcompressratio  2.58x     -
data01/freshports/ingress01/var/db/freshports/repos                      refcompressratio  1.00x     -
data01/freshports/ingress01/var/db/ingress                               refcompressratio  1.00x     -
data01/freshports/ingress01/var/db/ingress/message-queues                refcompressratio  1.00x     -
data01/freshports/ingress01/var/db/ingress/repos                         refcompressratio  1.00x     -
data01/freshports/ingress01/var/db/ingress/repos/doc                     refcompressratio  1.24x     -
data01/freshports/ingress01/var/db/ingress/repos/ports                   refcompressratio  1.17x     -
data01/freshports/ingress01/var/db/ingress/repos/src                     refcompressratio  1.31x     -
data01/freshports/ingress01/var/db/ingress_svn.DELETE.ME                 refcompressratio  1.00x     -
data01/freshports/ingress01/var/db/ingress_svn.DELETE.ME/message_queues  refcompressratio  1.00x     -
data01/freshports/jailed                                                 refcompressratio  1.00x     -
data01/freshports/jailed/ingress01                                       refcompressratio  1.00x     -
data01/freshports/jailed/ingress01/distfiles                             refcompressratio  1.00x     -
data01/freshports/jailed/ingress01/mkjail                                refcompressratio  1.00x     -
data01/freshports/jailed/ingress01/mkjail/14.3-RELEASE                   refcompressratio  2.09x     -
data01/freshports/jailed/nginx01                                         refcompressratio  1.00x     -
data01/freshports/jailed/nginx01/var                                     refcompressratio  1.00x     -
data01/freshports/jailed/nginx01/var/db                                  refcompressratio  1.00x     -
data01/freshports/jailed/nginx01/var/db/freshports                       refcompressratio  1.00x     -
data01/freshports/jailed/nginx01/var/db/freshports/cache                 refcompressratio  1.00x     -
data01/freshports/jailed/nginx01/var/db/freshports/cache/categories      refcompressratio  3.26x     -
data01/freshports/jailed/nginx01/var/db/freshports/cache/commits         refcompressratio  1.01x     -
data01/freshports/jailed/nginx01/var/db/freshports/cache/daily           refcompressratio  5.23x     -
data01/freshports/jailed/nginx01/var/db/freshports/cache/general         refcompressratio  1.29x     -
data01/freshports/jailed/nginx01/var/db/freshports/cache/news            refcompressratio  3.24x     -
data01/freshports/jailed/nginx01/var/db/freshports/cache/packages        refcompressratio  1.00x     -
data01/freshports/jailed/nginx01/var/db/freshports/cache/pages           refcompressratio  1.00x     -
data01/freshports/jailed/nginx01/var/db/freshports/cache/ports           refcompressratio  3.80x     -
data01/freshports/jailed/nginx01/var/db/freshports/cache/spooling        refcompressratio  1.00x     -
data01/jails                                                             refcompressratio  1.00x     -
data01/jails/bw                                                          refcompressratio  1.58x     -
data01/jails/ingress01                                                   refcompressratio  2.14x     -
data01/jails/nginx01                                                     refcompressratio  1.27x     -
data01/jails/ns3                                                         refcompressratio  1.30x     -
data01/jails/perl540                                                     refcompressratio  1.91x     -
data01/jails/pg01                                                        refcompressratio  1.28x     -
data01/jails/proxy01                                                     refcompressratio  1.90x     -
data01/jails/svn                                                         refcompressratio  1.85x     -
data01/mkjail                                                            refcompressratio  1.00x     -
data01/mkjail/14.1-RELEASE                                               refcompressratio  2.04x     -
data01/mkjail/14.2-RELEASE                                               refcompressratio  2.04x     -
data01/mkjail/14.3-RELEASE                                               refcompressratio  2.09x     -
data01/mkjail/15.0-RELEASE                                               refcompressratio  2.02x     -
data01/pg01                                                              refcompressratio  1.00x     -
data01/pg01/dan                                                          refcompressratio  1.00x     -
data01/pg01/postgres                                                     refcompressratio  1.75x     -
data01/reserved                                                          refcompressratio  1.00x     -
zroot                                                                    refcompressratio  1.00x     -
zroot/ROOT                                                               refcompressratio  1.00x     -
zroot/ROOT/14.3-RELEASE-p8_2026-02-15_161344                             refcompressratio  1.18x     -
zroot/ROOT/15.0-RELEASE-p2_2026-02-15_162159                             refcompressratio  1.18x     -
zroot/ROOT/15.0-RELEASE-p3_2026-02-15_163239                             refcompressratio  1.18x     -
zroot/ROOT/before.15.0                                                   refcompressratio  1.18x     -
zroot/ROOT/default                                                       refcompressratio  1.17x     -
zroot/data                                                               refcompressratio  1.00x     -
zroot/data/home                                                          refcompressratio  2.71x     -
zroot/reserved                                                           refcompressratio  1.00x     -
zroot/tmp                                                                refcompressratio  1.00x     -
zroot/usr                                                                refcompressratio  1.00x     -
zroot/usr/src                                                            refcompressratio  1.00x     -
zroot/var                                                                refcompressratio  1.00x     -
zroot/var/audit                                                          refcompressratio  1.00x     -
zroot/var/crash                                                          refcompressratio  1.01x     -
zroot/var/log                                                            refcompressratio  4.51x     -
zroot/var/mail                                                           refcompressratio  1.00x     -
zroot/var/tmp                                                            refcompressratio  2.78x     -
[18:50 r720-02 dvl ~] % 
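
With this many datasets per host, it can help to rank them by ratio. A minimal sketch (not from the original post): pipe saved `zfs get` output through awk and sort, stripping the trailing `x` so the values sort numerically. The inline sample lines below are just a stand-in for a saved copy of the output above.

```shell
# Rank datasets by refcompressratio, highest first.
# The printf lines mimic a few rows of saved `zfs get` output.
printf '%s\n' \
  'zroot/var/log  refcompressratio  7.10x  -' \
  'zroot/tmp      refcompressratio  1.05x  -' \
  'zroot/var/tmp  refcompressratio  2.89x  -' |
awk '{gsub(/x$/, "", $3); print $3, $1}' |   # drop the trailing "x", keep value + name
sort -rn                                      # numeric sort, descending
```

On a live system you could feed it real data instead, e.g. `zfs get -H -o name,property,value,source -t filesystem refcompressratio | ...` (the `-H` flag suppresses the header for scripting).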

r730-01

[18:50 r730-01 dvl ~] % zfs get refcompressratio -t filesystem
NAME                                                                PROPERTY          VALUE     SOURCE
data01                                                              refcompressratio  1.00x     -
data01/jail_within_jail                                             refcompressratio  1.00x     -
data01/jail_within_jail/jails                                       refcompressratio  1.00x     -
data01/jail_within_jail/jails/freshports                            refcompressratio  2.14x     -
data01/mkjail                                                       refcompressratio  1.00x     -
data01/mkjail/14.3-RELEASE                                          refcompressratio  2.10x     -
data01/mkjail/15.0-RELEASE                                          refcompressratio  2.01x     -
data01/reserved                                                     refcompressratio  1.00x     -
data02                                                              refcompressratio  1.00x     -
data02/freshports                                                   refcompressratio  1.00x     -
data02/freshports/dev-ingress01                                     refcompressratio  1.00x     -
data02/freshports/dev-ingress01/dvl-src                             refcompressratio  1.36x     -
data02/freshports/dev-ingress01/freshports                          refcompressratio  2.28x     -
data02/freshports/dev-ingress01/freshports/cache                    refcompressratio  1.01x     -
data02/freshports/dev-ingress01/freshports/cache/html               refcompressratio  1.03x     -
data02/freshports/dev-ingress01/freshports/cache/spooling           refcompressratio  1.00x     -
data02/freshports/dev-ingress01/freshports/message-queues           refcompressratio  3.98x     -
data02/freshports/dev-ingress01/freshports/message-queues/archive   refcompressratio  1.96x     -
data02/freshports/dev-ingress01/ingress                             refcompressratio  1.00x     -
data02/freshports/dev-ingress01/ingress/latest_commits              refcompressratio  1.55x     -
data02/freshports/dev-ingress01/ingress/message-queues              refcompressratio  1.00x     -
data02/freshports/dev-ingress01/ingress/repos                       refcompressratio  1.00x     -
data02/freshports/dev-ingress01/ingress/repos/doc                   refcompressratio  1.32x     -
data02/freshports/dev-ingress01/ingress/repos/ports                 refcompressratio  1.20x     -
data02/freshports/dev-ingress01/ingress/repos/src                   refcompressratio  1.38x     -
data02/freshports/dev-ingress01/jails                               refcompressratio  1.00x     -
data02/freshports/dev-ingress01/jails/freshports                    refcompressratio  2.28x     -
data02/freshports/dev-ingress01/jails/freshports/ports              refcompressratio  1.21x     -
data02/freshports/dev-ingress01/modules                             refcompressratio  2.33x     -
data02/freshports/dev-ingress01/scripts                             refcompressratio  1.34x     -
data02/freshports/dev-nginx01                                       refcompressratio  1.00x     -
data02/freshports/dev-nginx01/www                                   refcompressratio  1.00x     -
data02/freshports/dev-nginx01/www/freshports                        refcompressratio  1.20x     -
data02/freshports/dev-nginx01/www/freshsource                       refcompressratio  1.41x     -
data02/freshports/dvl-ingress01                                     refcompressratio  1.00x     -
data02/freshports/dvl-ingress01/dvl-src                             refcompressratio  1.44x     -
data02/freshports/dvl-ingress01/freshports                          refcompressratio  1.00x     -
data02/freshports/dvl-ingress01/freshports/cache                    refcompressratio  1.00x     -
data02/freshports/dvl-ingress01/freshports/cache/html               refcompressratio  1.03x     -
data02/freshports/dvl-ingress01/freshports/cache/spooling           refcompressratio  1.00x     -
data02/freshports/dvl-ingress01/freshports/message-queues           refcompressratio  3.11x     -
data02/freshports/dvl-ingress01/freshports/message-queues/archive   refcompressratio  3.51x     -
data02/freshports/dvl-ingress01/ingress                             refcompressratio  1.00x     -
data02/freshports/dvl-ingress01/ingress/latest_commits              refcompressratio  1.00x     -
data02/freshports/dvl-ingress01/ingress/message-queues              refcompressratio  1.00x     -
data02/freshports/dvl-ingress01/ingress/repos                       refcompressratio  1.00x     -
data02/freshports/dvl-ingress01/ingress/repos/doc                   refcompressratio  1.32x     -
data02/freshports/dvl-ingress01/ingress/repos/ports                 refcompressratio  1.20x     -
data02/freshports/dvl-ingress01/ingress/repos/src                   refcompressratio  1.40x     -
data02/freshports/dvl-ingress01/jails                               refcompressratio  1.00x     -
data02/freshports/dvl-ingress01/jails/freshports                    refcompressratio  2.28x     -
data02/freshports/dvl-ingress01/jails/freshports/ports              refcompressratio  1.21x     -
data02/freshports/dvl-ingress01/modules                             refcompressratio  2.18x     -
data02/freshports/dvl-ingress01/scripts                             refcompressratio  1.37x     -
data02/freshports/dvl-nginx01                                       refcompressratio  1.00x     -
data02/freshports/dvl-nginx01/www                                   refcompressratio  1.00x     -
data02/freshports/dvl-nginx01/www/freshports                        refcompressratio  1.34x     -
data02/freshports/dvl-nginx01/www/freshsource                       refcompressratio  1.30x     -
data02/freshports/jailed                                            refcompressratio  1.00x     -
data02/freshports/jailed/dev-ingress01                              refcompressratio  1.00x     -
data02/freshports/jailed/dev-nginx01                                refcompressratio  1.00x     -
data02/freshports/jailed/dev-nginx01/cache                          refcompressratio  1.00x     -
data02/freshports/jailed/dev-nginx01/cache/categories               refcompressratio  6.19x     -
data02/freshports/jailed/dev-nginx01/cache/commits                  refcompressratio  1.00x     -
data02/freshports/jailed/dev-nginx01/cache/daily                    refcompressratio  6.34x     -
data02/freshports/jailed/dev-nginx01/cache/general                  refcompressratio  1.94x     -
data02/freshports/jailed/dev-nginx01/cache/news                     refcompressratio  3.11x     -
data02/freshports/jailed/dev-nginx01/cache/packages                 refcompressratio  3.17x     -
data02/freshports/jailed/dev-nginx01/cache/pages                    refcompressratio  1.00x     -
data02/freshports/jailed/dev-nginx01/cache/ports                    refcompressratio  3.83x     -
data02/freshports/jailed/dev-nginx01/cache/spooling                 refcompressratio  1.00x     -
data02/freshports/jailed/dvl-ingress01                              refcompressratio  1.00x     -
data02/freshports/jailed/dvl-ingress01/distfiles                    refcompressratio  1.00x     -
data02/freshports/jailed/dvl-nginx01                                refcompressratio  1.00x     -
data02/freshports/jailed/dvl-nginx01/cache                          refcompressratio  1.00x     -
data02/freshports/jailed/dvl-nginx01/cache/categories               refcompressratio  1.00x     -
data02/freshports/jailed/dvl-nginx01/cache/commits                  refcompressratio  1.00x     -
data02/freshports/jailed/dvl-nginx01/cache/daily                    refcompressratio  1.00x     -
data02/freshports/jailed/dvl-nginx01/cache/general                  refcompressratio  1.00x     -
data02/freshports/jailed/dvl-nginx01/cache/news                     refcompressratio  3.11x     -
data02/freshports/jailed/dvl-nginx01/cache/packages                 refcompressratio  1.00x     -
data02/freshports/jailed/dvl-nginx01/cache/pages                    refcompressratio  1.00x     -
data02/freshports/jailed/dvl-nginx01/cache/ports                    refcompressratio  1.07x     -
data02/freshports/jailed/dvl-nginx01/cache/spooling                 refcompressratio  1.00x     -
data02/freshports/jailed/dvl-nginx01/freshports                     refcompressratio  1.00x     -
data02/freshports/jailed/stage-ingress01                            refcompressratio  1.00x     -
data02/freshports/jailed/stage-ingress01/data                       refcompressratio  1.00x     -
data02/freshports/jailed/stage-nginx01                              refcompressratio  1.00x     -
data02/freshports/jailed/stage-nginx01/cache                        refcompressratio  4.15x     -
data02/freshports/jailed/stage-nginx01/cache/categories             refcompressratio  5.60x     -
data02/freshports/jailed/stage-nginx01/cache/commits                refcompressratio  1.00x     -
data02/freshports/jailed/stage-nginx01/cache/daily                  refcompressratio  6.41x     -
data02/freshports/jailed/stage-nginx01/cache/general                refcompressratio  2.40x     -
data02/freshports/jailed/stage-nginx01/cache/news                   refcompressratio  3.11x     -
data02/freshports/jailed/stage-nginx01/cache/packages               refcompressratio  3.23x     -
data02/freshports/jailed/stage-nginx01/cache/pages                  refcompressratio  1.00x     -
data02/freshports/jailed/stage-nginx01/cache/ports                  refcompressratio  3.80x     -
data02/freshports/jailed/stage-nginx01/cache/spooling               refcompressratio  1.00x     -
data02/freshports/jailed/test-ingress01                             refcompressratio  1.00x     -
data02/freshports/jailed/test-ingress01/data                        refcompressratio  1.00x     -
data02/freshports/jailed/test-nginx01                               refcompressratio  1.00x     -
data02/freshports/jailed/test-nginx01/cache                         refcompressratio  4.70x     -
data02/freshports/jailed/test-nginx01/cache/categories              refcompressratio  5.26x     -
data02/freshports/jailed/test-nginx01/cache/commits                 refcompressratio  1.00x     -
data02/freshports/jailed/test-nginx01/cache/daily                   refcompressratio  6.46x     -
data02/freshports/jailed/test-nginx01/cache/general                 refcompressratio  1.34x     -
data02/freshports/jailed/test-nginx01/cache/news                    refcompressratio  3.11x     -
data02/freshports/jailed/test-nginx01/cache/packages                refcompressratio  3.22x     -
data02/freshports/jailed/test-nginx01/cache/pages                   refcompressratio  1.00x     -
data02/freshports/jailed/test-nginx01/cache/ports                   refcompressratio  3.89x     -
data02/freshports/jailed/test-nginx01/cache/spooling                refcompressratio  1.00x     -
data02/freshports/stage-ingress01                                   refcompressratio  1.00x     -
data02/freshports/stage-ingress01/cache                             refcompressratio  1.00x     -
data02/freshports/stage-ingress01/cache/html                        refcompressratio  1.02x     -
data02/freshports/stage-ingress01/cache/spooling                    refcompressratio  1.00x     -
data02/freshports/stage-ingress01/freshports                        refcompressratio  1.00x     -
data02/freshports/stage-ingress01/freshports/archive                refcompressratio  2.03x     -
data02/freshports/stage-ingress01/freshports/message-queues         refcompressratio  4.31x     -
data02/freshports/stage-ingress01/ingress                           refcompressratio  1.00x     -
data02/freshports/stage-ingress01/ingress/latest_commits            refcompressratio  1.31x     -
data02/freshports/stage-ingress01/ingress/message-queues            refcompressratio  1.00x     -
data02/freshports/stage-ingress01/ingress/repos                     refcompressratio  1.28x     -
data02/freshports/stage-ingress01/jails                             refcompressratio  1.00x     -
data02/freshports/stage-ingress01/jails/freshports                  refcompressratio  2.28x     -
data02/freshports/stage-ingress01/ports                             refcompressratio  1.21x     -
data02/freshports/test-ingress01                                    refcompressratio  1.00x     -
data02/freshports/test-ingress01/freshports                         refcompressratio  2.28x     -
data02/freshports/test-ingress01/freshports/cache                   refcompressratio  1.00x     -
data02/freshports/test-ingress01/freshports/cache/html              refcompressratio  1.02x     -
data02/freshports/test-ingress01/freshports/cache/spooling          refcompressratio  1.00x     -
data02/freshports/test-ingress01/freshports/message-queues          refcompressratio  3.71x     -
data02/freshports/test-ingress01/freshports/message-queues/archive  refcompressratio  2.02x     -
data02/freshports/test-ingress01/ingress                            refcompressratio  1.00x     -
data02/freshports/test-ingress01/ingress/latest_commits             refcompressratio  1.31x     -
data02/freshports/test-ingress01/ingress/message-queues             refcompressratio  1.00x     -
data02/freshports/test-ingress01/ingress/repos                      refcompressratio  1.29x     -
data02/freshports/test-ingress01/jails                              refcompressratio  1.00x     -
data02/freshports/test-ingress01/jails/freshports                   refcompressratio  2.28x     -
data02/freshports/test-ingress01/jails/freshports/ports             refcompressratio  1.21x     -
data02/jails                                                        refcompressratio  1.00x     -
data02/jails/bacula                                                 refcompressratio  1.07x     -
data02/jails/bacula-sd-02                                           refcompressratio  1.48x     -
data02/jails/bacula-sd-03                                           refcompressratio  1.58x     -
data02/jails/besser                                                 refcompressratio  1.74x     -
data02/jails/certs                                                  refcompressratio  1.50x     -
data02/jails/certs-rsync                                            refcompressratio  1.48x     -
data02/jails/cliff2                                                 refcompressratio  1.50x     -
data02/jails/dev-ingress01                                          refcompressratio  1.94x     -
data02/jails/dev-nginx01                                            refcompressratio  1.82x     -
data02/jails/dns-hidden-master                                      refcompressratio  1.60x     -
data02/jails/dns1                                                   refcompressratio  1.61x     -
data02/jails/dvl-ingress01                                          refcompressratio  1.95x     -
data02/jails/dvl-nginx01                                            refcompressratio  2.04x     -
data02/jails/git                                                    refcompressratio  1.59x     -
data02/jails/jail_within_jail                                       refcompressratio  2.03x     -
data02/jails/mqtt01                                                 refcompressratio  1.82x     -
data02/jails/mydev                                                  refcompressratio  1.20x     -
data02/jails/mysql01                                                refcompressratio  4.34x     -
data02/jails/mysql02                                                refcompressratio  4.25x     -
data02/jails/mysql02.bad                                            refcompressratio  4.23x     -
data02/jails/mysql02.bad.part2                                      refcompressratio  4.29x     -
data02/jails/mysql02.bad.part3                                      refcompressratio  4.29x     -
data02/jails/nsnotify                                               refcompressratio  1.47x     -
data02/jails/pg01                                                   refcompressratio  1.28x     -
data02/jails/pg02                                                   refcompressratio  1.24x     -
data02/jails/pg03                                                   refcompressratio  2.57x     -
data02/jails/pkg01                                                  refcompressratio  1.52x     -
data02/jails/samdrucker                                             refcompressratio  1.60x     -
data02/jails/serpico                                                refcompressratio  1.51x     -
data02/jails/stage-ingress01                                        refcompressratio  2.50x     -
data02/jails/stage-nginx01                                          refcompressratio  2.09x     -
data02/jails/svn                                                    refcompressratio  1.14x     -
data02/jails/talos                                                  refcompressratio  1.47x     -
data02/jails/test-ingress01                                         refcompressratio  2.28x     -
data02/jails/test-nginx01                                           refcompressratio  2.28x     -
data02/jails/unifi01                                                refcompressratio  1.49x     -
data02/jails/webserver                                              refcompressratio  1.48x     -
data02/reserved                                                     refcompressratio  1.00x     -
data02/vm                                                           refcompressratio  1.69x     -
data02/vm/freebsd-test                                              refcompressratio  1.00x     -
data02/vm/hass                                                      refcompressratio  1.67x     -
data02/vm/home-assistant                                            refcompressratio  1.00x     -
data02/vm/myguest                                                   refcompressratio  4.90x     -
data03                                                              refcompressratio  1.00x     -
data03/acme-certs                                                   refcompressratio  1.00x     -
data03/acme-certs/certs                                             refcompressratio  1.14x     -
data03/acme-certs/certs-for-rsync                                   refcompressratio  1.01x     -
data03/dvl                                                          refcompressratio  1.00x     -
data03/jail_within_jail                                             refcompressratio  1.00x     -
data03/jail_within_jail/jails                                       refcompressratio  1.00x     -
data03/jail_within_jail/jails/freshports                            refcompressratio  1.00x     -
data03/librenms-rrd                                                 refcompressratio  2.30x     -
data03/pg01                                                         refcompressratio  1.00x     -
data03/pg01/freshports.dvl                                          refcompressratio  1.75x     -
data03/pg01/postgres                                                refcompressratio  1.74x     -
data03/pg02                                                         refcompressratio  1.00x     -
data03/pg02/postgres                                                refcompressratio  1.74x     -
data03/pg02/rsyncer                                                 refcompressratio  1.30x     -
data03/pg03                                                         refcompressratio  1.00x     -
data03/pg03/postgres                                                refcompressratio  1.20x     -
data03/pg03/rsyncer                                                 refcompressratio  1.00x     -
data03/poudriere                                                    refcompressratio  1.00x     -
data03/poudriere/ccache                                             refcompressratio  1.00x     -
data03/poudriere/ccache/ccache.13amd64                              refcompressratio  1.00x     -
data03/poudriere/ccache/ccache.amd64                                refcompressratio  1.00x     -
data03/poudriere/data                                               refcompressratio  4.41x     -
data03/poudriere/data/cache                                         refcompressratio  1.66x     -
data03/poudriere/data/cronjob-logs                                  refcompressratio  3.86x     -
data03/poudriere/data/packages                                      refcompressratio  1.00x     -
data03/poudriere/distfiles                                          refcompressratio  1.01x     -
data03/poudriere/jails                                              refcompressratio  1.00x     -
data03/poudriere/jails/143amd64                                     refcompressratio  2.02x     -
data03/poudriere/jails/150amd64                                     refcompressratio  1.98x     -
data03/poudriere/ports                                              refcompressratio  1.00x     -
data03/poudriere/ports/default                                      refcompressratio  1.19x     -
data03/poudriere/ports/main                                         refcompressratio  1.00x     -
data03/poudriere/ports/pgeu_system                                  refcompressratio  1.67x     -
data03/poudriere/ports/testing                                      refcompressratio  1.16x     -
data03/poudriere/test                                               refcompressratio  1.00x     -
data03/public                                                       refcompressratio  1.00x     -
data03/repos                                                        refcompressratio  1.00x     -
data03/repos/gitea                                                  refcompressratio  1.20x     -
data03/repos/subversion                                             refcompressratio  1.00x     -
data03/reserved                                                     refcompressratio  1.00x     -
data04                                                              refcompressratio  1.00x     -
data04/bacula                                                       refcompressratio  1.00x     -
data04/bacula/volumes                                               refcompressratio  1.00x     -
data04/bacula/volumes/DiffFile-03                                   refcompressratio  6.59x     -
data04/bacula/volumes/FullFile-03                                   refcompressratio  1.91x     -
data04/bacula/volumes/IncrFile-03                                   refcompressratio  9.77x     -
data04/bacula/working                                               refcompressratio  1.00x     -
data04/r720-02                                                      refcompressratio  1.00x     -
data04/r720-02/freebsd_releases                                     refcompressratio  1.00x     -
data04/r720-02/jails                                                refcompressratio  1.00x     -
data04/r720-02/jails/svn                                            refcompressratio  2.33x     -
zroot                                                               refcompressratio  1.00x     -
zroot/ROOT                                                          refcompressratio  1.00x     -
zroot/ROOT/14.3-RELEASE-p8_2026-02-08_213813                        refcompressratio  1.26x     -
zroot/ROOT/15.0-RELEASE-p2_2026-02-08_215115                        refcompressratio  1.26x     -
zroot/ROOT/15.0-RELEASE-p2_2026-02-08_222611                        refcompressratio  1.26x     -
zroot/ROOT/15.0-RELEASE-p2_2026-02-11_124505                        refcompressratio  1.25x     -
zroot/ROOT/before.15.0                                              refcompressratio  1.26x     -
zroot/ROOT/before.15.0-RELEASE-p3                                   refcompressratio  1.25x     -
zroot/ROOT/default                                                  refcompressratio  1.24x     -
zroot/reserved                                                      refcompressratio  1.00x     -
zroot/tmp                                                           refcompressratio  1.00x     -
zroot/usr                                                           refcompressratio  1.00x     -
zroot/usr/home                                                      refcompressratio  2.41x     -
zroot/usr/ports                                                     refcompressratio  1.00x     -
zroot/usr/src                                                       refcompressratio  1.00x     -
zroot/var                                                           refcompressratio  1.00x     -
zroot/var/audit                                                     refcompressratio  1.00x     -
zroot/var/crash                                                     refcompressratio  1.01x     -
zroot/var/log                                                       refcompressratio  10.64x    -
zroot/var/mail                                                      refcompressratio  1.00x     -
zroot/var/tmp                                                       refcompressratio  4.49x     -

r730-03

[18:50 r730-03 dvl ~] % zfs get refcompressratio -t filesystem
NAME                                          PROPERTY          VALUE     SOURCE
data01                                        refcompressratio  1.00x     -
data01/bacula-volumes                         refcompressratio  1.00x     -
data01/bacula-volumes/DiffFile                refcompressratio  4.36x     -
data01/bacula-volumes/FullFile                refcompressratio  1.64x     -
data01/bacula-volumes/FullFileNoNextPool      refcompressratio  1.68x     -
data01/bacula-volumes/IncrFile                refcompressratio  5.55x     -
data01/dbclone.backups.rsyncer                refcompressratio  1.65x     -
data01/dbclone.postgres                       refcompressratio  1.28x     -
data01/jails                                  refcompressratio  1.00x     -
data01/jails/ansible                          refcompressratio  1.09x     -
data01/jails/bacula-sd-04                     refcompressratio  1.36x     -
data01/jails/cliff1                           refcompressratio  1.08x     -
data01/jails/dbclone                          refcompressratio  1.45x     -
data01/jails/dns2                             refcompressratio  1.22x     -
data01/jails/empty                            refcompressratio  1.21x     -
data01/jails/fileserver                       refcompressratio  1.23x     -
data01/jails/fruity-int                       refcompressratio  1.09x     -
data01/jails/graylog                          refcompressratio  1.32x     -
data01/jails/perl540                          refcompressratio  1.60x     -
data01/jails/tm                               refcompressratio  1.34x     -
data01/mkjail                                 refcompressratio  1.00x     -
data01/mkjail/14.1-RELEASE                    refcompressratio  2.04x     -
data01/mkjail/14.2-RELEASE                    refcompressratio  2.04x     -
data01/mkjail/14.3-RELEASE                    refcompressratio  2.10x     -
data01/mkjail/15.0-RELEASE                    refcompressratio  2.02x     -
data01/mkjail/releases                        refcompressratio  1.00x     -
data01/reserved                               refcompressratio  1.00x     -
data01/samba                                  refcompressratio  1.00x     -
data01/samba/dvl                              refcompressratio  1.24x     -
data01/samba/public                           refcompressratio  1.12x     -
data01/samba/transfer                         refcompressratio  1.01x     -
data01/samba/video                            refcompressratio  1.02x     -
data01/snapshots                              refcompressratio  1.00x     -
data01/snapshots/deleting                     refcompressratio  1.00x     -
data01/snapshots/homeassistant-r730-01        refcompressratio  1.50x     -
data01/syncthing                              refcompressratio  1.08x     -
data01/timemachine                            refcompressratio  1.00x     -
data01/timemachine/dvl-air01                  refcompressratio  1.34x     -
data01/timemachine/dvl-pro02                  refcompressratio  1.00x     -
data01/timemachine/dvl-pro03                  refcompressratio  1.00x     -
data01/timemachine/dvl-pro04                  refcompressratio  1.00x     -
data01/timemachine/dvl-pro05                  refcompressratio  1.00x     -
data01/torrents                               refcompressratio  1.01x     -
data01/torrents/annas                         refcompressratio  1.00x     -
zroot                                         refcompressratio  1.00x     -
zroot/ROOT                                    refcompressratio  1.00x     -
zroot/ROOT/14.3-RELEASE-p8_2026-02-15_141221  refcompressratio  1.28x     -
zroot/ROOT/15.0-RELEASE-p2_2026-02-15_142220  refcompressratio  1.29x     -
zroot/ROOT/15.0-RELEASE-p3_2026-02-15_142930  refcompressratio  1.28x     -
zroot/ROOT/before.15.0                        refcompressratio  1.30x     -
zroot/ROOT/default                            refcompressratio  1.29x     -
zroot/data                                    refcompressratio  1.00x     -
zroot/data/ansible                            refcompressratio  1.49x     -
zroot/mkjail                                  refcompressratio  1.00x     -
zroot/reserved                                refcompressratio  1.00x     -
zroot/tmp                                     refcompressratio  4.82x     -
zroot/usr                                     refcompressratio  1.00x     -
zroot/usr/home                                refcompressratio  4.60x     -
zroot/usr/ports                               refcompressratio  1.00x     -
zroot/usr/src                                 refcompressratio  1.00x     -
zroot/var                                     refcompressratio  1.00x     -
zroot/var/audit                               refcompressratio  1.00x     -
zroot/var/crash                               refcompressratio  1.01x     -
zroot/var/log                                 refcompressratio  5.94x     -
zroot/var/mail                                refcompressratio  1.00x     -
zroot/var/tmp                                 refcompressratio  4.29x     -

tallboy

[18:50 tallboy dvl ~] % zfs get refcompressratio -t filesystem
NAME                                              PROPERTY          VALUE     SOURCE
system                                            refcompressratio  1.00x     -
system/bootenv                                    refcompressratio  1.00x     -
system/bootenv/14.3-RELEASE-p8_2026-02-14_185306  refcompressratio  1.05x     -
system/bootenv/15.0-RELEASE-p2_2026-02-14_191134  refcompressratio  1.05x     -
system/bootenv/15.0-RELEASE-p3_2026-02-14_194858  refcompressratio  1.05x     -
system/bootenv/before.15.0                        refcompressratio  1.05x     -
system/bootenv/default                            refcompressratio  1.05x     -
system/data                                       refcompressratio  1.00x     -
system/data/papers                                refcompressratio  1.00x     -
system/data/rsyncer-backups                       refcompressratio  1.08x     -
system/jails                                      refcompressratio  1.00x     -
system/jails/ns1                                  refcompressratio  1.23x     -
system/jails/tallboy-mqtt                         refcompressratio  1.22x     -
system/jails/wikis                                refcompressratio  2.48x     -
system/mkjail                                     refcompressratio  1.00x     -
system/mkjail/13.2-RELEASE                        refcompressratio  2.06x     -
system/mkjail/14.0-RELEASE                        refcompressratio  2.05x     -
system/mkjail/14.1-RELEASE                        refcompressratio  2.04x     -
system/mkjail/14.2-RELEASE                        refcompressratio  2.05x     -
system/mkjail/14.3-RELEASE                        refcompressratio  2.10x     -
system/mkjail/15.0-RELEASE                        refcompressratio  2.02x     -
system/reserved                                   refcompressratio  1.00x     -
system/tmp                                        refcompressratio  1.00x     -
system/usr                                        refcompressratio  1.00x     -
system/usr/home                                   refcompressratio  1.39x     -
system/usr/local                                  refcompressratio  1.89x     -
system/usr/obj                                    refcompressratio  1.00x     -
system/usr/ports                                  refcompressratio  1.00x     -
system/usr/src                                    refcompressratio  1.01x     -
system/var                                        refcompressratio  1.02x     -
system/var/crash                                  refcompressratio  1.01x     -
system/var/db                                     refcompressratio  1.02x     -
system/var/db/pkg                                 refcompressratio  1.91x     -
system/var/empty                                  refcompressratio  1.00x     -
system/var/log                                    refcompressratio  5.00x     -
system/var/mail                                   refcompressratio  1.00x     -
system/var/run                                    refcompressratio  1.22x     -
system/var/tmp                                    refcompressratio  2.36x     -

x8dtu

Last login: Sun Feb 15 20:51:54 2026 from 108.52.204.170
[18:50 x8dtu dvl ~] % zfs get refcompressratio -t filesystem
NAME                                                               PROPERTY          VALUE     SOURCE
data                                                               refcompressratio  1.00x     -
data/backups                                                       refcompressratio  1.00x     -
data/backups/rsyncer                                               refcompressratio  1.00x     -
data/backups/rsyncer/backups                                       refcompressratio  1.00x     -
data/backups/rsyncer/backups/Bacula                                refcompressratio  3.05x     -
data/backups/rsyncer/backups/bacula-database                       refcompressratio  2.27x     -
data/freshports                                                    refcompressratio  1.00x     -
data/freshports/ingress01                                          refcompressratio  1.00x     -
data/freshports/ingress01/ports                                    refcompressratio  1.20x     -
data/freshports/ingress01/var                                      refcompressratio  1.00x     -
data/freshports/ingress01/var/db                                   refcompressratio  1.00x     -
data/freshports/ingress01/var/db/freshports                        refcompressratio  1.00x     -
data/freshports/ingress01/var/db/freshports/cache                  refcompressratio  1.00x     -
data/freshports/ingress01/var/db/freshports/cache/html             refcompressratio  1.10x     -
data/freshports/ingress01/var/db/freshports/cache/spooling         refcompressratio  1.00x     -
data/freshports/ingress01/var/db/freshports/message-queues         refcompressratio  1.67x     -
data/freshports/ingress01/var/db/freshports/repos                  refcompressratio  1.00x     -
data/freshports/ingress01/var/db/ingress                           refcompressratio  1.00x     -
data/freshports/ingress01/var/db/ingress/message-queues            refcompressratio  1.00x     -
data/freshports/ingress01/var/db/ingress/repos                     refcompressratio  1.24x     -
data/freshports/jailed                                             refcompressratio  1.00x     -
data/freshports/jailed/ingress01                                   refcompressratio  1.00x     -
data/freshports/jailed/ingress01/jails                             refcompressratio  1.00x     -
data/freshports/jailed/ingress01/jails/freshports                  refcompressratio  1.98x     -
data/freshports/jailed/ingress01/jails/freshports.14.3             refcompressratio  2.02x     -
data/freshports/jailed/ingress01/mkjail                            refcompressratio  1.00x     -
data/freshports/jailed/ingress01/mkjail/14.3-RELEASE               refcompressratio  2.10x     -
data/freshports/jailed/ingress01/mkjail/15.0-RELEASE               refcompressratio  2.02x     -
data/freshports/jailed/nginx01                                     refcompressratio  1.00x     -
data/freshports/jailed/nginx01/var                                 refcompressratio  1.00x     -
data/freshports/jailed/nginx01/var/db                              refcompressratio  1.00x     -
data/freshports/jailed/nginx01/var/db/freshports                   refcompressratio  1.00x     -
data/freshports/jailed/nginx01/var/db/freshports/cache             refcompressratio  1.98x     -
data/freshports/jailed/nginx01/var/db/freshports/cache/categories  refcompressratio  9.59x     -
data/freshports/jailed/nginx01/var/db/freshports/cache/commits     refcompressratio  1.00x     -
data/freshports/jailed/nginx01/var/db/freshports/cache/daily       refcompressratio  6.55x     -
data/freshports/jailed/nginx01/var/db/freshports/cache/general     refcompressratio  1.46x     -
data/freshports/jailed/nginx01/var/db/freshports/cache/news        refcompressratio  1.00x     -
data/freshports/jailed/nginx01/var/db/freshports/cache/packages    refcompressratio  3.36x     -
data/freshports/jailed/nginx01/var/db/freshports/cache/pages       refcompressratio  1.00x     -
data/freshports/jailed/nginx01/var/db/freshports/cache/ports       refcompressratio  3.82x     -
data/freshports/jailed/nginx01/var/db/freshports/cache/spooling    refcompressratio  1.00x     -
data/freshports/nginx01                                            refcompressratio  1.00x     -
data/freshports/nginx01/var                                        refcompressratio  1.00x     -
data/freshports/nginx01/var/db                                     refcompressratio  1.00x     -
data/freshports/nginx01/var/db/freshports                          refcompressratio  1.00x     -
data/freshports/nginx01/var/db/freshports/cache                    refcompressratio  1.00x     -
data/home                                                          refcompressratio  2.09x     -
data/jails                                                         refcompressratio  1.00x     -
data/jails/ingress01                                               refcompressratio  2.01x     -
data/jails/nginx01                                                 refcompressratio  1.69x     -
data/jails/perl540                                                 refcompressratio  1.61x     -
data/jails/pg01                                                    refcompressratio  1.49x     -
data/jails/svn                                                     refcompressratio  3.50x     -
data/mkjail                                                        refcompressratio  1.00x     -
data/mkjail/14.1-RELEASE                                           refcompressratio  2.04x     -
data/mkjail/14.2-RELEASE                                           refcompressratio  2.04x     -
data/mkjail/14.3-RELEASE                                           refcompressratio  2.09x     -
data/mkjail/15.0-RELEASE                                           refcompressratio  2.02x     -
data/reserved                                                      refcompressratio  1.00x     -
zroot                                                              refcompressratio  1.00x     -
zroot/ROOT                                                         refcompressratio  1.00x     -
zroot/ROOT/14.2-RELEASE-p5_2025-09-11_111054                       refcompressratio  1.13x     -
zroot/ROOT/14.3-RELEASE-p2_2025-09-11_111952                       refcompressratio  1.14x     -
zroot/ROOT/14.3-RELEASE-p2_2025-09-17_113324                       refcompressratio  1.14x     -
zroot/ROOT/14.3-RELEASE-p3_2025-10-24_111135                       refcompressratio  1.13x     -
zroot/ROOT/14.3-RELEASE-p5_2025-11-28_233743                       refcompressratio  1.13x     -
zroot/ROOT/14.3-RELEASE-p6_2025-12-17_173847                       refcompressratio  1.13x     -
zroot/ROOT/14.3-RELEASE-p7_2026-01-27_225124                       refcompressratio  1.13x     -
zroot/ROOT/14.3-RELEASE-p8_2026-02-15_161700                       refcompressratio  1.13x     -
zroot/ROOT/15.0-RELEASE-p2_2026-02-15_162218                       refcompressratio  1.13x     -
zroot/ROOT/15.0-RELEASE-p3_2026-02-15_163220                       refcompressratio  1.13x     -
zroot/ROOT/default                                                 refcompressratio  1.12x     -
zroot/freebsd_releases                                             refcompressratio  1.00x     -
zroot/freshports                                                   refcompressratio  1.00x     -
zroot/freshports/pg01                                              refcompressratio  1.00x     -
zroot/freshports/pg01/postgres                                     refcompressratio  2.76x     -
zroot/reserved                                                     refcompressratio  1.00x     -
zroot/tmp                                                          refcompressratio  1.00x     -
zroot/usr                                                          refcompressratio  1.00x     -
zroot/usr/src                                                      refcompressratio  1.00x     -
zroot/var                                                          refcompressratio  1.00x     -
zroot/var/audit                                                    refcompressratio  1.00x     -
zroot/var/crash                                                    refcompressratio  4.76x     -
zroot/var/log                                                      refcompressratio  7.36x     -
zroot/var/mail                                                     refcompressratio  1.00x     -
zroot/var/tmp                                                      refcompressratio  2.58x     -

zuul

[18:50 zuul dvl ~] % zfs get refcompressratio -t filesystem
NAME                                              PROPERTY          VALUE     SOURCE
system                                            refcompressratio  1.00x     -
system/bootenv                                    refcompressratio  1.00x     -
system/bootenv/14.3-RELEASE-p8_2026-02-14_212609  refcompressratio  2.11x     -
system/bootenv/15.0-RELEASE-p2_2026-02-14_215012  refcompressratio  2.11x     -
system/bootenv/15.0-RELEASE-p3_2026-02-14_221709  refcompressratio  2.08x     -
system/bootenv/before.15.0                        refcompressratio  2.11x     -
system/bootenv/default                            refcompressratio  2.09x     -
system/data                                       refcompressratio  1.00x     -
system/jails                                      refcompressratio  1.00x     -
system/jails/beta_bsdcan                          refcompressratio  1.25x     -
system/jails/beta_pgcon                           refcompressratio  1.15x     -
system/jails/bsdcan                               refcompressratio  1.24x     -
system/jails/dumper                               refcompressratio  2.13x     -
system/jails/mailman                              refcompressratio  1.22x     -
system/jails/mysql                                refcompressratio  2.22x     -
system/jails/ns2                                  refcompressratio  1.24x     -
system/jails/pgcon                                refcompressratio  1.24x     -
system/jails/svn_bsdcan                           refcompressratio  1.09x     -
system/jails/svn_pgcon                            refcompressratio  1.08x     -
system/jails/webs01                               refcompressratio  1.08x     -
system/jails/webs02                               refcompressratio  1.37x     -
system/jails/znc                                  refcompressratio  1.37x     -
system/jails/zuul_pg01                            refcompressratio  1.36x     -
system/jails/zuul_pg02                            refcompressratio  1.27x     -
system/mkjail                                     refcompressratio  1.00x     -
system/mkjail/14.1-RELEASE                        refcompressratio  2.04x     -
system/mkjail/14.2-RELEASE                        refcompressratio  2.04x     -
system/mkjail/14.3-RELEASE                        refcompressratio  2.10x     -
system/mkjail/15.0-RELEASE                        refcompressratio  2.02x     -
system/reserved                                   refcompressratio  1.00x     -
system/tmp                                        refcompressratio  1.00x     -
system/usr                                        refcompressratio  1.00x     -
system/usr/home                                   refcompressratio  1.64x     -
system/usr/home/rsyncer                           refcompressratio  2.84x     -
system/usr/local                                  refcompressratio  1.93x     -
system/usr/obj                                    refcompressratio  1.00x     -
system/usr/src                                    refcompressratio  1.00x     -
system/var                                        refcompressratio  1.03x     -
system/var/crash                                  refcompressratio  1.01x     -
system/var/db                                     refcompressratio  1.05x     -
system/var/db/pkg                                 refcompressratio  2.12x     -
system/var/empty                                  refcompressratio  1.00x     -
system/var/log                                    refcompressratio  5.48x     -
system/var/mail                                   refcompressratio  1.00x     -
system/var/run                                    refcompressratio  1.40x     -
system/var/tmp                                    refcompressratio  1.71x     -
system/wordpress                                  refcompressratio  1.00x     -
[18:50 zuul dvl ~] % 
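The transcripts above all run the same `zfs get refcompressratio -t filesystem` query per host. A minimal sketch of post-processing that output to surface only the datasets that actually compress well — the sample lines and the 1.5x threshold are illustrative, taken from the figures above; on a live system you would pipe `zfs get -H -o name,property,value,source refcompressratio -t filesystem` in directly instead of the here-document:

```shell
#!/bin/sh
# Sample lines mimicking `zfs get -H` output (tab-separated:
# name, property, value, source) -- illustrative subset only.
cat > /tmp/ratios.txt <<'EOF'
zroot/var/log	refcompressratio	10.64x	-
zroot/usr/home	refcompressratio	2.41x	-
zroot/tmp	refcompressratio	1.00x	-
EOF

# Keep datasets whose ratio exceeds 1.5x, best-compressing first.
# awk's `$3 + 0` coerces "10.64x" to the number 10.64, dropping the suffix.
awk -F'\t' '$3 + 0 > 1.5 { print $1, $3 }' /tmp/ratios.txt | sort -k2 -rn
```

The `-H` flag makes `zfs get` emit script-friendly tab-separated output without the header row, which is what the awk field split assumes.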

Valuable News – 2026/02/16

Post by Vermaden via 𝚟𝚎𝚛𝚖𝚊𝚍𝚎𝚗 »

The Valuable News weekly series is dedicated to providing a summary of news, articles, and other interesting stuff, mostly but not always related to UNIX/BSD/Linux systems. Whenever I stumble upon something worth mentioning on the Internet, I just put it here.

Today the amount of information we get through various streams amounts to massive overload. Thus one needs to focus only on what is important, without having to grep(1) the Internet every day. Hence the idea of providing such an information 'bulk', as I already do that grep(1).

The Usual Suspects section at the end is permanent and has links to other sites with interesting UNIX/BSD/Linux news.

Past releases are available at the dedicated NEWS page.

UNIX

Addressing XLibre Change and GhostBSD Future.
https://ericbsd.com/addressing-xlibre-change-and-ghostbsd-future.html

My GUI Toolkit and Desktop Environment – Built from Scratch for FreeBSD.
https://forums.freebsd.org/threads/gui-desktop-built-for-freebsd.101567/

FreeBSD Git Weekly: 2026-02-02 to 2026-02-08.
https://freebsd-git-weekly.tarsnap.net/2026-02-02.html

OpenClaw with smolBSD with Dockerfile.
https://github.com/NetBSDfr/smolBSD/blob/main/dockerfiles/Dockerfile.clawd

Rustless WD-40 Git Fork.
https://github.com/Libre-WD-40/git

Adding Fediverse Comments to Pelican Blog.
https://blog.hofstede.it/adding-fediverse-comments-to-a-pelican-blog/

FreeBSD Home NAS – Part 10 – Monitoring with Victoria Metrics and Grafana.
https://rtfm.co.ua/en/freebsd-home-nas-part-10-monitoring-with-victoriametrics-and-grafana/

FreeBSD Home NAS – Part 11 – Extended Monitoring with Additional Exporters.
https://rtfm.co.ua/en/freebsd-home-nas-part-11-extended-monitoring-with-additional-exporters/

LLDB Improvements on FreeBSD.
https://lists.freebsd.org/archives/freebsd-hackers/2026-February/005757.html

UFS2Tool FreeBSD UFS1/2 Filesystem Manager for Windows.
https://github.com/SvenGDK/UFS2Tool

Production Ready OpenClaw Deployment Using FreeBSD VNET Jails and socat(1) Forwarding and ZFS Storage.
https://github.com/KLD997/FreeClaw

AMD CPPC cpufreq(4) Driver for Zen 2+ CPUs.
https://lists.freebsd.org/archives/freebsd-hackers/2026-February/005764.html

FreeBSD wolfCrypt Kernel Module Support.
https://wolfssl.com/wolfcrypt-freebsd-kernel-module-support/

FreeBSD Jail Memory Metrics.
https://blog.cabroneria.com/bits/0010_freebsd_per_jail_memory_metrics/

WolfSSL Sucks Too – So Now What?
https://blog.feld.me/posts/2026/02/wolfssl-sucks-too/

C64UX is Unix Inspired Shell and RAM Filesystem for Commodore 64.
https://github.com/ascarola/c64ux

Update and Cleanup Packages with Ansible on OpenBSD.
https://x61.sh/log/2026/02/12022026185942-update_all_ansible.html

Latest GhostBSD-26.1-R15.0p2 ISO Artifact.
https://ci.ghostbsd.org/jenkins/job/unstable/job/Verify%20The%20ISO%20Build%20With%20Unstable%20Packages/164/

RHEL on ZFS Root: Unholy Experiment.
https://blog.hofstede.it/rhel-on-zfs-root-an-unholy-experiment/

FreeBSD 14.4-BETA2 Now Available.
https://lists.freebsd.org/archives/freebsd-stable/2026-February/003848.html

NomadBSD: Persistent FreeBSD Live System.
https://privacylife.info/nomadbsd-persistent-freebsd-live-system/

Tailscale Exit Node on FreeBSD.
https://conradresearch.com/articles/tailscale-exit-node-on-freebsd

One Too Many Words on AT&T $2000 Korn Shell and Other Usenet Topics. [2025]
https://blog.gabornyeki.com/2025-12-usenet/

FFS Backup.
https://eradman.com/posts/ffs-backup.html

WireGuard and NFS. [2025]
https://eradman.com/posts/wireguard-nfs.html

AWK Programming. [2025]
https://eradman.com/posts/awk-programming.html

ZFS Quickstart. [2025]
https://eradman.com/posts/zfs-quickstart.html

Bhyve and iPXE. [2025]
https://eradman.com/posts/bhyve-ipxe.html

OpenBSD Workstation Notes. [2025]
https://eradman.com/posts/openbsd-workstation.html

OpenBSD VPS Installation. [2025]
https://eradman.com/posts/openbsd-vps-installation.html

Automated FreeBSD Install.
https://eradman.com/posts/automated-freebsd-install.html

Loadbars Resurrected: From Perl to Go After 15 Years.
https://foo.zone/gemfeed/2026-02-15-loadbars-resurrected-from-perl-to-go.html

Undo in vi(1) and Its Successors and My Views on Mess.
https://utcc.utoronto.ca/~cks/space/blog/unix/ViUndoMyViews

Intel SST Audio Driver for FreeBSD.
https://github.com/spagu/acpi_intel_sst

UNIX/Audio/Video

FreeBSD in 2026: Thriving or Dying?
https://yout-ube.com/watch?v=JRJBsb1mtIs

Little FreeBSD Stress is Good for You.
https://yout-ube.com/watch?v=kcO9naDQPrI

Jails on FreeBSD.
https://yout-ube.com/watch?v=nsgIJl5VKpg

2026-02-10 Jail/Zones Production User Call.
https://yout-ube.com/watch?v=Cg3Tr4wOTKo

2026-02-12 Bhyve Production User Call.
https://yout-ube.com/watch?v=hf4tNsDoLas

BSD Now 650: Korn Chips.
https://www.bsdnow.tv/650

Hardware

Backblaze Drive Stats for 2025.
https://backblaze.com/blog/backblaze-drive-stats-for-2025/

TechPaula/LT6502: 6502 Based Laptop Design.
https://github.com/TechPaula/LT6502

Acer and ASUS Banned from Selling PCs/Laptops in Germany Following Nokia HEVC/H.265 Codec Patent Ruling Bullshit.
https://videocardz.com/newz/acer-and-asus-are-now-banned-from-selling-pcs-and-laptops-in-germany-following-nokia-hevc-patent-ruling

How is Data Stored?
https://makingsoftware.com/chapters/how-is-data-stored

Removing BIOS Administrator Password on ThinkPad Takes Timing.
https://hackaday.com/2026/02/15/removing-the-bios-administrator-password-on-a-thinkpad-takes-timing/

Life

Audiophiles Can Not Differentiate Audio Signals Sent Through Copper/Banana/Mud in Blind Test.
https://headphonesty.com/2026/01/audiophiles-fail-copper-banana-mud-blind-test/

Other

SteamOS on ThinkPad P14s Gen 4 (AMD) is Quite Nice.
https://ounapuu.ee/posts/2026/02/09/year-of-the-linux-desktop/

Usual Suspects

BSD Weekly.
https://bsdweekly.com/

DiscoverBSD.
https://discoverbsd.com/

BSDSec.
https://bsdsec.net/

DragonFly BSD Digest.
https://dragonflydigest.com/

FreeBSD Patch Level Table.
https://bokut.in/freebsd-patch-level-table/

FreeBSD End of Life Date.
https://endoflife.date/freebsd

Phoronix BSD News Archives.
https://phoronix.com/linux/BSD

OpenBSD Journal.
https://undeadly.org/

Call for Testing.
https://callfortesting.org/

Call for Testing – Production Users Call.
https://youtube.com/@callfortesting/videos

BSD Now Weekly Podcast.
https://www.bsdnow.tv/

Nixers Newsletter.
https://newsletter.nixers.net/entries.php

BSD Cafe Journal.
https://journal.bsd.cafe/

DragonFly BSD Digest – Lazy Reading – In Other BSDs.
https://dragonflydigest.com

BSDTV.
https://bsky.app/profile/bsdtv.bsky.social

FreeBSD Git Weekly.
https://freebsd-git-weekly.tarsnap.net/

FreeBSD Meetings.
https://youtube.com/@freebsdmeetings

BSDJedi.
https://youtube.com/@BSDJedi/videos

RoboNuggie.
https://youtube.com/@RoboNuggie/videos

GaryHTech.
https://youtube.com/@GaryHTech/videos

Sheridan Computers.
https://youtube.com/@sheridans/videos

82MHz.
https://82mhz.net/

EOF