So, what's your Linux week been like?

I love playing with old hardware and Linux. It’s fun, but the challenge is usually finding any semblance of documentation for some of these devices and their various quirks. This is especially true for Mac computers running Linux, so I finally took some time to start documenting the issues I’ve had post-installation. I’m not a developer, so I feel like documentation is someplace I can actually contribute.

Finally, my GitHub might get some real use!

This week I did my weekly back-up of my OpenZFS datasets in a new way. Two computers are involved: my 2019 Ryzen 3 2200G desktop ($349) and my 2003 Pentium 4 HT backup server (built from leftovers, so $0). Note that the Pentium only has 2 cables connected: power and Ethernet (1 Gbps). The screens of both computers, running Ubuntu 22.04 LTS and FreeBSD 13.1 respectively, are perfectly integrated:

From the Ryzen you can see the Ubuntu dock on the left, plus the Ubuntu top bar and the Conky read-out on the right.
All the other stuff is from the Pentium 4 HT, including the wallpaper and the Conky read-out on the left.
Pay special attention to the lower part of the Ubuntu dock, below the separator: there you see the icon for the FreeBSD terminal, a Windows XP VM icon, and the standard icons (Trash and Menu). The Ubuntu top bar also contains the indicator for the Xfce terminal running on FreeBSD on the Pentium 4 :slight_smile:

During the backup I run the XP VM to play my music with WoW and TrueBass effects :slight_smile:

I get this configuration by starting the connection with ssh -X user@ip-address, and afterwards I start Xfce on FreeBSD with startxfce4. Then I start the scripts for the OpenZFS backup process (a zfs send piped over ssh into a zfs receive). That running process explains the 93% load on one Pentium CPU thread, limiting the transfer speed to ~24 MB/s (~200 Mbps).
The limitation is obvious from the CPU Passmark ratings: the 2019 Ryzen 3 2200G scores 6693 points and the 2003 Pentium 4 HT scores 262 points. One of the 2 cheapest and slowest Ryzen CPUs is 25x faster than this Pentium 4 HT.
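For anyone curious, the whole setup boils down to a few commands. This is only a sketch: the address, user, pool and snapshot names below are made up for illustration, not my actual scripts:

```shell
# X-forwarding session into the Pentium 4 (user and address are placeholders)
ssh -X user@192.168.1.50

# inside that session: start the Xfce desktop of the FreeBSD box,
# which then appears integrated into the Ubuntu session
startxfce4

# the backup itself: an incremental OpenZFS send on the Ryzen,
# piped over ssh into a receive on the backup server
zfs send -i tank/data@lastweek tank/data@today | \
  ssh user@192.168.1.50 zfs receive -F backup/data
```

The ssh process doing the encryption is what pins that one Pentium thread.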

In the past I used Remmina, but after the upgrade from FreeBSD 13.0 to 13.1 it stopped working. I decided to sort that out later and to monitor the process over ssh, and just for fun I started Xfce, with this (for me) amazing result. Last week I decided to keep it. I have now used it for the 2nd time and will be using it from now on, also because I gained ~10% in transmission speed due to more effective CPU usage.

I have experience as a HW & SW developer from 17-3-69 till 1-1-11, but this result really caught me by surprise. The Ubuntu/Ryzen screen is nicely integrated with the FreeBSD/Pentium screen, even though FreeBSD uses X and Ubuntu uses Wayland.

Power EXTREEEEEEEEEEEEEEEEME :bangbang: @MichaelTunnell @dasgeek

This week I’m installing Pi-hole for an org that I belong to. This org does some unusual blacklisting and whitelisting of domains (and sometimes even subdomains), and the blacklisting/whitelisting applies to only some of the computers on the LAN, not all of them.

My Pi-hole docker container has been deployed about a week now (53 client machines/gadgets on my LAN here), and I’m surprised at how much junk is getting blocked! 5% of all DNS queries today, a new high:
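For reference, a minimal version of that kind of container deployment looks roughly like this; the port mapping, timezone and password below are placeholders, not my actual settings:

```shell
# Pi-hole in Docker: DNS on port 53, web UI mapped to 8080 (placeholder values)
docker run -d \
  --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 8080:80/tcp \
  -e TZ="Europe/Amsterdam" \
  -e WEBPASSWORD="changeme" \
  -v pihole-etc:/etc/pihole \
  --restart unless-stopped \
  pihole/pihole
```

Then point the LAN clients’ DNS at the Docker host and watch the query log fill up.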


Dear Pi-hole devs, thanks for your awesome software!


I employed my Raspberry Pi 1 B+ as a print server over the last few weeks. It’s working great for Windows, Linux and Android devices, although I’m getting an annoying “Bad Request” error when connecting to CUPS via hostname rather than IP, so I’m not sure what that’s about.
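One guess, in case it helps: newer CUPS versions validate the HTTP Host header and answer “Bad Request” for names they don’t recognize, and a ServerAlias line in cupsd.conf is the usual fix. A sketch, with a placeholder hostname:

```shell
# /etc/cups/cupsd.conf -- accept requests addressed to this hostname
# ("ServerAlias *" accepts any Host header, at some security cost)
ServerAlias mypi.local

# then restart CUPS to apply:
sudo systemctl restart cups
```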

I’ve been doing a bit of looking at self-hosting a push notification provider for my de-Googled Android phone. Seems like another task for my Nextcloud server once that’s set up again after moving. And I’ll probably be switching my LineageOS over to Lineage with signature spoofing so I can install microG as well. Should be interesting!

EDIT: I spoke too soon. The print server suddenly doesn’t print a dang thing. I’m using a Brother HL-L2320D that normally wouldn’t have any network capabilities. It worked perfectly for a week; now no pages print, even though CUPS reports the job completes and the status light on the printer blinks. Ooof!

Just started a new job at a VFX studio as a SysAdmin. The environment is primarily macOS & Linux. Compared to my old job as IT Analyst for a hotel, I’ve never felt so in control. Before, I had to call vendors if I needed something fixed, or go through a lot of red tape. Now I can make those changes myself, and can always consult my IT teammates if I need help. And I’m not just the IT of one location, as I’m also supporting other offices.


Well done.


Congrats. Spend your free time reading the man pages. Report it as training and research. Reap the harvest.


This week I’ve been connecting to a remote site over OpenVPN, and testing which methods of accessing files on a (Linux OpenMediaVault) fileserver are fastest. I needed to do common things like browse large folder structures for interesting files, download files in bulk, search for files remotely, and estimate total folder sizes (recursively totalling up all sub-folders as well).

Here are my findings, after several hours of tinkering:

  • Syncthing is the all-around fastest, if you have write access over a (very large) remote folder you’d like to copy in bulk (Syncthing needs to create a zero-size hidden file in any remote folder you want to sync). Syncthing is especially fast when you have many tiny files to copy, like a bunch of ebooks or smallish documents/spreadsheets. But be warned: there is a considerable learning curve to Syncthing; best used by geeks only. Not for “normies”. :wink:
  • Filezilla is great for fast browsing around in a huge folder structure. FTP-over-TLS is as blazing fast as Syncthing for file transfers, but chokes as soon as there are any Unicode chars in the file or folder names (potential fix here for the proftpd server of OpenMediaVault). Filezilla can also do SFTP, which has no Unicode problems and is very peppy for folder browsing, but slower (than Syncthing or FTP) for larger file transfers.
  • For remotely searching for files, nothing beats just SSH’ing into the remote server and using fdfind on the command line.
  • For remotely estimating total folder sizes, a very fast and fun ncurses-based folder-size calculating app is “ncdu”. Again, SSH in to use it.
  • SMB/CIFS browsing/previewing/downloading/searching/filesize totalling always sucked very badly for speed over OpenVPN. :-1: Avoid!
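For the two SSH-based bullets above, the session looks roughly like this; the server name and paths here are made up:

```shell
ssh user@omv-server.lan

# search the share for files matching a pattern, case-insensitively
# (the fd binary is packaged as "fdfind" on Debian/Ubuntu)
fdfind -i 'holiday' /srv/share

# interactively browse recursive folder sizes
ncdu /srv/share
```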

If you just want one app to do it all, Filezilla wins as the all-around best go-to app, IMHO.


This week I learned how to use virt-manager (Virtual Machine Manager). There was a learning curve in getting the installation .iso attached to a new VM, and in the virtual networking, but now that I’ve played with it for a while using a test VM, I’m quite impressed.
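In case it flattens the curve for anyone else: the same new-VM steps (attach an installer .iso, pick the default NAT network) can also be scripted with the virt-install CLI that ships alongside virt-manager. A sketch, with made-up names, paths and sizes:

```shell
# create and boot a new VM from an installer ISO (all values are placeholders)
virt-install \
  --name testvm \
  --memory 2048 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom ~/isos/debian-11.iso \
  --os-variant debian11 \
  --network network=default
```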

@esbeeb Were you using any other VMs prior, such as VirtualBox? I’m curious about the learning curve from a noobie’s viewpoint vs. someone who is crossing over.

I have past experience with Oracle VirtualBox and VMware Workstation, and disliked both of them (they get too naggy over time with marketing, or upgrades to the host software break guest VMs). Last time I tried GNOME Boxes, it was really immature.

I noticed that GNOME Boxes has a flatpak now (it has perhaps matured since I tried it about 2 years ago), but I didn’t want to waste about 1 GB of my SIM card’s data to try it, when the virt-manager-related downloads were just a few hundred MB total.

I think GNOME Boxes has matured a lot since then. I used it about a month ago just to check it out, because I had a discussion about the program on another website. After trying it for a little bit, I still don’t use it religiously like I do virt-manager, but it has the benefit of being able to spin up a VM really quickly in comparison.


Yeah, it’s not bad. I set my wife’s Pop!_OS install up with Windows 7 on GNOME Boxes… it works really well, and my wife is stoked she can use her old photo editing software (Picture It).

This week I took a look at Webmin 2.0, for remote administration of a Debian server with no desktop environment.

A non-technical user will probably hate it, as it has an overwhelming number of things you can change. There is no hiding of advanced or rarely-used features behind “Advanced” tabs, expanding options, etc. The most common things are not brought to the forefront to simplify what you see at first, while inviting you to dig deeper if you are a really geeky user. A good GUI shows only 3-7 choices at a time.

Having said this, I, for one, really appreciated it, because so many of the things I saw live in far, far uglier places you’d otherwise need to know about and change on the command line: the sorts of stuff that grizzled Unix admins only come to know about after many years of battle-hardened field experience.

Great examples:

  • the ability to graphically allow or deny lists of users for SSH login:

  • The Bell icon on the right informs you when there are security updates to install; you can then install them graphically.
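Under the hood, that SSH allow/deny screen is managing sshd directives; the hand-edited equivalent (with made-up usernames) would be:

```shell
# /etc/ssh/sshd_config -- only these accounts may log in over SSH
AllowUsers alice bob
# or, inversely, block specific accounts:
# DenyUsers mallory

# reload sshd to apply (the service is named "ssh" on Debian):
sudo systemctl reload ssh
```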

Edit: there is a system of marking various places as favorites (click the stars), so I was able to find the 7ish things I would probably actually use on a regular basis:

This serves to simplify things somewhat.


Have you tried Cockpit? If so, how does it compare?

I took a quick look at cockpit (as packaged for Debian 11, ver 239-1), but decided to go back to Webmin.

Here are my thoughts, comparing the two:

Cockpit has better packaging in Debian 11: it was just an “apt install” away. (Webmin, in contrast, was a .deb downloaded from dodgy SourceForge servers, though there is an md5sum to verify integrity.)

Cockpit’s UX was like 10x cleaner and more consistent than Webmin’s. Cockpit is a much more modern web app (Webmin will hurl you into the past by 10 or 20 years… forget using Webmin from a smartphone). Red Hat did a good job designing Cockpit, albeit Cockpit has far fewer features than Webmin (for example, I can’t find any place to monitor /var/log/syslog); not enough features to cover all the basics, in fact. Webmin’s too many features (or at least poorly-organized features, from a UX perspective) are preferable, to me, to not enough features.

While it was possible to install a pre-1.0-release file manager into Cockpit, it had nowhere near the maturity and feature-completeness of the file manager bundled with Webmin.

Webmin’s bundled file manager alone made it worth using! It had tabs, bookmarks (to favorite folders), RESTful URLs, etc.

Although Cockpit’s killer feature was the ability to administer Podman containers, I highly doubt that it can do all the admin and maintenance tasks a real-world admin would actually end up needing over the longer term. A real-world admin will likely have to drop down to the command line anyway to do all the things Cockpit doesn’t accommodate, so it’s of limited use if it doesn’t cover everything you’d need. I’m unaware of even one person (among the fellow geeks I know) who manages Podman containers long term (say, for 3 years running, including security updates, backups/restores, etc.) entirely from Cockpit (or from Cockpit whatsoever).

Anyone out there who loves and uses Podman from Cockpit, and never needs to drop down to the CLI?