So, what's your Linux week been like?

I have 64GB on my laptop and rarely even get to 25% of capacity on it. How would you utilize 12TB on a desktop, real question! :slight_smile:

Sorry for the confusion. I should have said 12 TB DRIVE. Not RAM. LOL. That would be a lot!

The reason I have so many large drives in this desktop is that it functions as a server in my “homelab.” One of those functions is a media server. Plus I tend to be a data hoarder.

1 Like

I gotcha. I have 5.5 TB across 6 drives in a RAID 10 configuration. I only have 45 GB available… So… I guess I need to do some work here. :grimacing:

LOL that’s thin on 5.5 TB. For storage, I have been running Western Digital Reds for the past 3 years or so. I caught a deal in December on a Red Plus 12 TB drive for $246. Hard to pass up! I run individual drives formatted to ext4.

I wish I had enabled Zstd compression when I set the array up almost 4 years ago. I’m sure it’s time to start swapping out drives and growing the thing. The thing is, I don’t actually KNOW how to do it, as in, no experience. So, I need to start looking for drives and begin the process. I think I have the space for 2 more drives, but I would have to check again.
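For what it’s worth, if that array is btrfs (just a guess on my part; the mount point below is made up), compression can still be turned on after the fact: new writes pick it up via a mount option, and existing data can be recompressed with a defragment pass. Rough sketch:

$ # assuming a btrfs array mounted at /mnt/array (hypothetical)
$ sudo mount -o remount,compress=zstd /mnt/array          # new writes get zstd from here on
$ sudo btrfs filesystem defragment -r -czstd /mnt/array   # recompress existing files

One caveat: the defragment pass rewrites files, so it can unshare data held in snapshots or reflink copies and temporarily increase space usage.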

I was contacted by someone for whom I had maintained and managed hundreds of domains in the past.

Them: I just sent you a clip of code that you wrote for me. Can you tweak it to make it do xyz?

Me: (Opens the file and sees that it’s a zip of a cgi-bin folder with perl code that I wrote over 22 years ago). Maybe I could write something a little more up to date.

I love Openbox but of course, it’s not actively maintained. I started looking for a Wayland replacement. Playing around with Hikari WM on XWayland. I want to like it and it has some nice features but it’s a little buggy and the learning curve is steep. I’m going to see if I can get used to it for a couple of weeks but I’m already looking towards labwc and Waybox. This could all become pointless once I’m able to get Qtile working on Fedora 38.

1 Like

$ df -h | grep sda
/dev/sda1        11T   11T  100G  100%  /media/mark/plexmedia

Ughh. Looks like I’ll be buying 18 TB drives now. I remember when 12 TB was overkill.

Haha, I’m at the same place myself. Media files will always expand to fill the available space.

And to think I was once happy with a few ZIP disks!

Ahh, Zip disks seemed enormous once.

This week has been awesome! I’m pursuing a GCP certification, but in the meantime I’m playing around with RHEL certifications and working a lot with Kubernetes. I’m tempted to test RancherOS, and I think I will get my hands on a Steam Deck to play around with. Have a nice week, people!

1 Like

Manage your Steam Deck with Kubernetes installed on RancherOS hosted by GCP.

I’ve been playing with converting part of my video library on my home server from H264 to H265.

I didn’t think it’d make much difference until after the first test: 3GB files (per episode) from one series came down to around 500MB each! I’d been seeing lots of reports of results like this around the 'net but was skeptical.

Quality is still great - picture is nice and sharp and clear and the sound is good. Basically can’t tell the difference at all. Both Kodi and Jellyfin play them fine.

I’ve now set Unmanic to run on selected folders each night and each morning I wake up and the server has more free space.

And to think I was about to start replacing the hard drives with bigger ones! Eventually I will, but not for a while now…

2 Likes

Thanks for this tip. I have been doing some of this with ffmpeg and hardware acceleration – but still sticking with H264. I will be looking into your solution.

In the meantime, I had already started replacing 12 TB drives with 18 TB drives.

Well, actually, Unmanic is a frontend to ffmpeg anyway, so you could do it yourself that way with the appropriate CLI parameters.
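For anyone who wants to try a one-off by hand, a minimal software-encode sketch looks roughly like this (file names are placeholders, and the CRF/preset values are just a starting point, not Unmanic’s exact settings):

$ # H.264 -> H.265 re-encode, keeping the original audio untouched
$ ffmpeg -i episode.mkv -c:v libx265 -crf 26 -preset medium -c:a copy episode-h265.mkv

A lower CRF keeps more quality at the cost of size; -preset slow squeezes out a bit more compression in exchange for encode time.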

Unmanic in a Docker container is also an easy way to schedule the process. Since it hammers the CPU, I can just have it run while the household is sleeping and keep the server responsive during the day :wink:
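For reference, the container setup is roughly this (image name, port, and volume paths are from memory, so double-check them against the Unmanic docs; the host paths are placeholders):

$ docker run -d --name unmanic \
    -p 8888:8888 \
    -v /opt/unmanic/config:/config \
    -v /srv/media:/library \
    -v /tmp/unmanic:/tmp/unmanic \
    josh5/unmanic

The scheduling itself (library scan times, worker count) is then configured in the web UI on port 8888.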

2 Likes

This week in Linux I put Debian 12 on my second laptop. My first install attempt failed; I re-downloaded the installer, tried again, and it worked beautifully. I appreciate the default GNOME installation and the fairly low RAM usage, compared to Fedora that is. Fedora will be staying on my main machine for a while yet.
Yeah, I really liked Debian and will keep it for a while. I had never tried it before, and this was way overdue. I feel like a grown-up Linux user now.

I upgraded my “Debian Server”, which was flawless except for my own mistake: I gave it permission to overwrite my logind.conf file for systemd with the new packaged version. The default choice was to leave the original file in place. I forgot that I edit this file to keep my “Debian Server” awake when the laptop lid is closed. Thankfully, I had fully documented in my personal wiki what changes I had to make to that file. A quick edit and a sudo service systemd-logind restart later, and I was back in business.
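For anyone wanting to do something similar, the lid behaviour is controlled by a few settings in /etc/systemd/logind.conf; something along these lines (not necessarily my exact edit) keeps a closed laptop running:

HandleLidSwitch=ignore
HandleLidSwitchExternalPower=ignore
HandleLidSwitchDocked=ignore

$ sudo service systemd-logind restart   # or: sudo systemctl restart systemd-logind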

I enjoy Fedora on my desktops and servers too, but I like to keep my fingers in Debian with at least one machine, plus a few desktops that run MX Linux.

My job for Friday: update the VirtualBox VMs on the HDD.

A screenshot of 5 workspaces in use, from left to right: Windows XP, VirtualBox Manager, Windows 10, Linux Mint, and an empty workspace.
Win XP is used as a jukebox and plays my music while I update my ~20 older VMs, which are stored on a 2 TB HDD. My newer and more frequently used VMs are stored on a 512 GB NVMe SSD. I use the OpenZFS filesystem. The HDD performance is improved by the SSD cache (L2ARC, 128 GB) and the memory cache (L1ARC, 4 GB). Note that the storage (HDD & NVMe SSD) and both caches (L1ARC & L2ARC) are all lz4 compressed.
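For reference, that layout boils down to a handful of ZFS commands; the pool and device names below are hypothetical:

$ sudo zfs set compression=lz4 tank                      # lz4 on the pool, inherited by all datasets
$ sudo zpool add tank cache /dev/nvme0n1p3               # a 128 GB NVMe partition as L2ARC
$ echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max   # cap the ARC (L1ARC) at 4 GiB

The L2ARC only caches reads, so it mostly helps with the repeated reads during VM updates rather than with the writes.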

Windows 10 Pro and Linux Mint 21.1 are updated at the same time on workspaces 3 and 4, and that is the maximum my 16 GB Ryzen 3 2200G can handle: 3 working VMs. Everything is close to being the bottleneck: the HDD slows down the updates, the memory is almost full, and the CPU load is often close to 100% :slight_smile:

Just for info, booting Linux Mint from the cached HDD takes ~18 seconds and Windows 10 takes ~40 seconds.

My job for Saturday: update the VirtualBox VMs on the NVMe SSD, plus some unexplained problems!

A screenshot of 5 workspaces in use, from left to right: Windows XP, VirtualBox Manager, Windows 11, Ubuntu 16.04, and an empty workspace.

Windows XP Home, Windows 11 Pro, and Ubuntu 16.04 ESM were all updated. After its maintenance, XP was used as a jukebox again. I had to update my 6 main VMs: the already-mentioned ones plus Ubuntu Studio 20.04 LTS, Ubuntu 22.04 LTS, and Xubuntu 22.04 LTS. Five of the other VMs are 3 flavors from the development branch of 13.10, plus Ubuntu encrypted using UEFI and Ubuntu based on good old MBR. The last VM is a prototype of the snap-based immutable Ubuntu Core 24.04. I could not yet get the Guest Additions installed in the read-only OS :frowning:

After the updates I took snapshots of all datasets and started to back everything up. However, I started getting problems with my first backup to the laptop: the transfers slow down for up to a minute, to 400 to 4000 KB/s. The disk’s SMART readout is OK. The disk is new, and its power-on time is still given in days.

I use a 1 Gb/s connection, and as expected the backup (ZFS send | ssh receive) starts at ~90 MB/s, but it slows down to a few MB/s and even KB/s. During those slow periods the laptop (2C4T) runs at 100% load on a few CPU threads for many seconds. Occasionally it recovers and runs for a few seconds at 45 MB/s. The HDD temps were high, ~51°C according to hddtemp, but the operational temp can be up to 60°C according to Seagate. If I stop the sender, the HDD keeps processing buffered data for many seconds (>10).
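For context, the pipeline is essentially this (pool, dataset, snapshot, and host names are placeholders, and the exact flags will differ):

$ zfs snapshot tank/vms@weekly-02
$ zfs send -i tank/vms@weekly-01 tank/vms@weekly-02 | ssh backuphost zfs receive -F backup/vms

A common tweak for bursty send/receive pipes is to insert mbuffer between the two ends (e.g. ... | mbuffer -m 1G | ssh ...), which adds back a large userspace buffer now that the ARC is no longer used for this.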

Nothing actually goes wrong; if I keep it running, it will complete at some point, in say 2 hours. The sending side is either the NVMe SSD or a much faster 3.5" HDD, say 192 MB/s vs. 108 MB/s. Strangely, I have only had this issue since they changed the buffering in OpenZFS send and receive. With that change they bypassed the L1ARC memory cache, with its huge spare buffer capacity, and wrote their own local buffering optimized for CPU load on servers.

I’m not happy with the disk location in the HP laptop: it is sandwiched between two other plastic surfaces with no spare room and only a small ventilation grille at the side, where really hot air comes out. The current local temperature is ~36°C. To exclude temperature issues, I will try it again tonight with the ambient temperature below ~25°C.

I’ve tried it at night at 26°C and the disk temps were in the forties. I also set the CPU governor to “performance”, but all of that did not really help. The whole backup took ~3 hours. The backup was big because of:

  • The many Linux kernel upgrades;
  • The relatively large Windows upgrades;
  • Zeroing out the unused space in all VMs that are still used frequently or receiving updates (a sketch of that is below).
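For the curious, the zeroing looks roughly like this for a Linux guest (paths and file names are placeholders); with lz4 on the host datasets, blocks of zeros then take essentially no space there or in the send stream.

Inside the guest:
$ sudo dd if=/dev/zero of=/zerofill bs=1M ; sync    # fill free space with zeros; dd stops when the disk is full
$ sudo rm /zerofill && sync

Optionally, on the host with the VM shut down, a dynamically allocated VDI can also be shrunk:
$ VBoxManage modifymedium disk /path/to/guest.vdi --compact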

I intended to do the zeroing once per month, but after a month it only saved between 25 and 30 GB on say 1.5 TB (roughly 2%), so I intend to do it less frequently, say once per quarter.

My disk fragmentation is between 17% and 27%, so I expect that with one disk, relatively frequent head moves are needed in a data pool with ~30% free space, free space that is fragmented too.

ASSUMPTIONS:
It buffers a relatively large number of records, and that is why it still writes records for ~10 seconds after I kill the send operation. Occasionally it needs to reorganize those memory buffers, and that causes the 100% CPU-thread loads. During that time it writes fewer records and can receive less data from the network. I assume the system is optimized for servers with many disks in RAID configurations and not for a single laptop disk. We are also suffering partly from the low random write throughput of the disk, as measured by e.g. CrystalDiskMark.

In the past, using the L1ARC, there was much more buffer space, and typically once every 5 seconds those writes were flushed to disk sorted by disk address to minimize seek delays.

On my 2nd backup server, based on a Pentium 4 HT (FreeBSD 13.2), I reach a constant 29 MB/s, limited by a 95% load on one CPU thread, but there the writes go to 3 disks in parallel (RAID 0), which neutralizes the cost of disk head movements by spreading the work across the other disks.