What do you host at home?

There are a lot of things one can keep at home! Tell us what you have, and maybe why you have it there? What is special about your setup? Did you have a hard time setting it up? Do you have a backlog of things you want to set up at home but haven’t had time for?

I bet I’m not the only one hungry for details, so don’t be sparse :slight_smile:


Here goes!

My file server is currently just a 1 TB disk shared on the local network, contained in its own VM. I also have rtorrent installed on it for seeding Linux distros!

I used to use Syncthing but after hearing so much praise from Noah about Seafile I was intrigued to try it out. I’m using Seafile mostly to sync org files between devices. Also to upload pictures from my phone to the file server.

Funkwhale is my replacement for Google Play Music! I keep my music collection on the file server, but I use Funkwhale to serve the collection through a website. It also supports the Subsonic API, so I can stream to my mobile phone with the D-Sub app.

Those two are the big ones that I use daily. But I also have an internal DNS through BIND, a PostgreSQL instance for the software needing a database, pgAdmin 4 (which is really nice, actually), Apache/nginx, and a TeamSpeak 3 server that mostly collects dust.

I haven’t had time to work on anything for quite some time, but I hope to clear up some time soon so that I can start cleaning things up a bit. I also want to set up Prometheus with Grafana for graphs and monitoring.

Some things on my maybe list are a Matrix instance, as well as calibre-web for a virtual library reachable from anywhere.


I almost hate to respond since @MichaelTunnell is going to give me hell for not finishing the network diagram & write-up I promised… oh… maybe a year ago now. Oh well, here goes…

I run a FreeNAS server on an old Dell R510 with a Dell MD1200 (or MD1000, I can’t remember) DAS with ~160 TB of raw storage. I also have a Dell R710 running XCP-ng as my hypervisor, with about a dozen VMs (changes regularly) for tinkering & such. I’m also about to re-purpose an old desktop as a 2nd FreeNAS box in my office (the other is in the rack in the basement) for an additional on-site copy, plus faster access to files while in my office. The head of the network is a home-built pfSense box in a 1U case in the rack.

I have 10 Gb peer-to-peer between the R510 & R710, with the VM store on FreeNAS mounted via iSCSI on the hypervisor, so all of the VMs actually boot & run off of the FreeNAS share. Since it’s 10 Gb, you can’t tell it’s not local storage.
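On the hypervisor side, attaching an iSCSI store with open-iscsi looks roughly like this as a sketch (the target IP and IQN are invented examples, not my real ones; DRY_RUN=1 only prints the commands):

```shell
#!/bin/sh
# Sketch of attaching an iSCSI VM store from a hypervisor with open-iscsi.
# The target IP and IQN below are invented examples.
# DRY_RUN=1 prints the commands instead of running them.
DRY_RUN=1
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "$1"; else eval "$1"; fi; }

TARGET_IP="10.0.10.5"   # the FreeNAS box on the 10 Gb link
# Discover the targets offered by the FreeNAS box, then log in to the VM store.
run "iscsiadm -m discovery -t sendtargets -p $TARGET_IP"
run "iscsiadm -m node -T iqn.2019-01.org.freenas.ctl:vmstore -p $TARGET_IP --login"
```

After login, the LUN shows up as a regular block device that the hypervisor can format as a storage repository.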

As for software, the VMs are a mish-mash depending on my mood (and boredom). Usually, though, it’s a Debian derivative. So Debian, Ubuntu, Kubuntu, etc… However, I have been known to play with Red Hat or CentOS on a regular basis. As for what the VMs do, you name it: Nextcloud, Plex, Mosquitto (MQTT broker), MediaWiki, a Ubiquiti controller, etc… The big thing is Nextcloud. I’ve given up GDrive & the like for Nextcloud. It’s super fantastic & getting better every day. Highly recommend.

For super-critical data, I encrypt it & toss it up to a Backblaze bucket for offsite storage.
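Roughly like this, as a sketch rather than my exact pipeline (the paths, the "b2remote" rclone remote, and the bucket name are placeholders; DRY_RUN=1 just prints the commands):

```shell
#!/bin/sh
# Sketch: tar up the data, encrypt it symmetrically with gpg, and push the
# encrypted archive to a Backblaze B2 bucket via rclone.
# Paths, the rclone remote name, and the bucket name are placeholders.
# DRY_RUN=1 prints the commands instead of running them.
DRY_RUN=1
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "$1"; else eval "$1"; fi; }

run "tar czf critical-data.tar.gz /srv/critical-data"
run "gpg --symmetric --cipher-algo AES256 critical-data.tar.gz"
run "rclone copy critical-data.tar.gz.gpg b2remote:my-offsite-bucket"
```

The nice part of encrypting before upload is that Backblaze only ever sees opaque ciphertext.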

I also have an Udoo X86 SBC with a few Docker containers on it as well.

TBH, other than Nextcloud, pretty much everything I do is out of sheer boredom & not getting to do this sorta stuff at work anymore (I’m management, ugh). Maybe one day I’ll get that diagram & walk-through written…but don’t get your hopes up. :slight_smile:


What I currently host at home, but don’t let leak out onto the internet, is my FreedomBox, running on an original Raspberry Pi 3.

The FreedomBox is essentially a Debian distribution meant to make it easy to set up various services that you can also deploy onto the internet if you wish, or keep on the LAN.

You can find the FreedomBox here: https://freedombox.org/

The services I am currently running on it are Syncthing and Quassel IRC. I would like to add a MediaWiki for my own use, Radicale to sync my calendar, Roundcube for email, and Tiny Tiny RSS for the RSS feeds I still peruse. I’d also like to get my head around Tahoe-LAFS for NAS duties, since apparently FreedomBox has support for that as well.


Ahh, that seems quite nice! I hadn’t heard of FreedomBox before. Seems like they’ve been around for a while as well.

Haha, I know how it feels. I have a complete network diagram actually, only problem is that it was last updated a year ago. :sweat_smile:

That’s quite the setup though! At the top of my to-do list is to set up an off-site backup on Backblaze, and then to document the rest of the stuff. I’m a bit afraid of adding new things until the last thing added is fully documented. I had some issues getting pgAdmin 4 running, as well as Funkwhale… And if either of them had to be reinstalled today, I’m sure I’d have to reinvent whatever solution I found back then…

I was looking at Nextcloud before I decided on Seafile, the reasoning being that I didn’t need everything Nextcloud offered. But I am keeping an eye on its development; if I ever want a self-hosted office suite, I’ll go with Nextcloud for sure. :slight_smile:

I’m lucky enough to have received old servers from work: an HP G6 380 and a G5 360, set up as a home lab of OpenStack + OpenShift, as well as my stable server (built myself) featuring 2x Xeon E5670, 128 GB of memory, and about 20 TB of disk.

My services are all in containers, built on a CentOS 7 host that I mostly administer through Cockpit instead of SSH. I have a gateway on DO that gives me controlled access from anywhere, without the need for a VPN, to my mail, Emby, and build service; it also runs an OpenVPN instance (soon to be WireGuard) into my OpenStack lab.

I have 2 Raspberry Pis as well: one as a backup DNS, the other as a local backup host with a pair of 8 TB Seagate external drives to back up my important stuff.

All my servers are managed with SaltStack, and I recently did a complete (non-virtualized) test of restoring my systems to full working order when needed, which MOSTLY worked. I ended up getting all the infra back up in about 3 hours (instead of the 3 days it took initially).
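For anyone who hasn’t used Salt: a minimal sketch of that kind of setup looks roughly like this (the minion targets and state names here are invented examples, not my actual tree):

```yaml
# /srv/salt/top.sls: maps minions to the states they should apply
base:
  'backup-pi*':
    - backup
  'vmhost*':
    - docker
    - cockpit

# /srv/salt/cockpit.sls: one of the states referenced above
cockpit:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: cockpit
```

Then `salt '*' state.apply` rebuilds every box to its declared state, which is what makes a bare-metal restore mostly a waiting game.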

Damn I love Linux and FOSS!


I only have a backup server: a 2003 Pentium 4 HT (3.0 GHz), 1.25 GB DDR, and 3 IDE HDDs totalling 0.89 TB, running FreeBSD 12 + XFCE + Conky, with ZFS and all disks striped. Two cables: power and 1 Gbps Ethernet. The PC is controlled via "ssh -X" or RDP from a Windows 7 VM on the Ryzen desktop. The system is powered on for approximately 1 hour/week for the backups.

On the same network is my Ryzen 3 2200G with 16 GB DDR4, 1 SSD (128 GB), and 3 HDDs (2.5" 320 GB; 3.5" 500 GB and 1 TB) totalling 1.82 TB, booting and running Ubuntu 19.04 on ZFS, with all HDDs striped and the SSD serving as both boot device and cache. My data is stored with copies=2, so I have a kind of RAID-1 for one of the datasets/folders.

I also have a Dec 2011 HP EliteBook 8460p with an i5-2520, 8 GB DDR3, and a 1 TB SSHD, running Ubuntu MATE 19.10 booting and running on ZFS.

I back up my Ryzen data to both the laptop and the backup server (rsync), but I want to change that to ZFS send/receive, because it is considerably faster. The virtual machines are backed up by send/receive from 64-bit Ubuntu to 32-bit FreeBSD. The laptop has the same VMs, but with different VirtualBox (.vbox) and Conky (.conkyrc) settings, so after the initial copy they are kept up to date locally.

26 Oct: All backups use ZFS send/receive now, and it is considerably faster. The incremental backup to the laptop took a few seconds, running at ~90 MB/s. Copying data from desktop to laptop is faster than copying locally on the laptop’s 1 TB SSHD. The incremental backup to the backup server runs at ~20 MB/s, with one of the Pentium’s hyper-threads at 95% load.

This is what I currently have running from my home:

  • Discourse test forum
  • MineOS
  • Nextcloud
  • nginx reverse proxy
  • Quassel
  • UniFi controller
  • WireGuard

This is all on a 4C8T machine running XCP-ng, using Xen Orchestra to manage it. I also run a FreeNAS box for all my network storage/backup needs. My FreeNAS has a single jail that runs:

  • qBittorrent

This is what’s on my to-do list, or what I have been tinkering with lately:

  • Bitwarden (Ended up using their cloud hosting)
  • A VM for learning Docker
  • One Hour One Life server that I spin up when I have people who want to play but don’t have a paid licence for the game
  • Windows Server/7/8/10
  • Zabbix monitoring

Although I do not own real server hardware, I do play with multiple VMs on my desktop. I currently run a WireGuard VPN, Pi-hole, NoTrack, and an NFS server for my DigitalOcean VPS (on which I just host MediaWiki and some music). Apart from my desktop host, all of the systems run Debian 10. I manage the QEMU/KVM VMs with virsh, virt-manager, and other lovely tools.

I have played with nextcloud in the past, but my curiosity moved on to other stuff.

I have been wanting to do some (virtual) distro hopping, but it’s much more time- and resource-consuming to set up a VM with a desktop environment than to clone a 4 GB CLI VM. And I happen to find other things to do on days off.

Do all of you do sysadmin work for a living? These setups are great! I have a Nextcloud I never use and Emby for TV/movies on an old desktop running Ubuntu LTS. I am going to watch DasGeek’s Pi-hole video and set that up as well, not so much for ad-blocking as for Google-blocking.

I’ve been a sysadmin/developer for over 20 years. Currently a DevOps admin for a larger company, after working for myself for 10 years.

It’s all pretty simple once you learn the basics. I’d suggest doing some of the free courses on Linux Academy.


I have a Pine A64 board with Docker. I run a SmokePing container and a The Lounge IRC container. That’s about it. I did try Nextcloud, but it never really stuck. Maybe if I had a faster box. The Lounge is really amazing though, if anyone is looking for an IRC client/bouncer.


I host a publicly-accessible (after an invite is granted) Mattermost team server on a (locally hosted) Raspberry Pi 4. The Pi 4 runs Ubuntu 20.04 64-bit. Using a combination of WireGuard and HAProxy (on a VPS, allowing a public connection), I “punch” through the firewall I’m behind (the internet access here is double-NATted, so port forwarding is a very ugly prospect for me). I use a wildcard Let’s Encrypt SSL cert in front of Mattermost on the Pi (nginx is a proxy in front of Mattermost locally on the Pi). These SSL cert files are hand-installed, and new certs get generated on a VPS via a manual DNS-based challenge.
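The manual DNS-challenge cert generation on the VPS is roughly this (the domain is a placeholder; certbot pauses and asks you to publish a TXT record before it will issue; DRY_RUN=1 only prints the command):

```shell
#!/bin/sh
# Sketch of generating a wildcard cert with certbot's manual DNS challenge.
# The domain is a placeholder. certbot will pause and ask you to create an
# _acme-challenge TXT record before issuing.
# DRY_RUN=1 prints the command instead of running it.
DRY_RUN=1
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "$1"; else eval "$1"; fi; }

run "certbot certonly --manual --preferred-challenges dns -d *.example.com"
```

The resulting fullchain/privkey files are what get hand-copied over to nginx on the Pi.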

Yes, that’s proxied twice. This works because HAProxy on my VPS merely forwards TCP packets, leaving the HTTPS traffic inside the packets unmolested. nginx on the Pi actually works with the contents of the packets, encrypting and decrypting the SSL. My VPS sees nothing but SSL-encrypted Mattermost traffic.

This nifty networking trick, where I first establish a WireGuard connection to my VPS (using the “keepalive” option on the Pi 4) and then use HAProxy to send TCP-forwarded packets down the WireGuard tunnel (thereby punching the firewall without needing port forwarding), is one I have dubbed the “Subzero” firewall puncher.
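Sketched as config fragments, the two halves of the trick look roughly like this (all keys, IPs, ports, and hostnames are placeholders, not my real values):

```
# /etc/wireguard/wg0.conf on the Pi 4 (sketch)
[Interface]
PrivateKey = <pi-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.8.0.1/32
# The "chain": keeps the tunnel open outward through the double NAT
PersistentKeepalive = 25
```

```
# haproxy.cfg on the VPS (sketch): mode tcp = raw pass-through, TLS untouched
frontend mattermost_in
    bind *:443
    mode tcp
    default_backend pi_over_wg

backend pi_over_wg
    mode tcp
    # The Pi's address inside the WireGuard tunnel
    server pi 10.8.0.2:443
```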

Let me explain this “Subzero” analogy (the combined use of WireGuard and HAProxy) a little more. It’s sort of like in the original Mortal Kombat video game back in the day, where my favorite character Sub-Zero would throw a hook thing on a chain at his opponent, which would stick in their neck; then he would say “Come here” and pull them close for an uppercut. That’s sort of like the periodic WireGuard keepalives holding a connection to the VPS (the “chain”), and then HAProxy is like the “hook thing”, hooking the traffic down the WireGuard connection to the Pi 4. The “uppercut” is a reference to the firewall being punched (and it’s not any end-user who gets “punched”). :slightly_smiling_face:

I apologize that the “Subzero” analogy involves violence. My server does nothing which is of a bad or dark nature BTW. It’s used for a totally legitimate, above-the-board purpose. My motivation for posting this method is to prevent it from getting patented somehow in the future. I hereby release this method to the public (and may the ultra-rich hi-tech tycoons like Jeff Bezos not take over the world!)

Perhaps a few others have already figured out and used my “Subzero” trick (I followed no other comprehensive guide to do this all as one coherent system/solution), but I think I’m the first to put a name to it. This trick is especially ideal for me, because my heartless ISP gets to remotely update the firmware of the dodgy local Wi-Fi routers here all they want, and they won’t ruin my setup by erasing, without warning, all port-forwarding rules, which is a huge annoyance that has happened to me multiple times before with other self-hosted services.


I have a Pi-hole (on a Raspberry Pi), OPNsense on a 4-NIC NUC, Ansible in a VM, and Nessus and OpenVAS in Docker. Docker and the VMs (KVM) run on a Ryzen-powered CentOS 7 box.

My home server setup is pretty basic these days.

I have a RockPro64 with a 2 TB USB 3 external drive attached, which runs these in Docker containers:

  • Jellyfin
  • Home Assistant

I have a one-node oVirt cluster and a ZFS storage node.

ZNC, VyOS, Emby, Jellyfin, and a few future projects.

UPDATE from Oct 2019
I have a $20 backup server: a 2003 Pentium 4 HT (3.0 GHz), 1.25 GB DDR (400 MHz), and 4 HDDs totalling 1.2 TB, running FreeBSD 12.1 on ZFS with XFCE, XRDP, and Conky. The leftover HDDs are 2x 3.5" IDE (250 + 320 GB) and 2x 2.5" SATA-1 (2x 320 GB). The system has two external cables: power and 1 Gbps Ethernet. The PC is controlled by Remmina from my Ryzen desktop. The system has been in use for more than a year and is powered on for less than 1 hour/week for the backups. The $20 was for a new iTech 600W power supply and a third-hand Compaq Evo tower with two stickers.

I back up my Ryzen desktop to both the laptop and the backup server. All backups use ZFS “send | ssh receive”. The incremental backup to the backup server runs at ~200 Mbps instead of 1 Gbps, with one of the Pentium’s CPU threads at 95% load; the load is caused by the network process, not by ZFS. Taking a snapshot takes less than a second, and afterwards the latest snapshots are sent to the backup server. Because of the snapshots, the desktop can be used normally during the backup, so I don’t care whether it takes 1 or 60 minutes. Only the modified records are sent, and they are sent compressed, because both sides store them lz4-compressed (compression ratio 1.8).
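For anyone curious, the flow is roughly this as a sketch (the pool/dataset names and the hostname are examples, not my real layout; DRY_RUN=1 just prints the commands instead of running them):

```shell
#!/bin/sh
# Sketch of the snapshot-then-incremental-send backup flow described above.
# Pool/dataset names (tank/data, backup/data) and the host "backupserver"
# are invented examples. DRY_RUN=1 prints the commands instead of running them.
DRY_RUN=1
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "$1"; else eval "$1"; fi; }

PREV="tank/data@2020-10-19"          # last snapshot already on the backup box
NOW="tank/data@$(date +%Y-%m-%d)"    # today's snapshot

run "zfs snapshot $NOW"
# Only the records changed since PREV cross the wire.
run "zfs send -i $PREV $NOW | ssh backupserver zfs receive backup/data"
```

The snapshot is the reason the desktop stays usable during the backup: the send reads from a frozen point-in-time copy, not the live filesystem.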

This backup is a little miracle of modern software compatibility, because it goes:

  • from a 2019 AMD Ryzen to a 2003 Intel Pentium
  • 64-bit to 32-bit
  • Ubuntu 20.04 LTS to FreeBSD 12.1
  • Linux to Unix/BSD
  • NVMe SSD to IDE HDD

I have a Dell OptiPlex lying around.
The following services are hosted on it:

  • Bitwarden ( password manager )
  • Nextcloud
  • Syncthing
  • qBittorrent
  • Guacamole ( VNC, SSH, RDP )
  • code-server
  • Searx ( meta search engine )
  • AdGuard Home ( DNS ad-blocker / Pi-hole alternative )
  • MariaDB
  • Adminer ( to manage DBs via web UI )
  • Authelia ( SSO with 2FA )
  • Heimdall ( dashboard to launch services from one location )
  • Traefik ( reverse proxy )
  • Jellyfin ( media server )

All of these run in containers with Docker. I have dynamic DNS set up with dynu.com and a personal domain, secured with SSL from Let’s Encrypt, and all services are behind SSO with 2FA.
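As a sketch of the wiring (not my actual compose file: the image tag, domain, email, and resolver name are placeholders), Traefik terminating Let’s Encrypt TLS in front of one service looks roughly like this:

```yaml
# docker-compose.yml sketch: Traefik with the Docker provider and an ACME
# resolver, routing one container by hostname. Placeholders throughout.
version: "3"
services:
  traefik:
    image: traefik:v2.4
    command:
      - --providers.docker=true
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=admin@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports:
      - "443:443"
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro

  jellyfin:
    image: jellyfin/jellyfin
    labels:
      - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)
      - traefik.http.routers.jellyfin.entrypoints=websecure
      - traefik.http.routers.jellyfin.tls.certresolver=le
```

Each additional service just gets its own set of router labels; Traefik picks them up from the Docker socket automatically.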

Anyone running FreeIPA or OpenLDAP? I’m curious to hear your thoughts.