There is an overarching belief that most things we want to do on the internet can only be accomplished by signing up for accounts with big technology companies. While it is true that a quick and dirty way to write and share documents may be Google Docs, or to track your music listens may be Last.fm, there is an often-overlooked option out there too: running it yourself.
As we hobbyists look for alternatives to regain control of our data and privacy, the idea of running a Home Lab becomes more appealing. Instead of creating that Google Doc, you create a fully shareable document in Nextcloud. Video chat with Zoom can be replaced by Jitsi. These are just a small handful of services that can run in a Home Lab and give you access to the things you need. And it’s freeing to run them with complete control over what is private, how it’s shared, and the ability to pull the plug if you don’t like it.
I’ll spoil the story right here: a Home Lab is just a spare computer in your home that’s running all the time. Some are connected to the internet and some are not. It’s a machine used to run whatever services you want on it. While I’m going to dive into the meat of this as we go, those words you just read always hold true: A Home Lab is simply a spare computer on your home network. Nothing more.
A Home Lab could be an old laptop you don’t use anymore but is still sitting on a shelf. Perhaps it’s that dusty old beige PC tower sitting in your basement. Or a cheap computer you found at a yard sale. Whatever the source, the added benefit of a Home Lab is that you’re repurposing old computers. Instead of adding to the world’s e-waste, putting an old computer back into service can be a big win.
For me, the advantage is moving away from Big Tech for ordinary things I want to do. For example: I want to store scanned documents someplace secure. I could place them online, but that locks me into Apple or Microsoft or whoever’s ecosystem. Or I could fire up a service such as Paperless-ngx and store them on my Home Lab. Those scanned documents live in a folder in a place I designate, but now I have access to them from anywhere in the world, and I know exactly how they’re being controlled.
Another example is music. I’ve been building my music library since high school, when MP3s began gaining traction in the late 90s. I’ve lived through Napster, Kazaa, LimeWire, and the iTunes Music Store. Combine all that with digital copies of music I have from buying vinyl editions of albums, and well… I have a lot of music I own. My music, again just files in folders, is on my Home Lab. I run Plex Media Server on that computer, point that server software to those music folders, and I now have all my music available on the go via their Plexamp application.
These are only two examples, but they illustrate what I call “Digital Freedom,” where I can choose what I want to use and change it at any time. There’s no lock-in with Plexamp because my music files live on my server, nobody else’s. If I don’t like a change Plex makes, I can switch software at any time. It’s the same way people get tired of Hotmail or AOL email and decide to move to another service: you take your email with you by migrating and you’re set up elsewhere. This is what Digital Freedom is about. By “rolling it myself,” I give myself choice and freedom at all times.
A final example is Mastodon. When I left Twitter after you-know-who bought it, I started on Mastodon.social. After a few months, I had two issues with this decision:
- I was on the largest Mastodon instance and contributing to a lack of diversification of servers. Because “dot social” is so big, if it went down, it would take a significant piece of the Fediverse down with it. And because Mastodon servers talk to one another, there is no need to be on Mastodon.social at all.
- I had zero control over moderation, federation/defederation, or other decisions. So if mastodon.social decided to defederate from an instance where I had friends, I’d be cut off with no recourse.
With that in mind, I got to figuring out how to run my own Mastodon instance. It took a bit of work, but two years later I’ve managed to keep it running and available 98% of the time. I used Mastodon’s migration feature and never lost anyone I follow. Now, when an instance is showing signs of hosting bad actors, I can make my own decisions and know it doesn’t affect anyone besides me. Because it’s on my own Home Lab, it doesn’t cost me anything extra. That’s another perk: I’m not saddled with the high hosting costs that instance admins have to pay each month, often while begging for donations. For me, Mastodon is no different than my music or scanned documents: I’m only bound by the free space on my drives and the specs of my machine.
The reasons to get a Home Lab going aren’t limited to social media or scanned documents. There are tens of thousands of projects on GitHub or Codeberg, and many offer self-hosted options. Before signing up for a service, giving a self-hosted alternative a test drive can be beneficial, and doing it on a machine you control adds privacy.
“Okay, I have a spare computer. Now what?” you may be asking. This is where forks in the road begin, mainly because even an older computer needs to have up-to-date software on it. This is to address security issues and make sure services you want to run are able to get going.
First, it’s important to make sure any files you care about on these spare machines are saved somewhere else. Formatting or erasing large swaths of the hard drive is not uncommon.
Second is a universal caveat: if you are exposing a computer to the internet, there are inherent security risks. While I won’t get into specifics here, know that once you expose a computer, there are minor but important steps needed to secure it. For further reading look into buying your own domain, Cloudflare tunnels/proxying, using a reverse-proxy with Caddy/getting certificates from Let’s Encrypt, and limiting the ports you open on your machine.
Finally, in order to use any of these services outside your home, your ISP has to allow inbound connections (typically on ports 80 and 443), and you must configure your router to forward that traffic to your Home Lab. For now, we’re only focusing on using a Home Lab locally on your network at home.
If you’re using a Mac, it’s as simple as running Software Update and getting the latest of everything available. Upgrading the operating system (OS) to the latest version Apple supports for that hardware is recommended. Even if you can’t get the newest edition of macOS, Apple continues to ship security patches for older versions for many years.
For Windows users, you’ll want to get at least onto Windows 10 if you can’t get onto Windows 11. The alternative is to wipe the machine and go the Linux route. The advantage of switching to Linux is that more services run natively on it, and Linux generally runs faster and more securely on older hardware than Windows does. There are tons of versions of Linux, but two of the more popular ones are Linux Mint and Ubuntu. You’ll need to do some homework on what’s best for the hardware you’re working with, but those two are good places to start.
While having a spare Linux machine sitting around is less common, if that’s what you’re rolling forward with, simply make sure it’s fully updated.
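On Linux Mint or Ubuntu, for example, that’s typically just two commands in the terminal (a minimal sketch; other distributions use different package managers):

sudo apt update   # refresh the list of available packages
sudo apt upgrade  # install any available updates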
With that done and your machine up and running, we’re going to focus on some key tools and applications that make up many Home Labs and offer the fastest way to get up and running.
Docker
When it comes to “spinning up” a service on a Home Lab, there is no faster way than Docker. It can take some time to wrap your head around just what Docker is, but I’ll do my best to explain. Docker is an application that runs each service you want in its own isolated environment, called a “container” (think of it as a lightweight virtual machine). Each container is walled off from the others, which means your services don’t conflict and everything stays tidy. Think of Docker as one of those huge container ships, and each service you run as one of the big metal containers stacked up on that ship. Docker is the ship; each service is a container. Docker supports as many containers running simultaneously as your hardware can handle. Because Docker is the “ship,” it’s simple to shut everything off by simply quitting Docker. You can also tweak Docker’s settings to only use X amount of memory or Y number of your CPUs for all containers. You steer the ship. Docker is completely free, and you can grab it from Docker.com.
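Once Docker is installed, a quick sanity check in the terminal confirms it’s working (the exact output will vary by version):

docker --version        # print the installed Docker version
docker run hello-world  # download and run Docker’s tiny test image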
Text Editor
In order to get a container running in Docker, we’re going to need to create text files that instruct it how to create, or compose, the container. The advantage of Docker is that these Compose files are written in plain English and formatted as YAML. A Docker Compose file tells Docker simple things like where to download the service from, what to name it, where to store its files, what ports to open, and so on. You can endlessly edit a Compose file, rebuild the container, check your results, and make further changes. Many, many services require almost no configuration when starting out. This means you can get a container running with all the defaults, see if it suits your needs, and refine the setup if it does.
So, we’re going to need a text editor. Any text editor will do, but some are better than others. On the Mac, BBEdit is quite good, along with Sublime Text and VS Code. For those on Windows, there’s Notepad++, and either Sublime Text or VS Code will also suffice.
Terminal
Known as the Command Prompt in Windows, the terminal is where a lot of the action takes place. Yes, using remote software to see your Home Lab is great, but knowing your way around the command line simply makes life easier. You can find all the commands across the internet and in your system’s help files, but I’m going to list some common ones I use all the time, with a short example session after the list.
- ls – list the contents of the directory you’re currently in
- cd <folder name> – change directory to the named folder
- cd .. – go up one folder in the directory tree
- cd / – go to the root (highest) directory (on Windows, it’s cd \)
- mkdir <folder name> – create a folder with the name you specified
- docker compose pull – download (pull) the latest version of the Docker image you want to turn into a container
- docker compose up -d – create a Docker container using the Compose file in that folder and start it. The “-d” means it runs detached from the terminal and will continue running even if you quit the terminal application.
- docker compose down – shut down the single container defined in the directory you’re currently in.
- docker ps – list all running containers. This will give you every container’s name and ID number, which can be critical if troubleshooting is needed.
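Here’s a minimal sketch of how those commands fit together, assuming you’ve already saved a Compose file inside a hypothetical “my-service” folder:

cd ~/docker/my-service   # move into the service’s folder (path is just an example)
ls                       # confirm the Compose file is there
docker compose pull      # download the image it points to
docker compose up -d     # create and start the container, detached
docker ps                # confirm it shows up in the list of running containers
docker compose down      # shut it back down when you’re done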
I’m going to run you through a quick example of running a simple service in Docker and accessing it within your local network.
There is a fantastic service called Multi-Scrobbler. This software allows you to log scrobbles (songs you listen to) from any source and write them to multiple destinations. I currently scrobble my music to Last.fm, but I also want to log those listens to ListenBrainz because that service isn’t owned by a big company. Additionally, I want to log those same listens to my own self-hosted scrobbling system.
To get started we’re going to create a new Compose file in our text editor. The contents will look something like this:
services:
  multi-scrobbler:
    image: foxxmd/multi-scrobbler
    container_name: multi-scrobbler
    environment:
      - TZ=America/New_York
      - BASE_URL=http://MyPCName.local:9078
      # all Environmental Variables in below examples go here!
      # - DEBUG_MODE=true
    volumes:
      - ./config:/config
    ports:
      - 9078:9078
    restart: unless-stopped
If all you did was place that into a text file, save it, and run “docker compose up -d” in the directory where you saved it, you’d be off to the races. Before we do that, though, let’s go over the file because, as you can see, it’s easily readable by a human.
In the Compose file we’re telling Docker which image to download (“foxxmd/multi-scrobbler”), what I want to name it (you can call it anything you want), my time zone, where to store its files (as defined in the “volumes” section), which port on the computer maps to the port inside the container (9078), and to restart the container automatically unless I intervene and stop it manually.
I recommend creating a main folder for anything you want to run in Docker. Call it anything you want. Then inside that directory, create a folder for each service you run. This keeps all files isolated from one another. In my case, I have a folder called "multi-scrobbler."
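For example, from the terminal (the top-level “docker” folder name is just my convention; call it anything you like):

mkdir ~/docker                  # one main folder for everything Docker-related
mkdir ~/docker/multi-scrobbler  # one subfolder per service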
Docker Compose files are all written in YAML, a format where indentation matters, and you can ensure your file has the correct formatting by using something like yamllint.com. All Compose files are saved with the same name: “docker-compose.yml.” As long as the formatting is correct, you would navigate to your “multi-scrobbler” folder and save the file there with that name.
With that done, open Terminal, navigate to the “multi-scrobbler” folder, and execute the “docker compose up -d” command. You’re done. If you open a web browser, type in your Home Lab’s IP address (or its computer name followed by “.local”), and add “:9078” to the end, you should see the multi-scrobbler service starting up. There are more configurations we would need to go through to get it fully operational, but 80% of the work is already done. We have a fully running service operating on our Home Lab, ready to do its job.
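If the page doesn’t load, a couple of quick checks from that same folder will usually tell you what’s going on:

docker ps               # is the multi-scrobbler container listed and marked “Up”?
docker compose logs -f  # follow the container’s logs as it starts; Ctrl+C to stop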
Docker is quite powerful, again, because it acts like a platform for all these containers. If we find out the developer of Multi-Scrobbler has updated it, two Docker commands are all we need to type to update it.
In that same folder in Terminal, we would execute “docker compose pull” and the updated version will download. Once it’s done, typing “docker compose up -d” will recreate the container and restart it as the new version. Two steps and it’s done. Because we’re saving all our configurations and data in the folder we defined in “volumes,” we won’t lose any data.
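In other words, assuming the same folder layout as before:

cd ~/docker/multi-scrobbler  # the folder holding the Compose file
docker compose pull          # download the newer image
docker compose up -d         # recreate the container from it and start it again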
Want to run another service? It’s as simple as creating a new folder next to “multi-scrobbler,” creating a Compose YAML file, and running the commands in the terminal. As your desire to run new services grows, so will your stack of containers. Each one will chug along doing its own thing.
If you’re looking for an even easier, Docker-free way to get started, give music a try. Grab some of your MP3s, rip a CD, or download some royalty-free or public-domain music from the Internet Archive. Drop them into a folder. You can then install Plex Media Server and create a music library by pointing it to that music folder. Now you can listen to that music from any device with the Plexamp application installed. As long as you know how to navigate your file system, the time from getting started to playing your tunes through the app on your phone is under 30 minutes. And it’s yours. Your music, playing through your system, completely private and under your control. No algorithms or data harvesting in sight.
This, again, is why Digital Freedom is gaining importance here and now. Lock-in is prevalent everywhere. AI is coming to scoop up every ounce of data it can find. The privacy of our information is at risk every day, whether for profit, for mining, or through breaches. A Home Lab is a way to take a stand against established companies looking out for their bottom line. To them, we are simply an “active user” or “subscription revenue.” To me, you, us, and everyone who values privacy, a Home Lab is a flag planted on the internet declaring control, declaring the space we are entitled to, and declaring that other choices exist.
After running my Home Lab for over five years, I can reliably say Digital Freedom feels damn good.
Aaron Crocco can be summed up as a writer, but complicating this further is the fact that he's a huge geek, Apple fanboy, and loves Back to the Future so much that he owns a Delorean. In the mid-2000's, his writing spark was reignited and he's been tearing up the keyboard ever since. Aaron's 2015 debut sci-fi thriller novel SPIRIT HACKERS explores the intersection of technology and the afterlife. His Twilight Zone-esque story CHRONO VIRUS rocked readers in 2012 with its original take on space travel. In 2013, he released the tie-in story CHRONO VIRUS: FALL OF THE HORIZON. Aaron's apocalyptic debut series AS DARKNESS ENDS explores the end of the world from many points of view. In 2021 Aaron shifted into something new with TimeMachiner, a newsletter focused on technology, culture, and nostalgia. He hails from Long Island, NY, and enjoys hockey, rock & roll, coffee, and way too much ice cream.