  • Yep, I think there are sound arguments for separating out your storage (NAS) and network (router/DNS/PiHole) infrastructure. After that, whatever suits your purpose. I virtualise all my serious services on one machine under Proxmox (mostly for ease of snapshots), then have another machine for things I’m fiddling with, usually also under Proxmox so they’re easy to move to production when I’m happy with them.


  • My NAS and production server run 24/7; I’ve got a dev server that I turn off if I’m not expecting to use it for a week or so. Usually when I do that, I immediately need it for something while I’m away from home. I’ve chosen equipment to try to minimize energy use, to allow for constant running.

    My view on a UPS is that it’s a crucial part of getting your availability percentage up. As my home lab grew into crucial services that replace commercial cloud options for me, that became more important. Whether it matters to you will depend on what you’re running and why.

    I’ve heard that one of the most likely times for a hard drive to fail is on power-up, and it also makes sense to me that heating/cooling cycles would be bad for the magnetic coating. So my NAS is configured to keep the drives spinning, and it hasn’t been turned off since I last did a drive change.


  • I agree. Get a domain name and point it to the internal address of your Nginx Proxy Manager (or whichever certificate-managing reverse proxy you’re used to). A bit of work initially, then trivial to add services afterwards.

    I didn’t really need encryption for my internal services (although I guess that’s good), but I kept getting papercuts with browser warnings, not being able to save passwords, and some services (e.g. the container registry on Forgejo) just flat out refusing to trust an HTTP connection.




  • I switched from Copilot to Codeium after only a couple of months of Copilot use - purely based on cost, since currently I’m just a hobby coder.

    The main difference I’ve noticed is that Codeium doesn’t seem as smart about the local context as Copilot. Copilot would look at how I’m handling promises in a project, and stick to that, whereas Codeium would choose a strategy seemingly at random.

    A second, and maybe more telling, example: I do my accounts using ‘plain text accounting’ in VS Code. This is a very niche approach to accounting software and I imagine it’s hardly in the training sets at all - there certainly wouldn’t be many accounts in the particular plain-text format (Beancount) I use sitting in public code repositories. Codeium doesn’t make any suggestions for entries as I’m entering transactions, whereas Copilot would see that the account names I’m using are present in another file in the project and suggest them, and would very quickly figure out the formatting of transactions and suggest entries correctly.




  • I run two local physical servers - one production and one dev (plus a third, prod2, kept in case of a prod1 failure) - and two remote production/backup servers, all running Proxmox, plus two VPSs. Most apps are dockerised, either inside LXC containers (on Proxmox) or straight on Ubuntu (on the VPSs). Each of the three locations also runs a Synology NAS in addition to the server.

    Backups run automatically, and I manually run apt updates on everything each weekend with a single Ansible playbook. Every host runs a little golang program that exposes memory and disk usage percentages as a JSON endpoint, and I use two instances of Uptime Kuma (one local and one on fly.io) to monitor all of those with keyword checks.
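    The Go program is trivial - roughly this sketch (the /stats path, port, and JSON field names here are illustrative rather than exactly what I run, and it assumes Linux because it reads /proc/meminfo):

```go
// A minimal sketch of the kind of host-stats endpoint I mean (not my exact program).
// It serves memory and root-filesystem usage percentages as JSON on :9000.
package main

import (
	"bufio"
	"encoding/json"
	"log"
	"net/http"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// stats is the JSON payload the endpoint serves.
type stats struct {
	MemUsedPercent  float64 `json:"mem_used_percent"`
	DiskUsedPercent float64 `json:"disk_used_percent"`
}

// memUsedPercent derives memory usage from /proc/meminfo (Linux only).
func memUsedPercent() (float64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	vals := map[string]float64{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 {
			continue
		}
		v, _ := strconv.ParseFloat(fields[1], 64)
		vals[strings.TrimSuffix(fields[0], ":")] = v
	}
	total, avail := vals["MemTotal"], vals["MemAvailable"]
	if total == 0 {
		return 0, nil
	}
	return 100 * (total - avail) / total, nil
}

// diskUsedPercent reports usage of the filesystem containing path.
func diskUsedPercent(path string) (float64, error) {
	var fs syscall.Statfs_t
	if err := syscall.Statfs(path, &fs); err != nil {
		return 0, err
	}
	total := float64(fs.Blocks) * float64(fs.Bsize)
	free := float64(fs.Bavail) * float64(fs.Bsize)
	if total == 0 {
		return 0, nil
	}
	return 100 * (total - free) / total, nil
}

func main() {
	http.HandleFunc("/stats", func(w http.ResponseWriter, r *http.Request) {
		mem, _ := memUsedPercent()
		disk, _ := diskUsedPercent("/")
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(stats{MemUsedPercent: mem, DiskUsedPercent: disk})
	})
	log.Fatal(http.ListenAndServe(":9000", nil))
}
```

    Each Uptime Kuma instance then just runs an HTTP keyword monitor against that endpoint, so I get alerted both when a host stops responding and when the response stops looking healthy.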

    So -

    • Weekly: 10 minutes to run the update playbook, and I usually SSH into the VPSs, have a look at the Fail2Ban stats and reboot them if needed. I also look at each of the Proxmox GUIs to check the backups have been working as expected.
    • Monthly: stop the local prod machine and switch to the prod2 machine (from backups) for a few days. Probably 30 minutes each way, most of it waiting for backups.
    • From time to time (if I hear of a security update), but generally every three months: look through my container versions and see if I want to update them. They’re on docker compose so the steps are just backup the LXC, docker compose down, pull, up - probably 5 minutes per container.
    • Yearly: consider whether I need to do operating system upgrades - e.g. to Proxmox 8, or a new Debian or Ubuntu LTS.
    • Yearly: visit the remote sites for a proper check, clean-up, and round of updates.

  • My ‘good reason’ is just that it’s super convenient - for backups and painlessly moving apps around between nodes with all their data.

    I would run plain LXCs if people nicely packaged up their web apps as LXC templates and made them available on LXCHub for me to run with lxc compose up, but they generally don’t.

    I guess another alternative future would be if Proxmox added Docker container supervision to their web interface, but you still wouldn’t have the neat self-contained snapshot system that includes the data.

    In theory you should be able to convert an OCI container layer by layer into an LXC, so I bet there are projects out there that attempt this.





  • I routinely run my homelab services as a single Docker container inside an LXC - they’re quicker than full VMs, and it makes backups and moving them around trivial. However, while you’re learning, a VM (with something conventional like Debian or Ubuntu) is probably advisable - it’s a more common setup, so you’ll get more helpful advice when you ask a question like this.



  • Yes, in a shallow tourist mine in Australia. Apparently coal starts to flake easily once it’s been exposed to air for a bit, so they kept a big chunk in a large jar of water that you could take out and handle. It felt like a light wet rock.

    The sample, and the coal at the workface of the mine, was stereotypically black. We wore hats with lights on, and when we emerged back out into the daylight I had an overwhelming urge to speak in a Monty Python-style Yorkshire accent, go home, and have my back scrubbed clean of coal dust by my swarthy, tired-looking wife while I sat in a tub in front of the fire in the kitchen and our urchins played in the street.

    I don’t want to give the impression I’m a big fossil fuel tourist, but I’ve also seen blobs of crude oil on beaches near Mediterranean Sea oil terminals.

    Sadly, I didn’t try to set fire to them on either of these occasions, which I now regret.





  • Your workload (a NAS and a handful of services) is going to be a very familiar one to members of the community, so you should get some great answers.

    My (I guess slightly wacky) solution for this sort of workload has ended up being a single Docker container inside an LXC container for each service on Proxmox - Docker for ease of management with compose, and separate LXCs per service for ease of snapshots/backups.

    Obviously there’s some overhead, but it doesn’t seem to be significant.

    On the subject of clustering, I actually purchased three machines to do this, but have ended up abandoning that idea - I can move a service (or restore it from a snapshot to a different machine) in a couple of minutes which provides all the redundancy I need for a home service. Now I keep the three machines as a production server, a backup (that I swap over to for a week or so every month or two) and a development machine. The NAS is separate to these.

    I love Proxmox, but most times it gets mentioned here people pop up to boost Incus/LXD, so that’s something I’d like to investigate - but my skills (and Ansible playbooks) are currently built around Proxmox, so I’ve got a bit of inertia.