I’m still running a 6th-generation Intel CPU (i5-6600K) on my media server, with 64GB of RAM and a Quadro P1000 for the rare 1080p transcoding needs. Windows 10 is still my OS from when it was a gaming PC, and I want to switch to Linux. I’m a casual user on my personal machine, as well as with OpenWrt on my network hardware.

Here are the few features I need:

  • MergerFS with a RAID option for drive redundancy. I use multiple 12TB drives right now and keep my media types separated between them. I’d like one pool that lets me flex space between shares.
  • Docker for *arr/media downloaders/RSS feed reader/various FOSS tools and gizmos.
  • I’d like to start working with Home Assistant. Installing with WSL hasn’t worked for me, so switching to Linux seems like the best option for this.
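For the pooling requirement, the usual pairing is MergerFS for the unified mount plus SnapRAID for parity-based redundancy. A sketch of what the two config entries look like (mount points, drive names, and the 50G reserve are illustrative placeholders, not recommendations):

```shell
# /etc/fstab — pool three data drives into one mount at /mnt/pool.
# category.create=epmfs places new files on the drive that already holds the
# parent path, so media types stay grouped while presenting a single pool;
# moveonenospc retries on another drive if one fills up.
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  defaults,category.create=epmfs,moveonenospc=true,minfreespace=50G  0 0

# /etc/snapraid.conf — one 12TB drive dedicated to parity for the others:
# parity  /mnt/parity1/snapraid.parity
# content /mnt/disk1/snapraid.content
# data d1 /mnt/disk1
# data d2 /mnt/disk2
# data d3 /mnt/disk3
```

Note SnapRAID parity is snapshot-based (run `snapraid sync` on a schedule), which suits mostly-static media libraries rather than live data.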

Guides like Perfect Media Server say that Proxmox is better than a traditional distro like Debian/Ubuntu, but I’m concerned about performance on my 6600k. Will LXCs and/or a VM for Docker push my CPU to its limits? Or should I do standard Debian or even OpenMediaVault?

I’m comfortable learning Proxmox and its intricacies, especially if I can move my Windows 10 install into a VM as a failsafe while building a storage pool with new drives.
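For reference, moving an existing Windows install into a Proxmox VM is mostly a disk-imaging exercise. A hedged sketch using Proxmox’s `qm` tooling — the device name, VM ID 100, and storage name `local-lvm` are all placeholders; verify the device with `lsblk` before running anything destructive:

```shell
# Sketch only — assumes an empty VM 100 was already created in the Proxmox GUI
# and the old Windows disk appears as /dev/sdX on the host.
dd if=/dev/sdX of=/var/lib/vz/images/win10.raw bs=4M status=progress

# Import the raw image as a new disk attached to VM 100 on local-lvm storage:
qm importdisk 100 /var/lib/vz/images/win10.raw local-lvm

# Attach it as SATA so Windows boots without VirtIO drivers pre-installed:
qm set 100 --sata0 local-lvm:vm-100-disk-0
```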

  • Justin@lemmy.jlh.name · 2 days ago

    I’m not saying it’s bad software, but the times of manually configuring VMs and LXC containers with a GUI or Ansible are gone.

    All new build-outs are gitops and containerd-based containers now.

    For the legacy VM appliances, Proxmox works well, but there’s also OpenShift Virtualization (aka KubeVirt) if you want to take advantage of the Kubernetes ecosystem.

    If you need bare-metal, then usually that gets provisioned with something like packer/nixos-generators or cloud-init.
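    On Proxmox, for example, the cloud-init flow is just a clone plus a few settings — the template ID 9000, VM ID 101, and names below are illustrative placeholders:

    ```shell
    # Sketch: clone a prebuilt cloud-init template (VM 9000) into a new VM
    # and inject credentials and network config instead of hand-installing it.
    qm clone 9000 101 --name docker-host --full
    qm set 101 --ciuser admin --sshkeys ~/.ssh/id_ed25519.pub --ipconfig0 ip=dhcp
    qm start 101
    ```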

    • Matt The Horwood@lemmy.horwood.cloud · 2 days ago

      Yes, but no. There are still a lot of places using old-fashioned VMs; my company is still building VMs from an AWS AMI and running Ansible to install all the stuff we need. Some places will move to containers, and that’s great, but containers won’t solve every problem.

      • Justin@lemmy.jlh.name · 1 day ago

        Yes, it’s fine to still have VMs, but you shouldn’t be building out new applications and new environments on VMs or LXC.

        The only VMs I’ve seen in production at my customers recently are application test environments for applications that require kernel access. Those test environments are managed by software running in containers, and often even use something like Openshift Virtualization so that the entire VM runs inside a container.

        • Matt The Horwood@lemmy.horwood.cloud · 1 day ago

          but you shouldn’t be building out new applications and new environments on VMs or LXC

          That’s a bold statement; VMs might be just fine for some.

          Use whatever is best for you. If that’s containers, great; if that’s a VM, sure. Just make sure you keep it secure.

        • catloaf@lemm.ee · 1 day ago

          Some of us don’t build applications, we use them as built by other companies. If we’re really unlucky they refuse to support running on a VM.

          • Justin@lemmy.jlh.name · 1 day ago

            Yeah, that’s fair. I have set up Openshift Virtualization for customers using 3rd party appliances. I’ve even worked on some projects where a 3rd party appliance is part of the original spec for the cluster, so installing Openshift Virtualization to run VMs is part of the day 1 installation of the Kubernetes cluster.

    • marauding_gibberish142@lemmy.dbzer0.com · 1 day ago

      Sometimes, VMs are simply the better solution.

      I run a semi-production DB cluster at work. We have 17 VMs running, and it’s resilient (a different team handles VMware and the hardware).

    • lka1988@lemmy.dbzer0.com · edited · 2 days ago

      Why would you install a GUI on a VM designated to run a Docker instance?

      You should take a serious look at what actual companies run. It’s typically nested VMs running k8s or similar. I run three nodes, with several VMs (each running Docker, or other services that require a VM) that I can migrate between nodes depending on my needs.

      For example: One of my nodes needed a fan replaced. I migrated the VM and LXC containers it hosted to another node, then pulled it from the cluster to do the job. The service saw minimal downtime, kids/wife didn’t complain at all, and I could test it to make sure it was functioning properly before reinstalling it into the cluster and migrating things back at a more convenient time.

      • Justin@lemmy.jlh.name · 1 day ago

        I’m a DevOps/Platform Engineering consultant, so I’ve worked with about a dozen different customers on all sorts of environments.

        I have seen some of my customers use nested VMs, but that was because they were still using VMware or similar for all of their compute. My coworkers say they’re working on shutting down their VMware environments now.

        Otherwise, most of my customers are running Kubernetes directly on bare metal or directly on cloud instances. Typically the distributions they’re using are Openshift, AKS, or EKS.

        My homelab is all bare metal. If a node goes down, all the containers get restarted on a different node.

        My homelab is fully GitOps; you can see all of my Kubernetes manifests and NixOS configs here:

        https://codeberg.org/jlh/h5b

    • Possibly linux@lemmy.zip · 2 days ago

      You are going to what, install Kubernetes on every node?

      It is far easier and more flexible to use VMs and maybe some VM templates and Ansible.