Hi. Since yesterday I’ve been self-hosting all my stuff on a Raspberry Pi and two Odroids. Everything works OK, but after reading about a few apps that aren’t supported on the SBCs’ ARM architecture, and about the advantages of Proxmox’s backup solution, I bought a little server (6500T/8GB/250GB) to try Proxmox.

I installed Proxmox, but now, before I install my first VM, I have a few questions:

a) Which Linux OS should I use? Ubuntu Server?

b) Should it be headless?

The server is in the cellar of my house, so would there be any advantage to installing an OS with a GUI?

  • ikidd@lemmy.world

    Pretty much all my VMs are headless Debian, whatever purpose I’m using them for. I’ve tried Ubuntu, but it has done some weird shit with snaps over the years, things like installing Docker as a snap alongside the apt package I already had, then shitting itself unpredictably until I figured out what was going on.

    If you can, use LXCs where appropriate to reduce overhead. An LXC container uses far fewer resources than a full VM. You can even set up Docker inside a Debian LXC; I set up a few hosts like that to partition my applications.
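
    For example, on the Proxmox host you can create a Debian container and install Docker inside it with pct, roughly like this (the VMID, template, storage and bridge names are assumptions, adjust them to your setup):

      # On the Proxmox host: create an unprivileged Debian LXC
      # (VMID 200, template/storage/bridge names are examples)
      pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
        --hostname docker-host \
        --memory 2048 --cores 2 \
        --rootfs local-lvm:8 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp \
        --unprivileged 1 --features nesting=1
      pct start 200

      # Inside the container: install Docker from Debian's own repos
      pct exec 200 -- apt update
      pct exec 200 -- apt install -y docker.io

    The nesting=1 feature is what lets Docker run inside an unprivileged container.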

    There’s little reason to install a desktop environment for a server. Learn how to set up SSH keys and use the command line; most server applications don’t have a GUI anyway, unless they provide a web page for admin, in which case you don’t need a DE either.
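
    For example, key-based SSH from your desktop into a VM is only a couple of commands (the user and hostname below are placeholders):

      # On your desktop/laptop: generate a key pair if you don't have one yet
      ssh-keygen -t ed25519 -C "homelab"

      # Copy the public key to the VM (user/host are placeholders)
      ssh-copy-id user@my-vm.lan

      # From now on you log in without a password
      ssh user@my-vm.lan

      # Optional hardening on the VM: set "PasswordAuthentication no" in
      # /etc/ssh/sshd_config, then restart the SSH service
      sudo systemctl restart ssh

    After that, every box is one ssh command away, and tools like scp, rsync and Ansible reuse the same keys.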

    If you do need remote access with a GUI, try installing a Guacamole webtop instance to remote into, and manage your services from that.

  • eros@lemmy.world

    If you’re setting up Proxmox either use the Proxmox ISO or start with Debian Bookworm. The only Linux machines I have with a GUI are my desktop and my laptop, both running Debian with KDE. All my servers run Debian unless there’s a good reason not to.

    • moddy@feddit.de (OP)

      Yes, that is what I am used to.

      I guess headless is better for performance, and I don’t really see an advantage to a GUI anyway.

      Another question: why do you have several Debian VMs? You could also just use one, right?

      • towerful@programming.dev

        I use multiple VMs, and group things either by security layer or by purpose.

        When organising by layer, I have a VM for reverse proxies. Then I have a VM for middleware/services. Another VM (or multiple) for database(s). Another VM for backend/daemon type things.
        Most of them end up running docker, but still.

        Lets me tightly control access between layers of the application (if the reverse proxy gets pwnd, the damage is hopefully contained there. If they get through that, they only get to the middleware. Ideally the database is well protected. Of course, none of that really matters when there’s a bug in my middleware code!)

        Another way to do it is by purpose.
        Say you have media server things, network management things, CCTV things, productivity apps, etc.
        Grouping all the media server things in one VM means your DNS or whatever doesn’t die when you whiff an update to the media server. Or you don’t lose your CCTV when you somehow link its storage directory into the media server and then accidentally delete it. If that makes sense.

        Another way might be by backup strategy.
        A database hopefully has point-in-time backup/recovery systems in place, whereas a reverse proxy is just some config (hopefully stored on GitHub) and can easily be rebuilt from scratch.
        So you could also separate things by how “live” the data is, or how often something is backed up, or how often something gets reconfigured/tweaked/updated.

        I use VMs to section things out accordingly.
        Takes a few extra GB of storage/memory, has a minor performance impact. But it limits the amount of damage my dumb ass can do.

        • thirdBreakfast@lemmy.world

          I run one VM that some small Docker containers go on, but whenever I’m trying something out it’s always in a Debian or Ubuntu VM - things usually just work more easily. If it turns out to be a service I’m serious about running, then I’ll sometimes spend the time to set it up in its own LXC. Even for a single Docker container.

          I much prefer each service in its own VM or LXC, for that same reason. Easier backups, easier to move to other nodes, easier to see the resources being used.

          @moddy with that processor and your 8GB you have plenty of room to play with multiple VMs. Headless Ubuntu is probably the best place to start, just because of the volume of results you get when googling issues. Enjoy.

          • moddy@feddit.de (OP)

            OK, I will have to check out what an LXC is before I start, but that helped a lot. Thanks.

            • thirdBreakfast@lemmy.world

              It’s a bit like Docker, in that it’s a sort of isolated system, but you use it more like a virtual machine (VM). It’s lighter than a VM because it uses the host kernel, so you can run lots of them without consuming too many resources.

              In the Proxmox web interface, up in the top right corner there’s a “Create CT” button. If you click through all that (once again I’m recommending Ubuntu) you’ll have your first LXC container up in a couple of minutes - quick creation is another advantage over VMs. One of the joys of your excellent choice of Proxmox as a base is that you can easily experiment with things like this.
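
              If you’d rather do the same thing from the Proxmox host’s shell, it looks roughly like this (the VMID, template and storage names are assumptions; pick whatever pveam actually lists for you):

                # Refresh the template catalogue and download an Ubuntu template
                pveam update
                pveam download local ubuntu-22.04-standard_22.04-1_amd64.tar.zst

                # Create and start the container (VMID 101 is just an example)
                pct create 101 local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst \
                  --hostname test-ct --memory 1024 --cores 1 \
                  --rootfs local-lvm:8 \
                  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
                  --unprivileged 1
                pct start 101
                pct enter 101   # drops you into a shell inside the container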

      • grue@lemmy.world

        Another question: why do you have several Debian VMs? You could also just use one, right?

        As I wrote in my other reply, you typically want a separate VM for each service so that the OS configurations don’t conflict, and also so that you can shut down the VM for one service (e.g. for installing updates or migrating to another cluster node) without causing downtime to other services.

  • daFRAKKINpope@lemmy.world
    1. Debian, unless I’m doing something specific like Home Assistant OS
    2. Yeah, usually. A GUI uses a lot of system resources just to sit there and be unused. That said, I do have a Windows VM for Quicken that I remote into to manage my family’s finances. Of course, that one isn’t headless.
  • NeoNachtwaechter@lemmy.world

    You mentioned selfhosting, so it’s safe to assume you want to install servers. Servers are headless by default.

    But the proxmox admin web interface also makes it easy to access a VM’s GUI remotely.

  • vividspecter@lemm.ee

    Do you actually need a VM for your use case? You might use docker containers or LXC instead.

    Normally I use VMs for situations where a container isn’t available (Windows, OpenWrt) or where a VM is better supported (arguably Home Assistant).

    • Nilz@sopuli.xyz

      This indeed. To OP: if you use LXC containers created from the templates Proxmox provides, they are headless by default. A GUI is a waste of resources.

      • grue@lemmy.world

        No, containers are basically sandboxed applications+dependencies running on top of the host’s kernel. VMs run their own separate kernel. If anything, a container is less “wrapped” than a VM.

      • specimen@lemmy.world

        Containers share the system’s resources with the OS; VMs take these resources for themselves.

      • Melmi@lemmy.blahaj.zone

        Docker containers are more like LXCs—in fact, early versions of Docker used LXC under the hood, but the project diverged over time and support for LXC was eventually dropped as they switched to their own container runtime.

      • calm.like.a.bomb@lemmy.dbzer0.com

        Nope. Docker containers are kind of “virtual filesystems”, with the programs running on top of the host’s kernel. They’re just isolated processes running on their own volume - to which you can also attach external “volumes”.
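
        A quick way to see that for yourself, assuming Docker is already installed somewhere:

          # The kernel version reported inside a container...
          docker run --rm alpine uname -r

          # ...matches the host's kernel, because containers share it
          uname -r

        A VM, by contrast, boots its own kernel and reports whatever its guest OS ships with.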

  • b) Should it be headless?

    As most people have said, a server is typically headless because it has less overhead. But it’s going to depend on your use case and needs. If you have the spare RAM/CPU/disk and want to put a GUI on every VM, you can. In my case, most of my VMs are headless, with a couple that have a GUI out of necessity.

  • grue@lemmy.world

    a) What Linux OS do i take? Ubuntu Server?

    Typically folks either pick what they like best or pick what’s recommended by the service they’re trying to run. (Remember, typically you run one service per VM, so everything about the VM can be tailored to that service. That’s pretty much the whole point of virtualization – so that you don’t have to get multiple services cooperating on the same machine.)

    My default go-to would be Debian, but again, it’s really a matter of personal preference.

    b) Should it be headless?

    GUIs take up disk space, RAM and CPU cycles, so it’s more efficient not to have them (especially when you’re virtualizing and therefore running separate copies per VM). However, this is 2023, not 1993, so it’s not that big a deal.

    would there be any advantages of installing an OS with a GUI?

    The advantage would be that you could administer the VM and the service inside it using a GUI, if you’re into that sort of thing.


    In general, most services are designed to be administered over SSH or via a web interface, so a GUI shouldn’t be necessary. Also, in general you ought to be scripting the administration of your VMs themselves using e.g. Ansible, so a GUI shouldn’t be necessary for that, either.
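
    As a rough illustration of what that scripting looks like in practice (the inventory file, group name and playbook name here are made up):

      # Ad-hoc Ansible over SSH: check that every VM in the inventory answers
      ansible -i inventory.ini all -m ping

      # Apply pending package updates to the "vms" group (Debian/Ubuntu guests)
      ansible -i inventory.ini vms -b -m apt -a "upgrade=dist update_cache=yes"

      # Or run a playbook that describes each VM's desired state
      ansible-playbook -i inventory.ini site.yml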

  • lemming741@lemmy.world

    Lots of good advice here. Something I did was make a base Debian VM that has the common tasks already done: my network configs, Docker installed (but not enabled), the guest agent installed, SSH root password login (until I’m done setting it up). When I want to try something, I clone the base VM, change the IP, and enable Docker if it’s needed. Then I set up my new services, copy my SSH key over, and disable SSH password login.

    Try to stick to the same distro and you give KSM (kernel same-page merging) a better chance of reducing memory usage.
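
    A rough sketch of that clone-and-tweak flow from the Proxmox host, plus a quick KSM check (the VMIDs and addresses are examples; the --ipconfig0 step only applies if the base VM is cloud-init enabled):

      # Clone the prepared base VM (ID 9000 here) into a new full clone
      qm clone 9000 110 --name new-service --full

      # If the base VM uses cloud-init, set the new IP before first boot
      qm set 110 --ipconfig0 ip=192.168.1.110/24,gw=192.168.1.1
      qm start 110

      # On the Proxmox host: a non-zero value means KSM is merging pages
      cat /sys/kernel/mm/ksm/pages_sharing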

  • Decronym@lemmy.decronym.xyz (bot)

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer letters  More letters
    DNS            Domain Name Service/System
    IP             Internet Protocol
    LXC            Linux Containers
    SSH            Secure Shell for remote terminal access


  • PuppyOSAndCoffee@lemmy.ml

    Usually it’s handy to have a display during initial setup and config. Also, with X11 port forwarding … you access your server GUI over the network like god intended :)

    • towerful@programming.dev

      In Proxmox, especially if you are running a bunch of services (and not virtual desktops), it’s much better to set up an automated way of creating a cloud-init template.
      You can run the script every now and then to download an updated image, load up some sensible defaults, then create a template of the VM.
      After that, you just clone the template, resize drives, tweak hardware settings, adjust any cloud-init settings, then boot the VM.
      It takes a while to sort out the script, after which you get consistent up-to-date cloud-init enabled templates.
      Then it’s like 2 minutes to clone and configure a VM from Proxmox’s web GUI.
      And you always get consistent ready-to-go VMs.

      You can even do it via the CLI, so you could Ansible/Terraform the whole process.
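
      For reference, the core of such a script is usually something like this (the image URL/filename, VMID 9000, and storage names are assumptions; check the current Debian cloud image name before using it):

        # Fetch a fresh Debian cloud image (the filename changes over time)
        wget https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2

        # Build a cloud-init enabled template VM (ID 9000 is just a convention)
        qm create 9000 --name debian-12-template --memory 2048 --cores 2 \
          --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci
        qm importdisk 9000 debian-12-genericcloud-amd64.qcow2 local-lvm
        qm set 9000 --scsi0 local-lvm:vm-9000-disk-0
        qm set 9000 --ide2 local-lvm:cloudinit
        qm set 9000 --boot order=scsi0 --serial0 socket --vga serial0
        qm template 9000

        # Later: clone, tweak, boot
        qm clone 9000 120 --name my-new-vm --full
        qm set 120 --ipconfig0 ip=dhcp --sshkeys ~/.ssh/id_ed25519.pub
        qm resize 120 scsi0 +20G
        qm start 120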

      • PuppyOSAndCoffee@lemmy.ml

        For sure.

        My point was more … the first time you ever boot a raw device, a display can be handy unless you know what you are doing. Once it survives a reboot…

        After that, if you need a GUI, just run an X server on your main rig and interact with your remote server as the client, without needing a display attached to it.
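
        In practice that’s just SSH with X11 forwarding enabled (assuming an X server is running locally and X11Forwarding is allowed on the remote side):

          # -X forwards X11; GUI apps started remotely appear on your local display
          ssh -X user@server
          xclock &   # e.g. the classic test app (x11-apps package on Debian)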