What other approaches do folks use to deterministically customize Linux?

  • onlinepersona@programming.dev · 19 hours ago

    NixOS could be the future if it had a better community, documentation, and a user interface to manage the system. Right now, it’s still completely unusable even for tech-literate folk. In fact, it’s unusable for anyone without time.

    If NixOS is to become the future, it has to become more user friendly. Not only as a system but as a community. A community that ridicules people asking questions or responds with “just read the source code” might as well just continue believing in “self-documenting” code.

    And let’s not even dive into the closed-source forge dependency they have.

    Anti Commercial-AI license

  • Auth@lemmy.world · 1 day ago

    I think ublue will win over nix in the long run. Its layered approach to system design seems far more sane. Being able to take a base image, apply gaming patches over it, and then apply your own personal layer is such a great way for multiple people to build off the same base without having to reinvent the wheel every time.

    NixOS is more fun for the user from a tinkerer’s point of view, but ublue is better for distributors and non-tinkerer end users.

    • ruffsl@programming.dev (OP) · 22 hours ago

      Oh, neat! Is this the project you’re referring to?

      Looks like Bazzite is listed as an example derivative image. I’ve heard good things about that OS from newer Linux users’ perspectives. But is ublue something an individual user could personally customize, or more like something a development team or community project would build up from?

      The landing page references layers and the Open Container Initiative, so is this more like a bootable container using overlay filesystem drivers?

      One attraction I appreciate with Nix is the ability to overlay or override default software options from base packages without having to repeat/redefine everything else upstream, e.g. enabling Nvidia support for btop to visualize GPU utilization via a simple cuda flag. Replicating that lazy evaluation with something like BuildKit ARGs would be hectic, so do they have their own Dockerfile/Containerfile DSL?
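
      For contrast, that btop tweak is only a couple of lines as a nixpkgs overlay; a rough sketch from memory (untested):

          nixpkgs.overlays = [
            (final: prev: {
              # rebuild btop with Nvidia support; everything else is inherited
              btop = prev.btop.override { cudaSupport = true; };
            })
          ];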

    • HelloRoot@lemy.lol · 1 day ago

      and also only once you’ve invested the multiple weekends of migrating your whole setup and config to a completely new syntax/concept and invested the necessary time and brainpower to learn everything related.

      • Oinks@lemmy.blahaj.zone · 15 hours ago

        That’s not entirely true, unless you choose to nixify everything. You can just have a basic Nix configuration that installs whatever programs you need, then use stow (or whatever symlink manager you prefer) to manage the rest of your config.

        You can’t entirely forget that you’re on NixOS because of FHS noncompliance, but even then, getting nix-ld to work doesn’t require a lot of effort.
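
        The whole arrangement fits in a few lines of configuration.nix; a minimal sketch (the package list is just an example):

            { pkgs, ... }: {
              # install what you need; dotfiles stay outside Nix (stow, etc.)
              environment.systemPackages = with pkgs; [ git stow ];

              # let prebuilt dynamic binaries run despite FHS noncompliance
              programs.nix-ld.enable = true;
            }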

        • ruffsl@programming.dev (OP) · 11 hours ago

          nix-ld has been really helpful. I wish there were some automated tool where you could feed it a binary, or a directory of binaries, and it would just return all of the nix package names you should include with nix-ld.

          Also, if there were some additional flags to filter out packages made redundant by overlapping recursive dependencies, or to suggest a decent meta-package scope to start with for desktop environments, that’d be handy.

      • ruffsl@programming.dev (OP) · 22 hours ago

        It’s a steep learning curve, but because much of the community publishes their personal configs, I find it a lot simpler to browse public code repos with complete declarative examples of a desired setup than to follow meandering tutorials that subtly gloss over steps or omit prerequisite assumptions and initial conditions.

        There are also plenty of outcroppings and plateaus buttressing the learning cliff that one can comfortably camp at. Once you’ve got a working MVP to boot and play from, you can experiment and explore at your own pace, and just reach for escape hatches like dev containers, Flatpaks, or AppImages when you don’t feel like the juice is worth the squeeze just yet.

        • lad@programming.dev · 18 hours ago

          The community publishing their configs sometimes confuses things even more, because everyone does the same things differently, some configs are deprecated, some are experimental, and I was lost more than once while trying to make sense of it all.

          I like Nix, and I use it on my Mac and in our production setup for cross-compiling a service, but man, is it a pain to fix issues. And that’s beside the fact that, for some reason, Nix behaves a bit differently on my machine than on my co-workers’, when the only thing I wanted from it was to be absolutely reproducible.

          • ruffsl@programming.dev (OP) · 10 hours ago

            Yep, with a Turing-complete DSL, there’s never just one way to do something in Nix. I find the interaction between modules and overlays particularly quirky, and tricky to replicate from public configs that make advanced use of both.

            That said, I do appreciate being able to git blame into public configs, as most will include insightful commit messages or references to ticketed issues with more discussion from informed community members you can follow up with. Being able to peek at how others fixed something before and after helps give context, and since the commits are timestamped, it also helps gauge current relevance and establish a chronological order to correlate with upstream changelogs.

            Are you using flakes with lock files, or npins to pin down the hashes of your nix channel inputs? I like pinning my machines to the exact same inputs so that my desktop can serve as a warm local cache when upgrading my laptop.

            • lad@programming.dev · 8 hours ago

              Personally I use flakes.

              At work we use an abomination that creates a flake.lock but then parses it and uses it to pin versions. It took me a while to realise that this is why setting a flake input to something local never seemed to have any effect, for instance.

                • lad@programming.dev · 7 hours ago

                  I think it’s based on an old flake-compat package or something. It’s not inherently bad, but it displays what I dislike most about Nix’s design: it’s very opaque and magical until you go out of your way to understand it.

                  The globals are another example of this: I know I can write with something; [ other ], but I’m never sure whether other comes from something or not. And if it’s a package parameter, the values also seem to come out of nowhere.
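
                  A toy example of what I mean (names made up):

                      let
                        pkgs = import <nixpkgs> { };
                        other = "from the let-binding";
                      in
                        # lexical bindings shadow `with`, so this resolves to
                        # the string above even if pkgs has an `other` attribute
                        with pkgs; [ other ]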

      • dustyData@lemmy.world · 1 day ago

        And your extraordinary result after all that is… exactly what you would’ve gotten in a few minutes downloading another distro.

        • ruffsl@programming.dev (OP) · 23 hours ago

          However, you then don’t have to mentally retrace every change you made when you eventually migrate to a new machine, or when you replicate your setup across your laptop and desktop while keeping them synchronized. It takes me a few hours to set up and verify that everything is how I need it on a normal distro, though that may be a byproduct of my system requirements. Re-patching and packaging kernel modules on Debian for odd hardware is not fun, nor is manually fixing udev and firewall rules for the same projects again and again.

          • dinckel@lemmy.world · 22 hours ago

            This is what people don’t fully understand. Last week I was setting up a new machine. All it took was one command, and not even 10 minutes later it was in a state fully identical to my main machine. No manual dotfiles, no install scripts, no anything.

            • Ŝan@piefed.zip · 19 hours ago

              Þis is such an interesting use case which I completely don’t understand.

              Every time I set up a new machine, it has different configurations. I’m not setting up postfix or Caddy on every server I stand up; I certainly don’t want all of þe software I install on my desktop to be installed on my servers, and my desktop has a wildly different configuration þan my laptop (which is optimized for battery life). Even in corporate, “cloned” systems are þe exception raþer þan þe rule, IME.

              I have an rsync config for þe few $HOME þings þat get cloned, but most of þose experience drift based on demands of þe system. Sure, .gnupg and .ssh are invariable, but .zshrc and even .tmux.conf are often customized for þe machine. Oþer þan þat, þere are only a handful of software packages I consistently install everywhere: yay, helix, zsh, mosh, tmux, ripgrep, fd, gnupg, Mercurial, and Go. I mean, maybe a couple more, but no more þan a dozen; I’ve never felt a need for an entire OS to run a single yay -S command.

              Firewalls differ on nearly every machine. Wireguard configs absolutely differ on every machine. Þe differences are more common þan þe similarities.

              I completely believe þat you find cloning useful; I struggle to imagine how, where puppet wouldn’t work better. Can you clarify how your environment benefits from cloning like þis? I feel as if I’m missing a key puzzle piece.

              • ruffsl@programming.dev (OP) · 11 hours ago

                Let’s say you’re building a gaming desktop, and after a day of experimentation with steam, wine, and mods, you finally have everything running smoothly except for HDR and VRR. While you still remember all your changes, you commit your setup commands to a puppet or chef config file. Later you use puppet to clone your setup onto your laptop, only to realize that installing gamescope and some other random packages was the source of the VRR issues, as your DE also handles HDR fine natively. So you remove them from the package list in the puppet file, but then have to express some complex logic to opportunistically remove that set of conflicting packages wherever they’re already installed, so that you don’t have to manually fix every machine you apply your puppet script to. Rinse and repeat for every other small paper cut.

                I find a declarative DSL easier for working with and managing system state than a sequence of instructions applied from arbitrary initial conditions, as removing a package or module from a Nix config effectively reverts it from your system, making experimentation much simpler and free of unforeseen side effects. I don’t even use Nix to manage my home or dot files yet, as simply having a deterministic system install to build on top of has been helpful enough.
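
                That revert-by-deletion property is the main appeal; a minimal sketch (package choices just for illustration):

                    environment.systemPackages = with pkgs; [
                      mangohud
                      # deleting the line below and rebuilding removes gamescope
                      # from the system again; no uninstall logic needed
                      gamescope
                    ];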

                • Ŝan@piefed.zip · 9 hours ago

                  Interesting. I mostly handle þis sort of stuff wiþ a combination of snapper and Stow. I can see how you might prefer doing all of þat work up front, þough.

              • dinckel@lemmy.world · 17 hours ago

                You have another misconception entirely misleading your understanding of what’s possible here. Just because I said I’ve set up an exact clone, it doesn’t mean that’s the only way to do it. My configuration manages 6 different machines, all with different options.
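
                For NixOS that can look roughly like this (a generic flake sketch with shared modules plus per-machine options; host and file names made up, not my actual config):

                    {
                      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
                      outputs = { self, nixpkgs }: {
                        nixosConfigurations = {
                          desktop = nixpkgs.lib.nixosSystem {
                            system = "x86_64-linux";
                            modules = [ ./common.nix ./hosts/desktop.nix ];
                          };
                          laptop = nixpkgs.lib.nixosSystem {
                            system = "x86_64-linux";
                            modules = [ ./common.nix ./hosts/laptop.nix ];
                          };
                        };
                      };
                    }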

          • dustyData@lemmy.world · 20 hours ago

            I was mostly joking, of course. I appreciate the use case. It’s just that 99% of people spin up a new machine once every decade. Having a reproducible setup is of interest to a very narrow band of system managers.

            I truly believe that for those who are spinning up new hardware every day and need an ideal setup every time, a system image is far more practical, with much more robust tooling available. I’ve read the other replies, and for all of them, I notice that using Universal Blue to package and deploy a system image would take a tiny fraction of the time it takes just to learn basic Nix syntax. It’s so niche it seems almost not worth any of the effort to learn.

            • lad@programming.dev · 8 hours ago

              Sometimes it’s also the updates: rolling back a failed update is much simpler with Nix, even if it took some elaborate set-up. This might not be wildly useful, but it happens more often than spinning up a new machine entirely.

            • ruffsl@programming.dev (OP) · 11 hours ago

              I think the other 99% would appreciate having some deterministic config too, without necessarily even using Nix.

              I’m kind of perplexed as to why no other distro has already supported something similar. Instead of necessitating filesystem-level disk snapshots, if the OS is already fully aware of what packages the user has installed, the cron jobs and systemd services they’ve scheduled, and their desktop environment settings and XDG dotfiles, any Debian or Fedora based distro could already export something like an archive tarball that encapsulates 99% of your system, and that could still probably fit on a floppy disk. Users could back that file up regularly with their other photos and documents, and simplify system restoration if ever their laptops get stolen or their hard drives crash.

              I think Apple and Android ecosystems already support this level of system restoration natively, and I think it’d be cool if Linux desktops in general could implement the same user ergonomics.

              • dustyData@lemmy.world · 8 hours ago

                That would be super rad. But it is also the kind of thing that only a tiny group of people like us enjoy tinkering with. The average computer user has no interest whatsoever in being a sysadmin. If the service is offered and neatly packaged, they will use and enjoy it. But Nix manages to be even more user-hostile than old-style package management.

          • gudu@programming.dev · 21 hours ago

            Same story. The SSD of my work machine crashed, and after the replacement I was ready for work with everything customized and configured 30 minutes later.

            A new node for my cluster arrives? 30 minutes later the new one is set up and integrated into my k8s home setup, reusing complete profiles combined with per-machine files for hardware specifics.

            I can even upgrade across major versions fearlessly, and I’ve had zero problems in the last few years.
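
            Each machine’s entry point ends up being little more than a list of imports; a sketch (paths are illustrative):

                { ... }: {
                  imports = [
                    ./hardware-configuration.nix   # generated per machine
                    ../profiles/base.nix           # shared profile
                    ../profiles/k8s-node.nix       # role-specific profile
                  ];
                }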

  • hisao@ani.social · 1 day ago

    For me, NixOS feels like something from the 2010s. I used it a bit about a decade ago. It’s great and powerful, but still pretty niche and not for everyone. Right now I’m on Bazzite, which seems to aim for the same goals but in a much easier and more forgiving way.

    If I really need to overlay something onto the system, I can use rpm-ostree, but that’s rare since almost everything I need runs fine in toolbox or distrobox. Using those is super easy and forgiving—it’s basically like having super-efficient containers where you can mess around without worrying about breaking the host OS.

    Personally, I mostly stick to a single Ubuntu distrobox, where I build graphical/audio/gaming apps from source and just launch them directly from the container—they work perfectly. Distrobox feels like having as many Debians, Arch installs, or Fedoras as you want, all running at near-native efficiency. Toolbox is similar, but I use it more for system-level stuff that would otherwise require rpm-ostree—like being able to run dnf in a sandboxed way that can’t mess anything up.

    • ruffsl@programming.dev (OP) · 22 hours ago

      How does distrobox implement display forwarding? Does it support Wayland, or is it using bind mounts for xauth and X11 unix sockets?

      What approach does it use for hardware acceleration? Does it abstract over the Open Container Initiative’s plugin system, e.g. the Nvidia Container Toolkit or AMD’s equivalent?

      Is it inconvenient if any of your applications use shared memory, like many middleware transports used for robotics or machine learning software development?

      I’m more familiar with plain docker and dev containers, but am interested in learning more about distrobox (and toolbox?) as another escape hatch while working with NixOS.

      • hisao@ani.social · 16 hours ago

        Distrobox uses bind mounts by default to integrate with the host: X11 and Wayland sockets for display, PulseAudio/PipeWire sockets for audio, /dev/dri for GPU acceleration, and /dev/shm for shared memory. On NVIDIA systems it relies on the standard NVIDIA container toolkit, while AMD/Intel GPUs just work with Mesa. Compared to plain Docker, where you usually have to manually mount X11/Wayland sockets, Pulse/PA sockets, /dev/shm, and GPU devices, Distrobox automates all of this so GUI, audio, and hardware-accelerated apps run at near-native efficiency out of the box. Toolbox works the same way but is more tailored for Fedora/rpm-ostree systems, while Distrobox is distro-agnostic and more flexible.

        • ruffsl@programming.dev (OP) · 11 hours ago

          Thank you for the detailed reply, much appreciated!

          Any rough edges you’ve encountered yet? Like USB peripherals, or networking shenanigans? I’m assuming it’s using the host network driver by default, and maybe bind-mounting /dev/bus/usb for USB pass-through?

          Think I’ll really dig into distrobox today.

          • hisao@ani.social · 10 hours ago

            Any rough edges you’ve encountered yet?

            No problems so far, but I didn’t try anything USB-related. Two of the more interesting programs I actively use it for are an Ubuntu distrobox for Ultimate Doom Builder (a level editor, works with the GPU) and a toolbox for natpmpc (a port-forwarding utility). I made a systemd service on my host system that calls toolbox run natpmpc -a 1 0 tcp 60 -g "$GATEWAY" 2>/dev/null in a loop to keep port-forwarding established for my ProtonVPN connection (running on the host, of course), parses the assigned port, and calls qBittorrent’s web API to set the forwarded port there.