

I'm not familiar with deploying client-side certificates, unfortunately. I hope it works; if the certificate lives at the OS level and the application actually uses it, I feel it will work… not sure. The in-browser case at least feels straightforward.
Reading Jellyfin’s issue tracker, it’s clear its web UI and API cannot be allowed to talk to the general internet.
I’d push for a VPN solution first: Tailscale or WireGuard. If you’re happy with Cloudflare sniffing all your traffic, and with the chance that they may take it away suddenly someday, use their Tunnel with authentication.
The only other novel solution I’d suggest is putting Jellyfin behind an Authentik wall (not OIDC, though you can use OIDC for users after the wall). That puts the security burden on Authentik, and security is its only job, so hopefully it does it well. I’d use that if a VPN (Tailscale or WireGuard) is problematic for access. The downside is that Jellyfin apps will not be able to connect; only web browsers that can log into the Authentik web UI wall.
The flow would go Caddy (or another reverse proxy) -> Authentik wall for Jellyfin -> Jellyfin.
I’d put everything in Docker. I’d put Caddy and Authentik in a VM as a DMZ (Incus from the Zabbly repo, with its web UI, to manage the VM). In the compose file I’d set all three services to read-only, a non-root user (user: "####:####"), cap_drop: ALL, no-new-privileges, and limited named networks, roughly as sketched below.
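A minimal sketch of what that hardening could look like for the Jellyfin service. The image tag, paths (./jellyfin/…, /srv/media), the 1000:1000 UID/GID, and the "proxy" network name are all placeholder assumptions; Caddy and Authentik would get the same treatment in their own service blocks:

```yaml
# Hardened compose sketch (placeholder paths, UID/GID, and network names)
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    user: "1000:1000"                 # run as a non-root user that owns the config and media
    read_only: true                   # root filesystem is read-only
    cap_drop:
      - ALL                           # drop every Linux capability
    security_opt:
      - no-new-privileges:true        # block privilege escalation
    tmpfs:
      - /tmp                          # read-only containers still need scratch space
    volumes:
      - ./jellyfin/config:/config
      - ./jellyfin/cache:/cache
      - /srv/media:/media:ro          # assumed media path, mounted read-only
    networks:
      - proxy                         # only the reverse-proxy network, no default bridge

networks:
  proxy:
    name: proxy                       # shared with Caddy/Authentik so only they can reach Jellyfin
```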
Podman quadlets would be even better for security than Docker, but there’s less help out there for them (for now). Do Docker and get something working to start, then grow from there.
I’m looking at OPNsense on an Incus VM soon; what was your fight there? Good to know what I’ll hit ;)
Agreed on that path - some networking (like mimicking Proxmox’s bridged connections, which give VMs their own MAC/IP) takes effort to find a solution for. But the basic LXC/VM-shares-your-IP setup works super easily, and the scripting ability is great. Plus it doesn’t feel like a heavy yoke on your system that drives it; it’s just another application! I feel it’s close enough, and when you get it where you want it, it’s perfect. I assume they’ll get “one click” solutions for the harder stuff baked in as they get more attention and traction.
If you’ve got Debian already installed, I cannot resist advocating for Incus (stable branch from the Zabbly repo, with the web UI: https://blog.simos.info/how-to-install-and-setup-the-incus-web-ui/) in lieu of Proxmox. It does the same thing, but you don’t have to rip out the kernel Debian uses.
With Debian 13 you have access to Podman quadlets; use those for any non-VM needs. The ease of Docker compose files removes most of the reason to run programs in LXC containers, and Podman removes the reason to run Docker inside an LXC. That leaves LXC only for programs that aren’t containerized, VMs for the security DMZ, and Podman for the bulk of what you want to run.
Good luck!
Right, right, things don’t just have one… from searching I’ve found “SLAAC assisted mode”, which lets the router let SLAAC do its thing while still being able to declare addresses for a server. Thanks for that tiny note!
I wanted Jellyfin on its own IP so I could think about implementing VLANs. I haven’t yet, and I’m not sure what I did is even needed. But I did do it! You very likely don’t need to.
There are likely guides on enabling Jellyfin hardware acceleration on your Asustor NAS - so just follow them!
I do try to set up separate networks for each service.
On one server I have a monolithic docker compose file with a ton of networks defined to keep services from talking to the internet or to each other when it’s not useful (the PDF converter is prevented from talking to the internet or the Authentik database, for example). That approach makes the most sense there and has the most power.
On this server I have each service split out with its own docker compose file. The network bit makes more sense for services that have an external database and other pieces; it lets me set things up so only the service can talk to its database and the database cannot reach the internet at large (by adding an ‘internal: true’ to the networks: section, sketched below). In this case, yes, the PDF converter can talk to other services, and I’d need to block its internet access at the router somehow.
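A minimal sketch of that layout, with made-up names (app, app-db), a placeholder image, and a placeholder port standing in for a real service and its database:

```yaml
# Per-service compose sketch: the database sits on an internal-only network
services:
  app:
    image: example/app:latest          # placeholder image
    ports:
      - "8080:8080"                    # exposed so other stacks (e.g. Authentik) reach it via the host
    networks:
      - default                        # normal outbound access
      - backend                        # private link to the database

  app-db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me     # placeholder secret
    volumes:
      - ./db:/var/lib/postgresql/data
    networks:
      - backend                        # only reachable from 'app'

networks:
  backend:
    internal: true                     # no route out to the internet at large
```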
The monolithic method gets more annoying to deal with as services multiply, by virtue of a gigantic docker compose file and the up/down time (especially for services that don’t acknowledge shutdown commands). But it lets me use fine-grained networking within the one compose file.
With each service on its own, it exposes a port and things talk to it from there. So instead of an internal Docker network letting Authentik talk to a service, Authentik just looks up the address of the service. I don’t notice any perceptible difference in lag.
Good to know; I didn’t know IPv6 could come with efficiency gains. Makes sense, since the designers had a beat to think about why IPv4 sucks. I’ll avoid NAT on IPv6.
Got it: ULA for everything that doesn’t care, one GUA for the server. When everything else starts to care about the lack of IPv6 or has routing issues, convert the ULAs to GUAs and rock ’n’ roll.
Thanks for providing a sane way to approach it slowly and methodically!
I do appreciate you taking the time to write that up! Is the 50.50.0.0/22 crossing US and EU IPv4 allocations? From searching it looks like it’s around the boundary between US and Germany allocations. Interesting, I had no idea IP anonymization existed or was applied in such a haphazard way
Thanks for writing this up, really highlights the effective differences.
So for the internal delegation I’d SLAAC it and let things “just work”, or use DHCPv6 if I cared to specify addresses (which I will need to do to give a server a static IPv6 address it can be reached at). Thanks again!
Thanks for taking the time to go into detail on this; it helps, because I just haven’t been able to turn the acronyms into actionable meaning from reading blogs and posts alone.
How do things outside the LAN talk to things inside the LAN that have ULA addresses (which I’m assuming are the equivalent of the 10.0.0.0/8 idea)? Will devices that are given ULA addresses be NAT’d just like IPv4, or will they not be able to talk to the outside world over IPv6?
Edit: I’m now getting more of what you said; you answered this: ULA addresses will not be able to talk to the outside world over IPv6, so those devices will be IPv4-only even to websites that also use IPv6. The follow-on question would then be: isn’t kludging NAT onto IPv6 a better solution than ULA addresses? Or is the clear answer to just use IPv6 as intended and let the devices handle their privacy with IPv6 privacy extensions?
I see now that a limitation I only just understood for IPv4 (the router can expose a given port from only one device) isn’t a thing for IPv6 working without NAT: every device on a LAN can be given a globally routable address and expose the same port. Interesting; in my home I don’t think I’d ever run into that, but I can see issues like that piling up quickly in big deployments.
Thanks for taking the time to explain all of this in detail!
I gather people talk like NAT is a rung of hell, but I guess it works well enough that I never think about it. Maybe it becomes shittastic with multiple layers of NAT? With one router it seems straightforward to set up port forwarding.
I do not understand why I’d want better inbound connections - but maybe if I get hit with CGNAT then I’ll understand?
Mobile devices are largely IPv6-only now, which messes with VPN connections back to home. The IPv6-to-IPv4 conversion seems to be shoddy on my mobile carrier.
Not here for what it represents, just want it to work.
I haven’t run into NAT issues that I’ve noticed; would IPv6 avoid the CGNAT issues people complain about (if/when that hits me in the future)?
I know, but when you get captcha’d all of the time you feel like you’re kinda winning (but not really, of course). I don’t want them to have a nice fingerprint of my devices without having to try at all. I see others have mentioned “IPv6 privacy extensions”, which let devices cycle through the huge IPv6 address space to keep a semblance of privacy - that seems to be the “default” solution.
I see; I saw someone else mention “IPv6 privacy extensions” too. So basically, in IPv6 land it’s up to the individual devices to handle privacy instead of the router doing it for them.
I had never picked up on this, thank you for name dropping what to look for!
I see people say “not worth it” but never expound on what exactly makes it not worth it.
The most I get is a vibe - to use a metaphor, it’s like Python people judging you for not doing things the “Pythonic” way - but of course that’s silly. There must be more to it, but I’ve never seen interoperability issues called out.
Thank you for the guide! It’s very straightforward and looks hella easy to implement. From reading it I would not have guessed it would do what I wished
Thanks for the links! I had no idea there were special settings needed