# PIVOT! (Again) — Remote Desktop on NixOS Without the RustDesk Drama — Part 14 of Building a Resilient Home Server Series
## Where We Left Off
Part 13 ended with the VM up, the ISO built, and the file sitting on `\\nixos2\isos`, visible from Windows. Clean stopping point. Then it hit me.
I have a server running VMs. I have SSH. What I don't have is any way to see what's actually happening on the desktop — inside a VM, at the console, wherever — without walking over and plugging in a monitor. That's fine until it isn't. The moment you want to watch a VM boot, or poke at something in the LXQT session, or verify that virt-manager actually looks the way you think it does, SSH isn't enough.
Back in Part 2, I tried to solve exactly this problem. The section was even called "RustDesk: The Build That Wouldn't." I pivoted to SSH and filed "deal with the desktop later" under future problems. Part 14 is future problems.
## Did RustDesk Ever Figure Themselves Out?
This felt worth checking before assuming. The short answer: still flaky for unattended/service mode on NixOS. The `-auth guess` workaround still breaks because its helper scripts expect `awk` and `netstat` in `PATH`, which NixOS doesn't put there. The `rustdesk --service` daemon still calls `sudo` internally, which conflicts with how NixOS handles setuid. There are community workarounds, none of them clean, and none of them are the "add one package and it works" story I wanted.
PIVOT.
## The Stack
x11vnc connects to the running X11 session. noVNC provides a browser-based VNC client. Nginx proxies it. Since nixos2 already has AdGuard handling DNS rewrites, Nginx with the virtualhost pattern established across 13 parts of this series, and Tailscale providing the secure remote path — the whole thing slots in as one more service. No new infrastructure. Just config.
## modules/services.nix
Two new systemd services. x11vnc connects to the running LXQT/X display and listens on localhost only. websockify proxies WebSocket connections from noVNC to x11vnc:
```nix
# ═══════════════════════════════════════════════════════════════════════════
# VNC / NOVNC - Remote Desktop Access
# ═══════════════════════════════════════════════════════════════════════════
systemd.services.x11vnc = {
  enable = true;
  description = "x11vnc VNC Server";
  after = [ "display-manager.service" ];
  wantedBy = [ "multi-user.target" ];
  serviceConfig = {
    Type = "simple";
    ExecStart = "${pkgs.x11vnc}/bin/x11vnc -display :0 -auth /run/lightdm/root/:0 -forever -noxdamage -repeat -rfbauth /etc/nixos/private/vncpasswd -rfbport 5900 -shared -localhost";
    Restart = "on-failure";
    RestartSec = "5s";
  };
};

systemd.services.novnc = {
  enable = true;
  description = "noVNC Web Client";
  after = [ "x11vnc.service" ];
  wantedBy = [ "multi-user.target" ];
  serviceConfig = {
    Type = "simple";
    ExecStart = "${pkgs.python3Packages.websockify}/bin/websockify --web ${pkgs.novnc}/share/webapps/novnc localhost:6080 localhost:5900";
    Restart = "on-failure";
    RestartSec = "5s";
  };
};
```
A few things worth noting here.
`-auth /run/lightdm/root/:0` is not negotiable. The obvious flag is `-auth guess`, which sounds like it would figure things out automatically. It does not. Its helper scripts call `awk` and `netstat`, which don't exist in the NixOS service PATH. It fails silently in a loop, x11vnc never actually binds to 5900, and websockify has nothing to connect to. The auth file for LightDM lives at `/run/lightdm/root/:0`. Use that directly.
`-localhost` means x11vnc never exposes VNC to the network. Only websockify can reach it, and websockify is only reachable through Nginx, which is only reachable over LAN or Tailscale. The whole chain stays internal.
The websockify `ExecStart` references `${pkgs.novnc}` for the web root. noVNC's HTML client files live at `/share/webapps/novnc` inside that package — websockify serves them statically and proxies WebSocket connections through to x11vnc. One process, two jobs.
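A quick way to sanity-check the chain after a rebuild — these are diagnostic commands to run on the server, not a script, and the expected addresses follow from the flags above:

```bash
# x11vnc should be bound to loopback only, because of -localhost:
ss -tlnp | grep 5900   # expect something listening on 127.0.0.1:5900

# websockify should be up for noVNC:
ss -tlnp | grep 6080   # expect something listening on 6080

# and the noVNC client page should be served:
curl -sI http://localhost:6080/vnc.html | head -n 1
```

If the 5900 check comes back empty, it's almost certainly the `-auth` problem described above.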
## modules/system.nix
Add to `environment.systemPackages`:
```nix
x11vnc
```
`novnc` doesn't need to be here — it's referenced via `${pkgs.novnc}` in the websockify `ExecStart`, and Nix resolves it at build time without it being in `systemPackages`.
## modules/nginx-virtualhosts.nix
Same pattern as every other service in this setup:
```nix
"vnc2.home" = {
  locations."/" = {
    proxyPass = "http://localhost:6080";
    proxyWebsockets = true;
    extraConfig = ''
      proxy_read_timeout 3600s;
      proxy_send_timeout 3600s;
    '';
  };
};
```
`proxyWebsockets = true` is not optional. noVNC communicates over WebSockets. Without it the connection reaches the page and then silently fails. The extended timeouts keep the connection alive during an idle session.
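For reference, `proxyWebsockets = true` makes the NixOS module emit roughly this raw Nginx config (paraphrased from what the module generates, not copied from the actual file):

```nginx
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;  # via a map on $http_upgrade
```

Those headers are what upgrade the HTTP connection to a WebSocket. Without them Nginx happily proxies the initial page load and then drops the protocol upgrade, which is why the failure looks so silent.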
## AdGuard DNS Rewrites
Same as every other service. I set this up on both servers — `vnc.home` for nixos and `vnc2.home` for nixos2:
```
vnc.home → 192.168.1.x (nixos)
vnc2.home → 192.168.1.x (nixos2)
```
The config is identical on both — same services, same nginx virtualhost pattern, just different hostnames. nixos2 got it first since that's where the VM work lives and where it was most immediately useful.
## The One Manual Step
Everything else is declarative. This one isn't. Before rebuilding, create the VNC password file:
```bash
nix-shell -p x11vnc --run "x11vnc -storepasswd /etc/nixos/private/vncpasswd"
```
It's a binary format, not plaintext like the other files in `/etc/nixos/private/`. x11vnc has to generate it. Do this over SSH before the first rebuild, or the x11vnc service will start and immediately fail to authenticate anything.
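If you're scripting a fresh rebuild, the whole manual step is roughly this — the `mkdir` and the `chmod` are assumptions from my own habits, not requirements of x11vnc (run as root; `-storepasswd` prompts for the password interactively):

```bash
# One-time setup, over SSH, before the first rebuild:
mkdir -p /etc/nixos/private
nix-shell -p x11vnc --run "x11vnc -storepasswd /etc/nixos/private/vncpasswd"
chmod 600 /etc/nixos/private/vncpasswd  # binary file; keep it root-readable only
```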
Write it down somewhere. This is the step you'll forget on a fresh rebuild.
## Monitoring
One line added to the blackbox targets in `modules/monitoring.nix`. If noVNC responds, the whole stack is working:
```nix
targets = lib.flatten [
  (lib.optional config.services.syncthing.enable "http://127.0.0.1:8384")
  (lib.optional (config.services.adguardhome.enable or false) "http://127.0.0.1:3000")
  (lib.optional (config.services.vaultwarden.enable or false) "https://${secrets.tailscaleHostname}")
  (lib.optional (config.systemd.services ? x11vnc) "http://127.0.0.1:6080") # ← new
];
```
## Homepage Entry
Into the Services section alongside everything else:
```nix
++ (lib.optionals (config.systemd.services ? x11vnc) [
  {
    "Remote Desktop" = {
      description = "noVNC remote desktop";
      href = "http://vnc2.home/vnc.html";
      icon = "mdi-monitor-screenshot";
      ping = "http://127.0.0.1:6080";
    };
  }
])
```
## Things That Went Wrong
`-auth guess` looks like the right flag. It is not, on NixOS, for the reasons already covered. The symptom is x11vnc running and logging `--- x11vnc loop: sleeping 2000 ms ---` forever without ever binding to 5900. `ss -tlnp | grep 5900` returning nothing is the tell.
The novnc service initially had the wrong ExecStart — a copy/paste casualty from an earlier iteration left it running x11vnc again instead of websockify. `systemctl status novnc` showing an x11vnc process in the cgroup was the giveaway. Worth checking if the connection fails and both services claim to be running.
`405 Method Not Allowed` from the browser means you hit websockify directly without the noVNC HTML client in front of it. That's a missing `--web` flag on the websockify ExecStart, or hitting the wrong URL. Use `/vnc.html` not just `/`.
noVNC uses WebSockets. If Nginx is proxying it without `proxyWebsockets = true`, the page loads and then the connection fails with no useful error. The fix is one line in the virtualhost config.
## How It Went
Rebuild. Create password file. Restart x11vnc. Check `ss -tlnp | grep 5900` — listening. Hit `vnc2.home/vnc.html` in the browser. Enter password. LXQT desktop.
That's it. The whole thing that took days of fighting in Part 2, that I'd been putting off for 12 parts since, done in an afternoon. The difference is 13 parts of infrastructure sitting underneath it — Nginx already wired up, AdGuard already handling DNS, Tailscale already securing the path, monitoring already watching everything. New service just slots in.
It went so smoothly that I took ten minutes and added it to nixos as well. Same config, `vnc.home` instead of `vnc2.home`, done. Both servers have remote desktop access now. The infrastructure either works or it doesn't — turns out it works.
---
## The ISO Was the Point, So Let's Actually Use It
The VM is up, noVNC is working, the desktop is accessible. The original reason any of this exists is to build ISOs. So let's build one and actually test it.
`build-iso.sh` ran inside the iso-builder VM. The ISO landed on `\\nixos2\isos` via virtiofs exactly as designed. Booted it in a fresh VM. And immediately hit problems. Several of them. Problems that had been quietly hiding for a while.
## Things That Were Hiding in the Config
### The Missing Imports
First error out of the gate on the fresh install attempt:
```
'nginx-virtualhosts.nix' is too short to be a valid store path
```
Not a missing file. A path resolution failure. The install configuration's imports block had grown over time — `timemachine.nix`, `nginx-virtualhosts.nix`, `vm.nix`, `backups.nix`, `homepage.nix` — but nobody had updated the list in `configuration-uefi.nix`. The newer modules just weren't there.
Adding them to the imports block was the obvious fix. But the nginx-virtualhosts.nix error persisted through a fresh ISO build. The reason this had never been caught: the old structure had some modules importing other modules internally, which masked gaps in the explicit imports list. When I consolidated everything into one clean imports block in configuration-uefi.nix — one place, all modules, no nested imports hunting — the missing entries became immediately visible. The nginx-virtualhosts one just happened to blow up loudest because the relative path issue was sitting on top of it.
`services.nix` had a relative import:
```nix
"./nginx-virtualhosts.nix"
```
That resolves fine from `/etc/nixos`. From the Nix store during an install evaluation it has nowhere to go. Fix is removing it from `services.nix` entirely — `configuration-uefi.nix` already imports it directly. It doesn't need to be in both places.
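After the cleanup, the consolidated imports block in `configuration-uefi.nix` looks something like this — module names are the ones mentioned in this series, but the exact list in the repo may differ:

```nix
imports = [
  "${modulesDir}/system.nix"
  "${modulesDir}/services.nix"
  "${modulesDir}/nginx-virtualhosts.nix"
  "${modulesDir}/timemachine.nix"
  "${modulesDir}/vm.nix"
  "${modulesDir}/backups.nix"
  "${modulesDir}/homepage.nix"
  "${modulesDir}/monitoring.nix"
];
```

One place, all modules, no nested imports hunting — which is exactly what made the missing entries visible.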
### The Boot Config Copy-Paste Error
After the import fix, fresh ISO, fresh install attempt. New error:
```
grub-install: error: cannot find a GRUB drive for /dev/sda
```
GRUB. On a UEFI VM. The imports list in `configuration-uefi.nix` told the whole story:
```nix
"${modulesDir}/boot-bios.nix" # ← UEFI boot configuration
```
The comment said UEFI. The filename said BIOS. Classic copy-paste — the comment got updated, the filename didn't. It had never been caught because the old dev machine needed BIOS anyway. VirtualBox on Windows, BIOS mode, `boot-bios.nix` was correct and nobody had reason to look at it twice. Swap `boot-bios.nix` for `boot-uefi.nix`, rebuild the ISO.
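For comparison, a `boot-uefi.nix` typically contains something like this — a sketch of the standard NixOS UEFI setup, not necessarily the repo's actual file:

```nix
{ ... }:
{
  # systemd-boot is the usual UEFI bootloader on NixOS;
  # no GRUB, no guessing at /dev/sda
  boot.loader.systemd-boot.enable = true;
  boot.loader.efi.canTouchEfiVariables = true;
}
```

A BIOS config, by contrast, would enable GRUB with an explicit `boot.loader.grub.device` — which is exactly what was failing to find a drive in the UEFI VM.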
### The Samba Migration
While sorting the boot config, `nixos-rebuild switch` on nixos2 itself hit a new error:
```
The option definition `services.samba.extraConfig' in timemachine.nix no longer has any effect; please remove it.
```
`services.samba.extraConfig` was removed on unstable. The global Samba settings that used to live in a freeform string block now need to be in `services.samba.settings.global` as proper key-value pairs. `securityType = "user"` as a top-level option is gone too — `security = "user"` now lives inside the global block.
This also surfaced that `services.samba` was fully defined in both `services.nix` and `timemachine.nix`, each with their own global block. NixOS merges settings across modules, so it wasn't two separate Samba instances, but it was two conflicting values for `fruit:time machine max size` — 1500G vs 2000G. Time to tidy up properly.
The split now is clean. `services.nix` owns the full Samba config — global settings, samba-wsdd, the isos share:
```nix
services.samba = {
  enable = true;
  settings = {
    global = {
      "workgroup" = "WORKGROUP";
      "server string" = "nixos2";
      "server role" = "standalone server";
      "security" = "user";
      "fruit:metadata" = "stream";
      "fruit:model" = "MacSamba";
      "fruit:posix_rename" = "yes";
      "fruit:veto_appledouble" = "no";
      "fruit:wipe_intentionally_left_blank_rfork" = "yes";
      "fruit:delete_empty_adfiles" = "yes";
    };
    isos = {
      "path" = "/mnt/nextcloud-data/isos";
      "comment" = "NixOS ISO builds";
      "browseable" = "yes";
      "writable" = "yes";
      "valid users" = "ppb1701";
      "create mask" = "0644";
      "directory mask" = "0755";
      "vfs objects" = "catia";
    };
  };
};

services.samba-wsdd.enable = true;
```
`timemachine.nix` holds only what's specific to Time Machine — the share definition, tmuser, and the directory:
```nix
services.samba = {
  enable = true;
  settings = {
    timemachine = {
      "path" = "/mnt/nextcloud-data/timemachine";
      "browseable" = "yes";
      "writable" = "yes";
      "valid users" = "tmuser";
      "vfs objects" = "catia fruit streams_xattr";
      "fruit:time machine" = "yes";
      "fruit:time machine max size" = "2000G";
    };
  };
};
```
NixOS merges the settings from both files into one running Samba instance. Each file owns what's relevant to it.
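One way to confirm the merge came out right is to ask Samba itself, assuming `testparm` is available on the host:

```bash
# Dump the effective config Samba is actually running with.
# The isos and timemachine shares should each appear once,
# with a single "fruit:time machine max size" value.
testparm -s /etc/samba/smb.conf | grep -i 'fruit:time machine'
```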
## It Booted
Fresh ISO with all the fixes in. Installed. Booted clean. The configuration that lands on a fresh install now actually matches what's in the repo — all modules imported, boot config matches the firmware, Samba migration done.
## Lessons Learned
- `-auth guess` does not work on NixOS. Use `-auth /run/lightdm/root/:0` and skip the debugging session.
- `proxyWebsockets = true` in Nginx is not optional for noVNC. The page loads without it. The connection doesn't.
- The vncpasswd file is binary, not plaintext. Generate it with `x11vnc -storepasswd` before the first rebuild. Document it somewhere you'll find it.
- Test an actual install periodically. Broken imports can hide in your config for months if the service they belong to is disabled during every build that would have caught them. The only way to know the ISO actually works is to use it.
- Read both the comment and the filename. They should match.
- `services.samba.extraConfig` is gone on unstable. Everything moves to `services.samba.settings.global`. The error message at least tells you exactly what to fix.
- When the infrastructure exists, adding things is easy. noVNC took an afternoon. In Part 2 it was days and I gave up. Same goal, completely different experience because of what's underneath it now.
## What's Next
The ISO builds, installs, and boots correctly. The desktop is accessible without a monitor. The stack is in a genuinely good place.
The imports situation also flagged something worth cleaning up — both boot configs and the main config maintain their own imports lists, which means adding a new module is currently a multi-file edit. At some point I'll probably abstract that into a shared `imports.nix` — one edit to add a module instead of hunting through multiple files, and it'd make a future flakes migration considerably less annoying too.
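A minimal sketch of what that could look like — this is a hypothetical file, not something in the repo yet, and the module list is just the ones from this series:

```nix
# modules/imports.nix — hypothetical shared module list
{ ... }:
{
  imports = [
    ./system.nix
    ./services.nix
    ./nginx-virtualhosts.nix
    ./timemachine.nix
    ./vm.nix
    ./backups.nix
    ./homepage.nix
    ./monitoring.nix
  ];
}
```

Then `configuration-uefi.nix` and both boot configs would each import this one file, and adding a module becomes a single edit.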
Part 15 will be whatever breaks first or whatever I decide to poke at next. At this rate, probably both simultaneously.
Find me at @ppb1701@ppb.social on Mastodon if you're following along, or if `-auth guess` just cost you an afternoon.
Main Server (nixos): Codeberg
Second Server (nixos2): Codeberg
The ISO can be found here.