I have written before about the horror that is
systemd. I was just bitten again.
I have a server that, in essence, is a network attached storage (NAS) device. It stores encrypted images of other servers around the net. I let it grow old running a long-term-supported version of Ubuntu (14.04). It doesn’t do anything fancy, and support for 14.04 runs out next year (I think), so it seemed like a good time to update it.
I brought it up to 18.04.1, a process which took many hours but was otherwise uneventful… except…
The encrypted storage never actually appeared on the network. Everything mounts, no errors, but then the newly-mounted volumes just aren’t there. Weird. I finally discovered that
systemd immediately and silently unmounts the volumes as soon as they’re mounted.
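For anyone hitting the same wall: in my experience this behavior usually shows up when systemd's generated `.mount` units have drifted out of sync with `/etc/fstab`. Here's a rough diagnostic sketch, assuming a hypothetical mount point of `/srv/backups` (substitute your own path; systemd names the unit after the escaped path):

```shell
# See what systemd believes about the mount point
# (/srv/backups becomes the unit name srv-backups.mount):
systemctl status srv-backups.mount

# Watch the journal while mounting; the silent unmount
# generally leaves a trace here even though mount(8) reports success:
journalctl -f -u srv-backups.mount &
mount /srv/backups

# If /etc/fstab was edited since boot, regenerate systemd's
# mount units so its view matches the file:
systemctl daemon-reload
```

No guarantee this fixes every variant of the problem, but `journalctl` is at least where the "silent" part stops being silent.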
I discovered this bizarre thread describing the problem and the (total lack of) solution. In a nutshell, it's:
User: The mount command appears to succeed, returns no errors, but the volumes are unmounted as soon as they’re mounted. That’s a bug. If you get no errors, your volumes should be mounted, period. If your volume isn’t mounted, mount should return an error.
systemd dev: No, it’s not a bug because we do actually mount the volume, we just immediately and invisibly and irrevocably unmount it.
Wow. The level of arrogance is astounding. That’s like waving money in front of someone you’re in debt to, then claiming that you paid them. So it isn’t fixed, and it isn’t likely to be fixed, and I am redoubling my efforts to move my servers to