tarneo

joined 1 year ago
[–] [email protected] 2 points 10 months ago

Framaforms + framacalc.

[–] [email protected] 4 points 10 months ago

My servers are named after Spanish words the humorist El Risitas says in his legendary video where he laughs for no real reason.

The biggest server is named "cocinero", because I can (jokingly) easily imagine a very fat cook.

Then there is plancha, a Lenovo ThinkCentre which is about the size of a plank.

My Raspberry Pis are named after tapas: chorizo, keso, etc.

[–] [email protected] 1 points 10 months ago (1 children)

"What is the reason you shy away from Ubuntu?"

Canonical. Snaps. Ubuntu was the first server OS I used, and while it was quite good, I think I prefer using a base distro instead of a derivative. If I'm going to use Debian, I'll use Debian, not Debian with corporate stuff on top.

As for SELinux: I tried it around a year ago. But as soon as I started doing stuff with users and tweaking Docker permissions, things went wrong and I just set it to permissive. Maybe I'll try again soon, because other parts of managing servers have become much easier over time as I've learned. I agree that running a server without SELinux is quite dumb and not very professional.
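
For what it's worth, the usual alternative to going permissive is reading the denials and relabeling; a rough sketch of that loop, assuming the audit tools are installed (the paths and image name here are made up for illustration):

```shell
# Which mode are we in? (Enforcing / Permissive / Disabled)
getenforce

# Show recent SELinux denials (AVC records) from the audit log
sudo ausearch -m AVC -ts recent

# With Docker bind mounts, let the engine relabel the volume
# instead of turning SELinux off:
#   :z -> shared label (usable by several containers)
#   :Z -> private label (this container only)
docker run -v /srv/app/data:/data:z myimage
```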

[–] [email protected] 1 points 10 months ago

You've convinced me on immutable Fedora. Maybe I'll try it out sometime on our backup/testing server, and it might make its way to production if I'm happy with it.

As for distrobox I'll see.

The main reason I used Gentoo is the ability to reduce the attack surface with USE flags. But it seems the tradeoffs are greater than the advantages (the Mastodon issue I mentioned). If I don't switch the server to immutable Fedora, I'll probably just use something like plain Fedora or Debian.
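
For readers unfamiliar with Gentoo: trimming the attack surface this way is just a few lines of Portage configuration. A minimal sketch (the flags and packages below are only examples, not a recommendation):

```shell
# /etc/portage/make.conf -- global USE defaults: drop whole feature classes
USE="-X -gtk -bluetooth -cups"

# /etc/portage/package.use/server -- per-package overrides
www-servers/nginx -pcre-jit
dev-lang/python -tk
```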

[–] [email protected] 1 points 10 months ago (2 children)

To me, this is only one of the few advantages of immutability. I have already used NixOS on a server and I really didn't like having to learn how to do everything "the right way". As for distrobox, it sounds like an additional failure point: it is an abstraction over containers that hides how things are actually done. I'd say if you run an app in a container, go all the way: build the container yourself. I didn't really like distrobox when I tried it, either. Both of these concepts (immutability, distrobox) would be great if they were perfectly executed, but the learning curve of NixOS and the wackiness of distrobox drove me away.

[–] [email protected] 6 points 10 months ago* (last edited 10 months ago)

On X11 Linux, install redshift with your package manager. Run it.
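
A couple of handy invocations, in case "run it" is too terse (flag names are from redshift's CLI; the temperature values are arbitrary):

```shell
redshift -O 4500      # one-shot: set a warm color temperature and exit
redshift -x           # reset the screen back to normal
redshift -t 6500:3500 # run continuously with day:night temperatures
```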

[–] [email protected] 4 points 10 months ago

Free software tells you "do whatever you want, you're free", but open source completely misses the point: it means you can read the code, but not necessarily recompile, modify, and redistribute it. Plus, the term seems made for exactly the confusion that comes from it. For example, a lot of AI models like LLMs claim to be "open source", which basically means nothing: it's far easier to say that than to claim it's a free model, because that would imply the freedom to modify, reuse, and redistribute the training data, weights, etc. (no AI model allows that for now, and there will probably never be one that does).

[–] [email protected] 2 points 10 months ago (4 children)

I totally agree. But I wouldn't necessarily say Gentoo is a bleeding-edge distro: it's kind of up to the user. They are free to configure the package manager (Portage) however they want and can even do updates manually. I just like the idea of having newer packages at the cost of stability, because I also use the server as a shell account host (with an isolated user ;-)) and need things like the latest Neovim. These days I would know if an update failed, because I would literally be in front of the process and would test that services are working after the updates, so I'd know if I have to roll back. That makes it basically like a stable distro IMO (even though the packages aren't battle-tested before being pushed as updates).

[–] [email protected] 5 points 10 months ago

Unattended updates are 10x better, because those tools let you apply only security updates. Plus they are much more stable, and something like this would never happen on a stable distro.
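
On Debian, for instance, restricting unattended-upgrades to the security archive is a one-stanza config; a minimal sketch of /etc/apt/apt.conf.d/50unattended-upgrades (origin patterns differ per release):

```shell
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
};
Unattended-Upgrade::Automatic-Reboot "false";
```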

[–] [email protected] 5 points 10 months ago (6 children)

"I'm surprised this strategy was approved for a public server"

The goal was to avoid getting hacked on a server that could have many vulnerable services (there are more than 20 services on there). When I set this up, I was basically freaked out that I hadn't updated Mastodon until more than a week after the last critical vulnerability in it was found (arbitrary code execution on the server). Given the number of affected users, and the impact a hack would have, I chose auto-updates back then, even if I now agree it wasn't clever (and I ended up shooting myself in the foot). These days I just do updates semi-regularly, and I am subscribed to mailing lists like oss-security so I learn about vulnerabilities as early as possible. Plus, I am no longer the only person in charge.

[–] [email protected] 3 points 10 months ago* (last edited 10 months ago)

That's what I learned :-)

Edit: not saying that to be rude

[–] [email protected] 5 points 10 months ago (3 children)

Yup. gives nickel back

13
submitted 10 months ago* (last edited 10 months ago) by [email protected] to c/[email protected]
 

Tl;dr: Automatic updates on my home server caused 8 hours of downtime for all of renn.es' Docker services, including email and public websites

65
Best fortune (lemmy.ml)
submitted 10 months ago* (last edited 10 months ago) by [email protected] to c/[email protected]
 
#if _FP_W_TYPE_SIZE < 32
#error "Here's a nickel kid.  Go buy yourself a real computer."
#endif

-- /arch/sparc64/double.h

 