this post was submitted on 05 Aug 2023
120 points (97.6% liked)

Linux


Hello everyone. I'm going to build a new PC soon and I'm trying to maximize its reliability as much as I can. I'm using Debian Bookworm. I have a 1TB M.2 SSD to boot from and a 4TB SATA SSD for storage. My goal is for the computer to last at least 10 years. It's for personal use and work: playing games, making games, programming, drawing, 3D modelling, etc.

I've been reading up on filesystems, and it seems like the best ones for preserving data through corruption or power outages are BTRFS and ZFS. However, I've also read they have stability issues, unlike Ext4. It seems like a tradeoff, then?

I've read that most of BTRFS's stability issues come from trying to do RAID5/6 on it, which I'll never do. Is everything else good enough? ZFS's stability issues seem to mostly come from it having out-of-tree kernel modules, but how much of a problem is this in real-life use?

So far I've been thinking of using BTRFS for the boot drive and ZFS for the storage drive. But maybe it's better to use BTRFS for both? I'll of course keep backups, but I'd still like to minimize how often I have to deal with things breaking.
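Whichever split you choose, both filesystems can verify their checksums on demand with a scrub, which is worth scheduling regardless. A minimal sketch, assuming a btrfs root filesystem and a ZFS pool named "tank" (hypothetical pool name):

```shell
# Kick off integrity checks (run these periodically from cron or systemd timers).
btrfs scrub start /     # re-reads data/metadata on the btrfs volume and verifies checksums
zpool scrub tank        # same idea for the ZFS pool

# Check progress and whether any errors were found/repaired.
btrfs scrub status /
zpool status tank
```

Both commands repair silently-corrupted blocks automatically when a redundant copy exists; without redundancy they at least report which files are damaged.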

Thank you in advance for the advice.

(page 2) 8 comments
[–] [email protected] 1 points 11 months ago

I've been using ext4/btrfs for a long time, but recently I decided to give XFS a try, and it feels like a pretty solid all-rounder FS.

I know it's a very old and very well-supported FS, developed by Silicon Graphics, and it has been getting constant improvements over time, including performance work and checksumming. TBH, for my use cases anything would work, but BTRFS snapshots were killing my storage and I got bored with the maintenance tasks.
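For anyone hitting the same snapshot bloat, the cleanup itself is simple; a sketch, assuming snapshots live under a hypothetical /mnt/.snapshots directory:

```shell
# List only snapshots (not all subvolumes) on the mounted filesystem.
btrfs subvolume list -s /mnt

# Delete an old snapshot you no longer need (path is an example).
btrfs subvolume delete /mnt/.snapshots/2023-01-01

# See how much space each subvolume/snapshot pins
# (requires quotas: btrfs quota enable /mnt).
btrfs qgroup show /mnt
```

Note that deleting a snapshot only frees the extents no other snapshot still references, which is why space can take a while to come back.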

The ArchWiki has amazing documentation for all of these filesystems, so it might be worth a look.

[–] [email protected] 0 points 11 months ago* (last edited 11 months ago) (1 children)

Ten years is a long time. In ten years, 4 TB of storage will be less than a crappy thumb drive.

For resilient storage, I personally would get two HDDs for the price of one SSD, slap software RAID1 with ext4 on them, and forget about them until mdadm alerts.
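A sketch of that setup, with hypothetical device names (double-check yours with lsblk before running anything destructive):

```shell
# Build a two-disk RAID1 array and put ext4 on it (device names are examples).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/storage

# Persist the array definition so it assembles on boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Run the monitor as a daemon and get mail when the array degrades
# (address is a placeholder; Debian's mdadm package can also do this via its init config).
mdadm --monitor --scan --daemonise --mail=you@example.com
```

The array keeps working with one dead disk; you swap the failed drive at your leisure and mdadm rebuilds onto it.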

[–] [email protected] -3 points 11 months ago* (last edited 11 months ago) (4 children)

This might be controversial here. But if reliability is your biggest concern, you really can't go wrong with:

  • A proper hardware RAID controller

You want something with patrol read, supercapacitor- or battery-backed cache/NVRAM, and a fast enough chipset/memory to keep up with the underlying drives.

  • LVM with snapshots

  • Ext4 or XFS

  • A basic UPS that you can monitor with NUT to safely shut down your system during an outage.
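For the NUT piece, the core is a MONITOR line in upsmon.conf plus a shutdown command; a minimal sketch with hypothetical UPS name and credentials:

```
# /etc/nut/upsmon.conf (names and password are placeholders)
MONITOR myups@localhost 1 upsmon secretpass master
SHUTDOWNCMD "/sbin/shutdown -h +0"
```

With that in place, upsmon watches the UPS daemon and cleanly powers the machine down once the battery runs low during an outage.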

I would probably stick with ext4 for boot and XFS for data. They are both super reliable, and both are usually close to tied for general-purpose performance on modern kernels.

That's what we do in enterprise land. Keep it simple. Use discrete hardware/software components that do one thing and do it well.

I had decade-old servers with similar setups that were installed with Ubuntu 8.04 and upgraded all the way through 18.04 with minimal issues (the GRUB2 migration being one of the bigger pains). Granted, they went through plenty of hard drives. But some even got increased capacity along the way (you just replace them one at a time and let the RAID resilver in between).

Edit to add: The only gotcha you really have to worry about is properly aligning the filesystem to the underlying RAID geometry (if the RAID controller doesn't expose it to the OS for you). But that's more important with striping.
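To illustrate that alignment gotcha: ext4 takes the geometry as stride/stripe_width hints at mkfs time. A sketch of the arithmetic, assuming a hypothetical 256 KiB RAID chunk, 4 KiB filesystem blocks, and a 4-disk RAID5 (3 data disks per stripe):

```shell
# Derive ext4 alignment hints from RAID geometry (all values hypothetical).
chunk_kb=256       # RAID chunk (stripe unit) size in KiB
block_kb=4         # ext4 block size in KiB
data_disks=3       # a 4-disk RAID5 has 3 data-bearing disks per stripe

stride=$((chunk_kb / block_kb))         # filesystem blocks per RAID chunk
stripe_width=$((stride * data_disks))   # filesystem blocks per full data stripe
echo "stride=$stride stripe_width=$stripe_width"
# prints: stride=64 stripe_width=192

# Then pass the hints at filesystem creation time (device is an example):
# mkfs.ext4 -E stride=$stride,stripe_width=$stripe_width /dev/sdX1
```

For a mirror (RAID1) there's no striping, so as noted above this matters much less there.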
