[-] [email protected] 1 points 3 days ago

What do you mean that file deduplication will take forever if there are duplicated directories? That the scan will take forever, or that manual confirmation will?

[-] [email protected] 1 points 3 days ago

That sounds doable. However, I wouldn't trust myself to code something bug-free on the first go xD

[-] [email protected] 1 points 4 days ago

This will indeed save space, but I don't want links either. I want unique files.

[-] [email protected] 1 points 4 days ago

I had multiple systems which at some point were syncing with Syncthing, but over time I stopped using my desktop computer and the Syncthing setup went unmaintained. I had to remove the SSD from the old desktop, so I yoinked the home directory and saved it onto my laptop. As you can probably tell, a lot of stuff got duplicated and a lot of stuff diverged over time. My idea is to merge everything into my laptop's home directory first and then look at the diverged files manually, as that would be less work. I don't think doing a backup with all my redundant files is a good idea, as the initial backup would include other backups and a lot of duplicated files.

[-] [email protected] -1 points 4 days ago

I did not ask for a backup solution, but for a deduplication tool.

82
Deduplication tool (lemmy.world)
submitted 5 days ago* (last edited 3 days ago) by [email protected] to c/[email protected]

I'm in the process of starting a proper backup solution. However, over the years I've done a few quick-and-dirty copy-pastes of home directories from different systems. Now I have to pay my technical debt and remove the duplicates. I'm looking for a deduplication tool that will:

  • accept a destination directory
  • delete the source locations after the operation
  • if file content is the same, delete the redundant copy
  • if file content is different, move the file and change its name to avoid a name collision

I tried doing it in Nautilus, but it only looks at file names, not file content. E.g. if two photos have the same content but different names, it will still create a redundant copy.
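The requirements above could be sketched roughly like this in Python. This is an untested sketch, not a vetted tool; the function names are mine, and it compares content with SHA-256 hashes and disambiguates collisions with a hash-prefix suffix (one of several possible renaming schemes):

```python
import hashlib
import shutil
from pathlib import Path

def file_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's content."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def merge_dedup(source: Path, dest: Path) -> None:
    """Move files from source into dest: drop exact-content duplicates,
    rename on name collisions where the content differs."""
    for src_file in sorted(p for p in source.rglob("*") if p.is_file()):
        target = dest / src_file.relative_to(source)
        target.parent.mkdir(parents=True, exist_ok=True)
        if target.exists():
            if file_hash(target) == file_hash(src_file):
                src_file.unlink()  # identical content: delete the redundant copy
                continue
            # same name, different content: rename to avoid the collision
            suffix = file_hash(src_file)[:8]
            target = target.with_name(f"{target.stem}.{suffix}{target.suffix}")
        shutil.move(str(src_file), str(target))
```

After the run, `source` contains only empty directories, which can be removed separately once you've verified the result.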

Edit: Some comments suggested using btrfs deduplication via duperemove. This replaces identical file content with references to the same on-disk location. That is not what I intend; I want to remove the redundant files completely.

Edit 2: Another quite cool solution is to use hardlinks: replace all occurrences of the same data with a hardlink, then traverse the redundant directories and delete whatever is a hardlink. The remaining files will be unique. I'm not going for this myself, as I don't trust myself to write a bug-free implementation.

[-] [email protected] 15 points 4 weeks ago

The exciting part will be if they launch a passively cooled ARM-based laptop.

[-] [email protected] 4 points 4 weeks ago

If you can get a metal-body laptop, I would suggest you do; a metal chassis with Linux will last a long while. Programming will not take much in the way of resources (and if it does, rewrite your code). Since you're into light programming like Python, any distro would be fine. It feels like the community has somewhat agreed to suggest Linux Mint to new users, so I'll support that.

[-] [email protected] 2 points 1 month ago

Yeah true, though it's dealt with already. Time to put the lid back on that can.

[-] [email protected] 1 points 1 month ago* (last edited 1 month ago)

Hahaha, it actually did. I found out shortly after initially posting this. I'm constantly reminded that I haven't learned to read yet (documentation, datasheets, terminal output, etc...)

[-] [email protected] 1 points 1 month ago* (last edited 1 month ago)

I usually try to avoid bad habits like this but this time it was justified.

The Ubuntu laptop had to connect to the company VPN. I was using the openconnect network-manager-gnome thingy to do that. Recently the company upgraded their VPN software, which is sort of incompatible with OpenConnect and requires a modified user-agent string for it to prompt for 2FA keys. The package in Ubuntu 22.04 is too old to modify that in the GUI. I tried in the terminal manually, editing the config manually with vim, and even dumping the config from my personal Arch laptop. We also tried the proprietary Cisco AnyConnect, but there is probably a server misconfiguration which causes the connection to drop and reconnect once a minute.

In Ubuntu 24.04 it works given the user-agent modification, but even though that release came out a couple of weeks ago, LTS users don't get the upgrade before mid-August. So the easiest solution was to take the software, compile it in the VM, and use it there. It's a temporary solution, but we had to have something working by the next morning. With such a setup it's an annoyance to have password prompts show up. On top of that, the keyboard is kinda fucked and some characters register multiple times, making the situation with passwords even worse.

If you have a good idea of what I could have tried, let me know; I'd love to hear new ideas.

28
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]

I've run passwd, and sudo su; passwd, to change the passwords for root and for my account. The password is set correctly when using sudo and su, but whenever I get prompted by pkexec it accepts only the old password. I've rebooted my system to make sure that wasn't the issue.

Edit: Solved. Turns out the password was changed for the root account but not my user account. I think the reason is that there are no password quality requirements on the root account, but there are on the default account in Ubuntu. Changing the password from the root account with passwd user worked fine.

[-] [email protected] 1 points 9 months ago

Does anyone know if it's open source, by any chance?

25
submitted 10 months ago* (last edited 10 months ago) by [email protected] to c/[email protected]

In my dmesg logs I get the following errors a lot:

[232671.710741] BTRFS warning (device nvme0n1p2): csum failed root 257 ino 2496314 off 946159616 csum 0xb7eb9798 expected csum 0x3803f9f6 mirror 1
[232671.710746] BTRFS error (device nvme0n1p2): bdev /dev/nvme0n1p2 errs: wr 0, rd 0, flush 0, corrupt 19297, gen 0
[232673.984324] BTRFS warning (device nvme0n1p2): csum failed root 257 ino 2496314 off 946159616 csum 0xb7eb9798 expected csum 0x3803f9f6 mirror 1
[232673.984329] BTRFS error (device nvme0n1p2): bdev /dev/nvme0n1p2 errs: wr 0, rd 0, flush 0, corrupt 19298, gen 0
[232673.988851] BTRFS warning (device nvme0n1p2): csum failed root 257 ino 2496314 off 946159616 csum 0xb7eb9798 expected csum 0x3803f9f6 mirror 1

I've run btrfs scrub start -Bd /home as described here. The report afterwards claims everything is fine.

btrfs scrub status /home
UUID:             145c0d63-05f8-43a2-934b-7583cb5f6100
Scrub started:    Fri Aug  4 11:35:19 2023
Status:           finished
Duration:         0:07:49
Total to scrub:   480.21GiB
Rate:             1.02GiB/s
Error summary:    no errors found
1
Reset flash drive (lemmy.world)
submitted 11 months ago by [email protected] to c/[email protected]

I've been messing with my flash drives, trying to follow some random documentation with dd, and now both of my flash drives are reporting 0 bytes of free space. I was trying to clear everything out and start from scratch, as if they were new. I wonder if there is any program out there that can just sudo reset-everything /dev/sdX.

1
submitted 11 months ago by [email protected] to c/[email protected]

I'm experiencing an issue with commands that provide a TUI interface, like journalctl, systemctl, and vim. It feels like the terminal dimensions are not matching up somehow, and the issue is only present some of the time. On the host I'm using Black Box, and I tile my windows using the Pop!_OS tiler. I'm also frequently scaling the font with the ctrl + and ctrl - shortcuts. The remote sshd host is running Debian; the $LINES and $COLUMNS variables are set, and the bashrc files are in their default state.

How is this supposed to work? Isn't my terminal client supposed to send new $LINES and $COLUMNS each time there is a change?
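For what it's worth, resize notification normally doesn't go through $LINES/$COLUMNS at all: the terminal emulator (and sshd, for remote sessions) sets the size on the tty via the TIOCSWINSZ ioctl, and full-screen programs get a SIGWINCH signal and re-query the tty. The environment variables are only a shell-maintained fallback. A small, hedged Python demo of that mechanism (assumes a POSIX system):

```python
import shutil
import signal

def on_winch(signum, frame):
    # Re-query the tty after a resize; falls back to COLUMNS/LINES env
    # (or 80x24) when stdout is not attached to a terminal.
    size = shutil.get_terminal_size()
    print(f"resized to {size.columns}x{size.lines}")

# SIGWINCH only exists on POSIX; TUI programs like vim install a handler
# like this so they can redraw at the new dimensions.
if hasattr(signal, "SIGWINCH"):
    signal.signal(signal.SIGWINCH, on_winch)

print(shutil.get_terminal_size())
```

If a TUI keeps drawing at the wrong size, something between the emulator and the remote pty is likely failing to propagate that ioctl, rather than the variables being stale.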


Agility0971

joined 1 year ago