this post was submitted on 14 Apr 2024
16 points (76.7% liked)

Linux


Last night I was nearly tearing my hair out trying to get a crappy inkjet printer installed, so I turned to ChatGPT for help; five minutes later it was done (driver issues).

I realised that I use it a lot for all sorts of things but haven’t really seen any posts on here about it, and was wondering if other people do the same.

I’ve used it to create bash scripts, write small, niche Python programs (e.g. sort photos into separate folders based on their orientation), and create cron jobs, among other things.
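As a taste of the photo-sorting task: here's a minimal sketch of what such a script can look like (not the exact code ChatGPT produced; it assumes Pillow for reading image dimensions, and the folder names are my own choices):

```python
import shutil
from pathlib import Path

def orientation(width: int, height: int) -> str:
    """Classify an image by its pixel dimensions."""
    if width > height:
        return "landscape"
    if height > width:
        return "portrait"
    return "square"

def sort_photos(src: Path) -> None:
    """Move every JPEG in src into a landscape/portrait/square subfolder."""
    from PIL import Image  # third-party: pip install Pillow
    for photo in sorted(src.glob("*.jpg")):
        with Image.open(photo) as img:
            bucket = src / orientation(*img.size)
        bucket.mkdir(exist_ok=True)
        shutil.move(str(photo), str(bucket / photo.name))

# Usage (uncomment once Pillow is installed):
# sort_photos(Path.home() / "Pictures")
```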

In particular, I find it useful when I want to use a command line program that has a complicated set of options / syntax, but I might only ever want to use it once, or very infrequently. So I tell it what I want to accomplish, and ask it to give me the correct command to run with the options I specify in plain language.

Example:

User

Give me the correct rsync command to recursively copy all files in /home/user/Documents that are image files of any type but were created more recently than 1/1/2024. The destination is /home/user/foo

ChatGPT

rsync -avm --include='*/' --include='*.jpg' --include='*.jpeg' --include='*.png' --include='*.gif' --include='*.bmp' --include='*.tiff' --include='*.svg' --exclude='*' --prune-empty-dirs --min-age=$(date -d '2024-01-01' +%s) /home/user/Documents/ /home/user/foo/

User

Break down the command and explain each part of it in dot point format.

ChatGPT

  • rsync: Invokes the rsync program, which is used for efficiently transferring and synchronizing files across systems or within a system.

  • -avm:
    • -a: Archive mode, which preserves permissions, symlinks, modification times, group, and ownership. Implicitly includes recursion.
    • -v: Verbose mode, which provides detailed output of what rsync is doing.
    • -m: Prune empty directory chains from the file-list.
  • --include='*/': Include directories themselves, which is necessary for recursion.
  • --include='*.jpg' --include='*.jpeg' --include='*.png' --include='*.gif' --include='*.bmp' --include='*.tiff' --include='*.svg': Include files that match these image file extensions.
  • --exclude='*': Exclude all other files that do not match the previously stated include patterns.
  • --prune-empty-dirs: Exclude empty directories that result from the inclusion/exclusion pattern.
  • --min-age=$(date -d '2024-01-01' +%s): Exclude files that are older than the timestamp for January 1, 2024. This uses date -d to convert a date string into a timestamp, which rsync uses to filter files.
  • /home/user/Documents/: Source directory from which files are being synchronized.
  • /home/user/foo/: Destination directory to which files are being synchronized.
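If you want to sanity-check an answer like that, it helps to have an equivalent you can verify flag by flag in the man pages. A rough find-based version (GNU find and cp assumed; the function wrapper is just for convenience):

```shell
# copy_recent_images SRC DST: copy image files modified after 2024-01-01
# from SRC to DST, preserving the directory tree (GNU find and GNU cp).
copy_recent_images() {
    local src=$1 dst=$2
    # -newermt filters on modification time; -iname matches extensions
    # case-insensitively; cp --parents recreates the subdirectories in DST.
    (cd "$src" && find . -type f -newermt 2024-01-01 \
        \( -iname '*.jpg' -o -iname '*.jpeg' -o -iname '*.png' \
           -o -iname '*.gif' -o -iname '*.bmp' -o -iname '*.tiff' \
           -o -iname '*.svg' \) \
        -exec cp --parents -t "$dst" {} +)
}

# e.g.: copy_recent_images /home/user/Documents /home/user/foo
```

Note this filters on modification time, as rsync does; Linux tools generally can't see a file's "creation" date.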

It’s also really good for explaining concepts / commands in plain language.

It’s like having a 24 hour on call Linux expert :)

# Things to note:

- Get the subscription. ChatGPT 3.5 is pretty useless. ChatGPT 4 is fine, but I’m pretty sure you need the subscription to access it.

- Give it pre-instructions. I have told mine what distro, what shell I’m using and the make and model of my laptop. If you have a subscription you can add these as permanent pre-instructions, so to speak. That way it will be much more likely to give you correct answers.

- It’s not magic. In order to get what you want, you have to be able to ask the right questions. It will boost your ability but won’t turn you into a 1337 haxx0r.

- Ask questions. As always, don’t run any commands that you don’t understand. Ask it to break down any commands it tells you to run if you don’t understand them.

- Sometimes it goofs. For various reasons, sometimes it will ask you to install a package that no longer exists, or will give you a command that doesn’t work. When that happens, I just feed the error message back into ChatGPT and it’s usually able to correct itself.

- Ask “Is there a better or easier way to do this?” This is probably the most valuable question I’ve found to ask ChatGPT. Sometimes it gets so far into the weeds looking for a solution that you need to pull back and start fresh.

top 49 comments
[–] [email protected] 14 points 2 months ago* (last edited 2 months ago) (1 children)

User: "ChatGPT, write me a script to clean up my hard disk on Linux"

ChatGPT: sudo rm -rf / 😁

[–] [email protected] 6 points 2 months ago

Squeaky clean 😅

[–] [email protected] 9 points 2 months ago (1 children)

Interesting post, but it also made me think of this one: https://xkcd.com/1168/

[–] [email protected] 0 points 2 months ago (2 children)

What I've always wondered about that one is: why bother forbidding Google but not 'man tar'? 🤨

[–] [email protected] 3 points 2 months ago (1 children)

I interpret "use a valid tar command on your first try" as not allowing you to run other commands before the tar command.

[–] [email protected] 1 points 2 months ago (1 children)

Surely the bomb isn't the only computer in the immediate area.

[–] [email protected] 1 points 2 months ago

It says you only have ten seconds; I doubt you could log onto another (Unix) computer in that time, open the terminal, run the man page and then run over and enter a valid command...

[–] [email protected] 2 points 2 months ago (1 children)

😀 After seeing the comic for the first time, I thought the "UNIX" (TM) person could simply have gone for tar --help or tar --version as a valid command, to show off their "UNIX" skills and save everyone.

[–] [email protected] 4 points 2 months ago

Also surely a lot of people would know tar -Create Ze Vucking File and/or tar -Xtract Ze Vucking File
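For anyone who hasn't met the mnemonic: those spell out -czf (Create Ze File) and -xzf (Xtract Ze File), with v adding verbose output and f naming the archive. A quick demo (the file and directory names are arbitrary):

```shell
# Set up a throwaway directory to archive
mkdir -p mydir && echo "demo" > mydir/hello.txt

# tar -Create Ze (Vucking) File: pack the directory into a gzipped tarball
tar -czvf archive.tar.gz mydir/

# tar -Xtract Ze (Vucking) File: unpack it somewhere else
mkdir -p elsewhere && tar -xzvf archive.tar.gz -C elsewhere
```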

[–] [email protected] 8 points 2 months ago* (last edited 2 months ago) (3 children)

ChatGPT does not know truth. It does not know if the info it provides is true. It does not know if the code actually works. It just concatenates strings based on probability. You may get lucky or you may not. The easier the task, the more likely it is to succeed. But low difficulty is no guarantee of success.

It is great for layouts, structure and for the basic concept. "For loop in fish". But it may struggle to convert a video from x264 to av1 with ffmpeg. It depends on info that's provided online. If it uses misinformation, then that's in there as well.

The command you got is just wrong. What about avif, jxl or most other image formats? Use it, but think.

[–] [email protected] 1 points 2 months ago (1 children)

Note that sometimes AI models check whether code works by executing it. For example, Gemini can write a Python function and execute it to write down the results.

[–] [email protected] -1 points 2 months ago* (last edited 2 months ago) (4 children)

I was wondering how long it would take the gatekeepers to show up. The command works, and is perfectly fine. If I had any uncommon formats, I would tell gpt to include them.

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago) (1 children)

An example that confirms "ChatGPT does not know truth. It does not know if the info it provides is true." Or rather: "It will produce an answer that matches your inquiry and sounds correct, even if it's totally made up."

https://chat.openai.com/share/206fd8e9-600c-43f8-95be-cb2888ccd259

Summary:

User
in `podman stats` you see BLOCK IO as a summary of hard drive activity.
how to reset the 

ChatGPT
To reset the block I/O statistics displayed by podman stats, you can use the podman stats --reset command.

User
Error: unknown flag: --reset

ChatGPT
Apologies for the confusion. It seems I provided incorrect information. The podman stats command does not have a built-in option to reset the statistics.

So once again, don't be afraid to use it, but do your own research especially if following LLM could result in something breaking both in tech or in life.

[–] [email protected] -5 points 2 months ago (1 children)

You left out the part where it then gave you the correct answer.

[–] [email protected] 3 points 2 months ago

I didn't leave it out; I needed to provide that "part" to it to get the correct answer.

Because, as mentioned over and over in this thread, ChatGPT doesn't know the correct answer; it's a mathematical model of "what looks OK" and "what should be the next word". It looks OK to try a --reset parameter, but because ChatGPT can't actually check the podman stats documentation to see whether the parameter exists, it just generates it from common text patterns, and those patterns are written in a way that suggests they are the truth.

So once again: do your own research, especially when following the results could break something, in tech and especially in life. And that is true for both ChatGPT and random pages on the internet.

In this case I did exactly follow the ChatGPT answer without doing any fact checking: I asked ChatGPT, copied the command and pasted it into the terminal, because I knew that if it didn't work, the worst that could happen is it would fail and do nothing. But it's bad for new people who won't know what the result could be if it's wrong!

@z00s Don't take me wrong. I'm not telling not to use it, on the contrary.

You should use any tool that helps you do your job/task. But you should try to understand how to use those tools wisely.

Telling someone never to use ChatGPT is like telling someone never to use an excavator. That is wrong: you should use the excavator, but you should know what an excavator is, and what harm it could do, for example accidentally destroying a building or even hurting someone (or yourself) if not used wisely.

[–] [email protected] 3 points 2 months ago (1 children)

I’m quite sure it won’t be long until some bad practice spreads like this, giving clueless “Linux pros” top advice on how to enable a back door.

LLMs can be poisoned and as datasets increase and complexity grows it will be harder to contain.

ChatGPT works great for some stuff, but all you know is that someone somewhere wrote something similar. It is no better than Google at predicting what is good material and what’s wrong, and training is statistics.

[–] [email protected] -3 points 2 months ago (1 children)

In order to poison an LLM, you'd need access to the training process, which is locked down by OpenAI. Just posting false info on the net isn't enough. GPT doesn't simply repeat what's already been written.

More than that though, you can find plenty of wrong and bad advice posted confidently by legions of Linux gatekeepers on any forum.

Anyone who has ever spent any time on stack overflow will tell you why they'd rather talk to an LLM instead of posting there.

[–] [email protected] 3 points 2 months ago

chatgpt only generates text. that's how it was supposed to work. it doesn't care if the text it's generating is true, or if it even makes any sense. so sometimes it will generate untrue statements (with the same confidence as the 'linux gatekeepers' you mentioned, except with no comments to correct the response), no matter how well you train it. and if there's enough wrong information in the dataset, it will start repeating it in the responses, because again, its only real purpose is to pick out the next word in a string based on the training data it got. sometimes it gets things right, sometimes it doesn't, we can't just blindly trust it. pointing that out is not gatekeeping.

[–] [email protected] -2 points 2 months ago (1 children)

Gatekeeping ain't bad. It keeps the trash out.

[–] [email protected] -1 points 2 months ago

You get what you give

[–] [email protected] -5 points 2 months ago* (last edited 2 months ago)

I hear this over and over but none of what you say actually matters.

It's not luck if, for almost every question, it gives accurate and detailed answers that actually compile and work.

I think the difference in opinion comes down to what you use it for. In some areas I imagine it will just hallucinate. But in others, such as coding, it's often almost 100% correct and a magic tool for learning and saving soooo much time.

[–] [email protected] 8 points 2 months ago (2 children)

I'm not opposed at all to using LLMs for such purposes, however, please consider a solution that aligns with the values of GNU/Linux and the Free Software Movement. If you have sufficient RAM and a somewhat modern CPU, you can do inference on your very own machine, locally, with no connection to any external servers. And at very respectable speed.

[–] [email protected] 2 points 2 months ago (2 children)

Serious question: Can running locally be as good as ChatGPT-4?

[–] [email protected] 11 points 2 months ago

It's worth doing anyway to get a sense of how computationally intensive it is. Then consider how many people ask for the daily fart joke and you get a sense of the environmental impact.

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago) (1 children)

in terms of the quality of writing, you can get models of around 20GB at a similar level to GPT-4 (good for creative writing, but much worse if knowledge of something is required)

the model I use (~20GB) would know what rclone is but would most likely not know how to use it

EDIT: now that I think about it, that was based off some benchmark. Personally I wouldn't say it performs at GPT-4 level, more like GPT-3.5.

[–] [email protected] 3 points 2 months ago (1 children)

Which model is that? I tried several that were complete trash, then Mixtral appeared and started giving answers that are very basic but mostly factually correct. But none of these are even close to ChatGPT, which I can rely on for writing scripts.

Don't get me wrong, I'd rather not give them my data and money if there was an alternative. But for tech stuff we're not there yet.

[–] [email protected] 1 points 2 months ago

yeah, my bad. edited the comment with more accurate info

and this applies to creative writing, not knowledge-heavy stuff like coding

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago)

I am actually curious about that: would I need a high-powered GPU? I'm running a refurbished Dell Optiplex with a very basic video card that I added.

[–] [email protected] 7 points 2 months ago (1 children)

don’t run any commands that you don’t understand. Ask it to break down any commands it tells you to run if you don’t understand them.

You need to pay extra attention to this, as ML models will spit out commands and parameters that don't exist if there weren't enough examples of that action in the training dataset. Be especially careful with explanations: it can spit out a totally wrong but good-sounding explanation for a parameter, and it won't always include hedging words like "typically" that would indicate it isn't confident and is extrapolating from other, similar commands.

In your example it spit out:

 -m: Prune empty directory chains from the file-list.
 --prune-empty-dirs: Exclude empty directories that result from the inclusion/exclusion pattern.

which is actually exactly the same parameter with two different explanations; you can confirm this with man rsync:

 --prune-empty-dirs, -m   prune empty directory chains from file-list

So the more of an edge case you have, the bigger the chance it will spit out bad results, but these new models are shockingly good, especially for very common use cases.

[–] [email protected] -4 points 2 months ago* (last edited 2 months ago)

Absolutely. And I would also add that the more critical the use case, the more work you should do to double check things. Don't rely on gpt alone if you're doing critical backups, for example. But if you just want a python program that sorts MP3s, then go ahead and give it a whirl.

[–] [email protected] 2 points 2 months ago

I'm all for it as long as you keep using your brain. A coworker of mine set something up on AWS that wasn't working. Going through it, I found the error. He said he'd tried it using ChatGPT. He knows how to do it himself, and he knows the actual mistake was a mistake, but he trusted Amazon Q when it said the mistake was correct. Even when double-checking.

Trust, but verify.

I find it a helpful tool in your toolkit, just like being able to write effective search queries is. Copying scripts off the internet and running them blindly is a bad idea. The same holds for LLMs.

It may seem like it knows what it's talking about, but it can often talk out of its arse too...

I've personally had good results with 3.5 on the free tier. Unless you're really looking for the latest data

[–] [email protected] 2 points 2 months ago (1 children)

Someone excitedly demonstrated to me how easy it is to code with Copilot. They generated a bunch of code easily, and then proceeded to debug subtle bugs for longer than it would have taken to write the code themselves in the first place.

And in the end they were still left with badly structured, barely maintainable code.

LLMs will do exactly what Stack Overflow has done, but more efficiently: allow the proliferation of bad/outdated solutions to problems, and the application of those solutions with no real understanding.

More garbage code and more work for the few people who continue to actually read manuals and understand what they are doing.

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago)

Perhaps, but I'm not really suggesting its use for professional programming in this post.

What it is good for is helping with simple stuff like terminal commands, learning Python, etc. Stuff with a low risk profile that you're not relying on for anything too important.

[–] [email protected] 1 points 2 months ago

Is this an ad?

You could also use free LLMs, check out FMHY.

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago) (1 children)

I'll confess that I only tried GPT 3.5 (and the Mistral one, but it was actually consistently worse), given that there's no way in the world I'm actually giving OpenAI any money.

Having said that, I don't think it fundamentally changes the way it works. Basically I think it's fine as some sort of interactive man/Stack Overflow parser. It can reduce the friction of having to read the man pages yourself, but I do think it could do a lot better for new-user onboarding, which you seem to suggest in the comments is one of its useful aspects.

Basically it should drop the whole "intelligent expert" thing and just tell you straight away where it got the info from (and actually link the bloody man pages; at the end of the day the goal is still for you to be able to maintain your own effing system). It should also learn to tell you when it actually doesn't know, instead of inventing some plausible answer out of nowhere (but I guess that's a consequence of how those models work, being optimized for plausibility rather than correctness).

As for the quality of the answers: usually it's good enough to save you from googling simple one-liners. For scripts it actually shat the bed every single time I tried it. In some instances it gave me three ways to do slightly different things all in the same loop; in others, straight-up conflicting code blocks. Maybe that part is better in GPT-4, I don't know.

It also gives you outdated answers without specifying the version of the packages it targets. Which can be really problematic.

Basically where I'm going with this is that if you're coding, or maintaining any server at all, you really should learn how to track the state of your infra (including package versions) and read man pages anyway. If you're just a user, nowadays you don't really have to get your hands in the terminal.

At the end of the day, it can be useful as some sort of interactive meta search engine that you have to double check.

I'm really not getting into the whole "automated garbage that's filling up the internet, including bug reports and pull requests" debate. I do think that, all things considered, these models are a net negative for the web.

[–] [email protected] -1 points 2 months ago

As an example of what's possible with GPT-4: a client wanted DNS auth in Let's Encrypt instead of the HTTP challenge, so we could close incoming port 80. They're using a registrar with a proprietary API. With ChatGPT I created a certbot plugin in about 10 minutes, feeding it a PDF with the API description.

I know how to do every step of this myself, but it's a 4-8 hour task to research the registrar's API and how certbot plugins interface. Instead, I took another 15 minutes to review the code, ran it, and it was done.

[–] [email protected] 1 points 2 months ago

I love ChatGPT. It’s an invaluable tool. It has helped me solve my problems by pointing me in the right direction significantly faster than any search engine.

[–] [email protected] 0 points 2 months ago

I constantly use it for running Linux commands that have complicated options and arguments or that I just don't remember.

[–] [email protected] 0 points 2 months ago (1 children)

You can use Copilot or Mistral Chat for pretty much the same thing. Copilot offers GPT-4 (or 4.5) for free; Mistral Chat uses their own models, which sometimes produce better results.

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago)

To be honest, Microsoft's restrictions made Copilot extremely ineffective. I asked it to help me disable SSL verification in one of Java's HTTP clients, for testing purposes during development. It said that's something I should never do and would not give me an answer. ChatGPT's restrictions are way more rational than that. Microsoft gutted the tool a lot.

[–] [email protected] 0 points 2 months ago

For myself, I'm fine with using ChatGPT and other LLMs (I've been experimenting with trying to run them locally, so that I can gain some insight on them a bit better) to "fill in the gaps", or as a sort of interactable Wikipedia - but I try to avoid asking LLMs something that I have zero knowledge of, because it then makes it a bit more difficult to verify the results it produces.

[–] [email protected] 0 points 2 months ago (1 children)

If you haven't already tried it I would also highly recommend phind.com for troubleshooting or coding questions.

Also for a nice quick access to gpt from your terminal grab "tgpt" and you can ask questions directly from your terminal.

[–] [email protected] 1 points 2 months ago

Thanks for the tip!

[–] [email protected] -1 points 2 months ago

Good tips! I do the same, also for rsync.

Ask it to explain everything. You don't need to pay to have a permanent prompt saved (mine: "if I ask for a command, when writing to a file always do that in one command; don't describe beforehand, but explain every argument").

I think GPT 3.5 is okay, but it makes some errors and hallucinates. Sometimes the script is also wrong or has flaws.

It's basically a syntax translator, which is great!

[–] [email protected] -3 points 2 months ago* (last edited 2 months ago)

I agree. I'm learning Go right now and it has answered every question with absolute excellence. It probably accelerates my learning by 10x at least.

It's much better than searching the web, because you can't share your exact code with the web and get a detailed walkthrough of how it works.

GPT-4, of course.