Programmer Humor

Post funny things about programming here! (Or just rant about your favourite programming language.)

[–] [email protected] 31 points 1 month ago (9 children)

Good luck connecting to each of the 36 pods and grepping the file over and over again

[–] [email protected] 11 points 1 month ago* (last edited 1 month ago)

for X in $(seq -f host%02g 1 9); do echo "$X"; ssh -q "$X" "grep the shit"; done

:)

But yeah fair, I do actually use a big data stack for log monitoring and searching… it’s just way more usable haha

[–] [email protected] 9 points 1 month ago

Just write a bash script to loop over them.
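
Something like this would do it, assuming the pods share a common label (app=myapp and the search string are placeholders):

for POD in $(kubectl get pods -l app=myapp -o name); do echo "== $POD"; kubectl logs "$POD" | grep "the thing"; done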

[–] [email protected] 8 points 1 month ago

You can run the logs command against a label so it will match all 36 pods
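
For example (label is a placeholder; --prefix tags each line with the pod it came from, and if you follow them with -f you may also have to raise --max-log-requests):

kubectl logs -l app=myapp --prefix | grep ERROR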

[–] [email protected] 6 points 1 month ago

Stern has been around forever. You could also just use a shared label selector with kubectl logs and then grep from there. You make it sound difficult, if not impossible, but it's not. Combine it with egrep and you can do pretty much anything you want right there on the CLI.
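
For example, something like this (label made up) tails every matching pod at once and still keeps everything on the CLI:

stern -l app=myapp --since 15m | egrep "timeout|connection refused"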

[–] [email protected] 5 points 1 month ago (1 children)

I don't know how k8s works, but if there is a way to execute just one command in a container and then exit out of it, like chroot, wouldn't it be possible to just use xargs with a list of the container names?

[–] [email protected] 9 points 1 month ago

yeah, just use kubectl and pipe stuff around with bash to make it work, pretty easy
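
A minimal sketch of that xargs approach (the log file path is a placeholder; kubectl exec runs a single command in the container and exits, which is the one-shot behaviour you're describing):

kubectl get pods -o name | xargs -I{} kubectl exec {} -- grep "pattern" /var/log/app.log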

[–] [email protected] 4 points 1 month ago

This is what I was thinking. And with grep you can't really graph things out over time, which is really critical for a lot of workflows.

I get that Splunk and Elastic are unwieldy beasts that take way too much maintenance for what they provide for many orgs, but to think grep is a replacement is kinda crazy.

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago) (1 children)

Let me introduce you to syslogd.

But well, it's probably overkill, and you almost certainly just need to log to a shared volume.

[–] [email protected] 1 points 1 month ago

Syslog isn't really overkill IMO. It's pretty easy to configure it to log to a remote server, and to split particular log types or sources into different files. It's a decent abstraction - your app that logs to syslog doesn't have to know where the logs are going.
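
With rsyslog, for example, forwarding everything to a central host is roughly one config line (hostname is a placeholder; @@ is TCP, a single @ is UDP), and splitting one app's output into its own file isn't much more:

*.* @@loghost.example.com:514
if $programname == 'myapp' then /var/log/myapp.log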

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago)

Since you are talking about pods, you are obviously emitting all your logs to stdout and stderr, and you have of course also labeled your pods nicely, so grepping all 36 pods is as easy as kubectl logs -l <label-key>=<label-value> | grep <search-term>

[–] [email protected] 3 points 1 month ago

That's why tmux has synchronize-panes!
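
One pane per host, setw synchronize-panes on, and the same grep goes to every pane at once.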