AFKBRBChocolate

joined 1 year ago
[–] [email protected] 3 points 3 weeks ago

I should have mentioned what you just did: your passion doesn't have to be your job.

Tangentially, as I get closer to retirement, one of the things I hear from retirees is that they planned on doing a lot more of their hobby when they retired, but found that the hobby felt more like a job when they tried to do it all day. So sometimes it's better that you keep something you enjoy as something that you can just do when you want.

[–] [email protected] 17 points 3 weeks ago (4 children)

I end up having similar conversations with college folks (interns mostly). I usually say something along the lines of:

  • If there's something that you're so passionate about that you're going to do it regardless, it's worth taking a shot at making a living at it. Things like writing, acting, and music are really hard to make it in, but if it's really a passion, you might as well give it a go. It's good to have a Plan B though.
  • If you aren't super passionate about something, or you don't have the starving artist mentality or whatever, next is to look at things you're good at that you don't hate, especially if there's room to grow in them. If you're good at math, for instance, you could consider being an accountant.
  • If you don't feel like you have any especially marketable skills, then you're looking for something that's more broadly available, like retail or whatever. If you can find something that teaches a skill, that's a plus.

Broadly, there's a passion, there's a career, and there's a job. There's nothing wrong with any of those, but people tend to be happiest in that order. I personally wasn't super passionate about anything, but liked computers, got a CS degree, ended up as a software engineer at a rocket company, and now manage the software organization there. There were other things I enjoyed, but I figured programming was the most marketable, and that's worked out for me.

What people tend to like or hate the most about where they work are the people and/or the boss, and that can be good or bad pretty much anywhere. Good to watch out for red and green flags when you're looking.

[–] [email protected] 4 points 3 weeks ago

Ugh, my poor wife; I've had a number of bad experiences because I'm so fundamentally stubborn. In the dream, I won't be able to do something, and I'll work and work at it, and sometimes succeed in real life. It's been as simple and benign as not being able to see in a dream and struggling to open my eyes until I finally do, and I wake up. But I've managed to yell with a mouth that didn't completely work, so my wife woke up to what sounded like a yelling, mournful ghost. I've managed to fight and punched my wife. I've managed to run and kicked her. In all these cases, in the dream, I've had to really struggle to do the thing before I succeed and wake myself up.

Sleep paralysis turns out to be a good thing.

[–] [email protected] 20 points 3 weeks ago (3 children)

It must be cathartic to make cartoons about an argument you want to have where the other person is silenced by your point. Most of the time, the guy on the left in this cartoon would continue to argue and reject everything you say.

As evidence, watch a video of anyone arguing with MTG.

[–] [email protected] 7 points 1 month ago (1 children)

All at once?

[–] [email protected] 2 points 1 month ago

You might have missed where I said it explained both the Text to Columns wizard and a formula. He used the formula, which is what he was looking for. He's a top-notch software developer, he just doesn't use Excel much.

But I agree with your broader point. I keep having to remind people that the "LM" part is for "language model." It's not figuring anything out, it's distilling what an answer should look like. A great example is to ask one for a mathematical proof that isn't commonly found online - maybe something novel. In all likelihood, it's going to give you one, and it will probably look like the right kind of stuff, but it will also probably be wrong. It doesn't know math (it doesn't know anything), it just has a model of what a response should look like.

That being said, they're pretty good for a number of things. One great example is lesson plans. From what I understand, most teachers now give an LLM the coursework and ask it to generate a lesson plan. Apparently they do an excellent job and save many hours of work. Anything that involves summarizing information is good, especially as that constrains the training data.

[–] [email protected] 2 points 1 month ago (2 children)

Was having a related conversation with an employee this morning (I manage a software engineering organization). He asked an LLM how to separate the parts of a date in Excel, and got a pretty good explanation of how to do it with the Text to Columns wizard, and also how to use a formula to get each part. He was happy because he felt it would have taken him much longer to figure it out himself.
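For reference, the formula route in Excel is typically the built-in YEAR, MONTH, and DAY functions (e.g., =YEAR(A1)); the thread doesn't quote the exact formula he used, so that's an assumption. For anyone doing the same split in code, here's the illustrative Python equivalent:

```python
from datetime import date

# Sample date; in Excel this value would live in a cell like A1.
d = date(2024, 3, 15)

# Pull out the parts, mirroring =YEAR(A1), =MONTH(A1), =DAY(A1).
parts = (d.year, d.month, d.day)
print(parts)  # (2024, 3, 15)
```

Same idea either way: the date is a single value, and the year/month/day accessors decompose it without any string parsing.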

I was saying I thought that was a good use of an LLM - it's going to give a tailored answer - but my worry is that people will do less scrubbing of an answer coming from an AI than one they saw on a forum. I said we should think of it like a tailored Google search.

For comparison, I googled "Excel formula separate parts of a date" and one of the top results was a forum discussion that had the exact solutions the LLM gave, using the same examples. On the one hand, to get it from the forum you had to wade through all the wrong answers and discussions. On the other hand, that discussion puts the answer in the context of a bunch of others that are off the mark, which I think makes people less likely to assume it's correct.

In any case, it's still just synthesizing from or regurgitating training data.

[–] [email protected] 6 points 1 month ago

One of the things I look for in employees is the ability to distill complex topics into the important elements and explain them to someone unfamiliar. Some people are just naturally good at it, and it's a really important skill for moving up a leadership chain.

[–] [email protected] 12 points 1 month ago (4 children)

I use a Fitbit Flex 2 for that purpose. I doubt they make that model anymore, but they probably make something similar. The Flex 2 is just a little lozenge-shaped thing that tracks movement, can vibrate, and has a few little lights. The app lets you set alarms (you turn it off by tapping it). I also have mine set to vibrate when I get a text message or a phone call in case I'm someplace noisy and don't hear it.

[–] [email protected] 1 points 1 month ago (1 children)

I'm always interested in seeing examples like this where the LLM gets to the right answer after a series of questions (with no additional information) about its earlier wrong responses. I'd love to understand what's going on in the software that allows the initial wrong answers but eventually gets to the right one without any additional input.

[–] [email protected] 4 points 1 month ago

Didn't forget, it's intentionally an abbreviated version of the puzzle with no wolf or cabbage (or constraints), so the man and goat can just go across. But the AI has a lot of examples in its training data and it's pulling from that.
