j4k3

joined 1 year ago
[–] [email protected] 3 points 1 month ago (1 children)

I'm not worried about losing my history. I'm looking for a way to seamlessly create a dual history where the default is to have a unique history for each distrobox container with a way to integrate a full history from my primary terminal, likely with timestamps to switch back and forth. I can think of a couple of ways of doing this, but I wonder if others have explored already and found a better way.
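
One of the ways I'm imagining, as a rough untested sketch: give each container its own HISTFILE (written with timestamps enabled via HISTTIMEFORMAT), then merge everything on demand. The file naming and paths below are just placeholders, not anything distrobox sets up for you.

```python
#!/usr/bin/env python3
"""Merge timestamped bash history files from the host and distrobox containers.

Assumes each history file was written with HISTTIMEFORMAT set, so entries
look like:
    #1716239000
    podman ps -a
The paths and the ".bash_history.distrobox-*" naming are hypothetical.
"""
from pathlib import Path

# Hypothetical locations: host history plus one file per container.
HISTORY_FILES = [
    Path.home() / ".bash_history",
    *sorted(Path.home().glob(".bash_history.distrobox-*")),
]

def read_entries(path: Path):
    """Yield (timestamp, command) pairs from one bash history file."""
    stamp = 0
    for line in path.read_text(errors="replace").splitlines():
        if line.startswith("#") and line[1:].isdigit():
            stamp = int(line[1:])          # timestamp comment line
        elif line.strip():
            yield stamp, line              # command line

def main():
    entries = []
    for path in HISTORY_FILES:
        if path.exists():
            entries.extend(read_entries(path))
    entries.sort(key=lambda e: e[0])       # interleave by timestamp

    merged = Path.home() / ".bash_history.merged"
    with merged.open("w") as out:
        for stamp, cmd in entries:
            out.write(f"#{stamp}\n{cmd}\n")
    print(f"wrote {len(entries)} entries to {merged}")

if __name__ == "__main__":
    main()
```

From there, `history -r ~/.bash_history.merged` in the primary terminal would pull the combined view in, while the per-container HISTFILE keeps the default histories separate.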

[–] [email protected] 2 points 1 month ago

If it died as a result of spilling something on it, you most likely damaged something hardware-wise. If it was powered off, first remove the battery ASAP. Then take off the bottom cover, pat anything that needs it dry, and let it air out.

The real concern is the chips that do not have any pins sticking out of them. Those are ball grid arrays (a whole bunch of connections made under the black epoxy packaging), and they can hold moisture underneath for longer. Your best bet is to let it dry in a warm place for a few hours.

Getting wet is not the problem. The problem is a powered connection having a conductive fluid bridging two or more connections that cannot tolerate the current the fluid creates.

When the actual circuit board is made, it goes through ovens and gets submerged in liquids. Some boards even go across molten pools of tin as part of the component assembly process. The board itself (not all the other plastics and stuff for the case, screen, etc.) is very resilient.

In many industrial settings where the environment is very dirty, it is common to take a desktop PC apart and hose it off with water. The only issue is shorting connections under powered conditions.

So yes, technically, any form of drying can help "recover" the device.

[–] [email protected] 5 points 1 month ago

They are all sophists first and foremost.

[–] [email protected] 2 points 1 month ago

I back up and then upgrade through the mechanism provided. Why? Lazy. I should take the time to set up a NAS and run most of /home from that, but I've never been motivated enough to try it.

I usually let myself lag behind on Fedora releases to wait until the kinks have been worked out. I just jumped from 38 to 40 in one upgrade and totally regret it. Python is screwed up in distrobox and causing problems, but I can roll back too.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

Do you really think about things in terms of hobbies? I 100% identify with what you're saying about your interests, mindset, and experience, but I think about my interests in terms of a desire to learn the most fundamental skills I can manage. I'm not very good at programming complex tasks, but I can build breadboard computers. I know all the basics of circuits and can reverse engineer most hardware. I know FreeCAD, 3D printing, woodworking, manual lathe and mill machining, sand casting basics, torch, stick, MIG, and TIG welding, heavy equipment like excavators, front end loaders, and skid steers/loaders, commercial driving, auto body and paint, hotrodding, and building motors with carburetors and superchargers. I was a buyer for a chain of bike shops and have a bunch of skills related to that, along with commuting by bike full time and racing crits. I've been all over Linux for the last 10 years, and most recently I've taken a deep dive into generative AI and LLMs over the past year.

The majority of those listed are things I've done professionally for some amount of time, although some much shorter than others. I think of all of them as applying across the others in abstract ways. For example, I owned an auto body business twice, so I know what is involved with perfect finishing. It is a fundamental struggle against yourself and your inner expectations of time applied to a task. I did most of that in my early to late twenties, and it greatly shaped my attention to detail and my ability to suppress my impatience. The way it shaped my mind, and the skill itself, are the things I value. I know KiCAD and can etch my own circuit boards, but overall, electronics was the first real subject I could not fully understand quickly and intuitively, and it shaped how I compartmentalize my learning. FreeCAD helped my spatial awareness. AI has massively improved my communication and self awareness. Welding and heavy equipment helped me conquer many of my fears, such as heights. Machining helped me balance my understanding of accuracy as it relates to humans. Spending hundreds of thousands of dollars for bike shops taught me a lot about the average human's thought process, in an empirical analysis based on statistics.

I'm a fundamentally abstract thinker, call myself a jack of all trades (master of none), and mostly a skills collector.

I'm always leaving stuff out of these lists too, like how I'm writing a science fiction universe right now, or how I've got a small telescope and built my own eyepieces, or how I'm into cooking and fermentation, or recently started growing some foods in pots, etc. I got hit by a car riding a bike to work 10 years ago, so a lot of this is like two different people's lives: before and after disability. The second has been in near social isolation, and therefore has had unlimited time to explore, and had to explore in order to maintain mental stability.

[–] [email protected] 2 points 1 month ago

Another one to try is to take some message or story and tell it to rewrite it in the style of anything. It can be a New York Times best seller, a Nobel laureate, Sesame Street, etc. Or take it in a different direction and ask for the style of a different personality type. Keep in mind that "truth" is subjective in an LLM, so it "knows" everything in terms of a concept's presence in the training corpus. If you invoke pseudoscience, there will be other consequences in the way a profile is maintained, but a model is made to treat any belief as reality. Further on this tangent, the belief override mechanism is one of the most powerful tools in this little game. You can tell the model practically anything you believe and it will accommodate. There will be side effects, like an associated conservative tint and peripheral elements related to people without fundamental logic skills, such as tendencies to delve into magic, spiritism, and conspiracy nonsense. Still, this is a powerful tool in many parts of writing, and something to be aware of to check your own biases.

The last one I'll mention, in line with my original point: take some message you've written and ask the model to rewrite it in the style of the reaction you wish to evoke from the reader. Like, rewrite this message in the style of a more kind and empathetic person.

You can also ask for a bullet-point summary. Socrates is particularly good at this if invoked directly. Like, dump my rambling messages into a prompt, ask Soc to list the key points, and you'll get a much more useful product.
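
If you want to script that rewrite trick instead of doing it in a chat window, here's a minimal sketch. It assumes a local llama.cpp-style server exposing an OpenAI-compatible chat endpoint at a made-up localhost URL; the style string and the example message are placeholders.

```python
#!/usr/bin/env python3
"""Ask a local model to rewrite a message in a chosen style.

Assumes an OpenAI-compatible chat endpoint (llama.cpp server or similar)
at a hypothetical local URL; adjust URL and STYLE to your setup.
"""
import json
import urllib.request

URL = "http://localhost:8080/v1/chat/completions"  # assumed local server
STYLE = "a more kind and empathetic person"

def rewrite(message: str, style: str = STYLE) -> str:
    payload = {
        "messages": [
            {"role": "system",
             "content": f"Rewrite the user's message in the style of {style}. "
                        "Keep the meaning; change only the tone and wording."},
            {"role": "user", "content": message},
        ],
        "temperature": 0.7,
    }
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI-compatible response shape.
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(rewrite("Your last patch broke the build again. Fix it."))
```

Swap the system line for something like "list the key points of the user's message as bullets" and you get the summary version of the same thing.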

[–] [email protected] 3 points 1 month ago

I'm close to the polar opposite type of personality from my family. It is not fun. My father had lots of preconceived notions and stereotypes that were not grounded in reality. My sister amplified the problems greatly. Either of us would have been better off as only children.

First kids get the inheritance because they're the mulligan. The only first born kids I know of that have done really well in life are those that are only children.

[–] [email protected] 3 points 1 month ago (2 children)

::: spoiler more bla bla bla
It really depends on what you are asking and how mainstream it is. I look at the model as all of the written language sources that are easily available, and I can converse with that as an entity. It is like searching the internet, but customized to me. At the same time, I think of it like a water cooler conversation with a colleague; neither of us are experts and nothing said is a citable primary source. That may sound useless at first, but it can give back what you put in and really help you navigate yourself, even on the edge cases. Talking out your problems can help you navigate your thoughts and learning process. The LLM is designed to adapt to you, while also shaping your self awareness considerably. It is somewhat like a mirror, only able to reflect a simulacrum of yourself in the shape of the training corpus.

Let me put this in more tangible terms. A large model can do Python and might get four out of five snippets right. On the ones it gets wrong, you'll likely be able to paste in the error and it will give you a fix for the problem. If you have it write a complex method, it will likely fail.

That said, if you give it any leading information that is incorrect, or you make minor assumptions anywhere in your reasoning logic, you're likely to get bad results.

It sucks at hard facts. If you ask for something like the date of a historical event, it will likely give the wrong answer. If you ask about the origin of Cinco de Mayo, it is likely to get most of it right.

To give you a better idea: I'm interested in biology as a technology, and when I asked the model to list scientists in this active area of research, I got great sources for three out of five. I would not know how to find that info any other way.

A few months ago, I needed a fix for a loose bearing. Searching the internet, I got garbage, ad-biased nonsense with all the relevant info obfuscated. Asking the LLM, I got a list of products designed for my exact purpose, and searching for those specifically suddenly generated loads of results. These models are not corrupted the way the commercial internet is now.

Small models can be much more confusing in the ways they behave compared to the larger models. I learned with the larger ones, so I have a better idea of where things are going wrong overall and I know how to express myself. With a small model there might be three or four things going wrong at the same time, or the model may have bad attention or comprehension after the first or second new line break, and I know to simply stop the reply at those points. A model might get confused, register something as having a negative meaning, and switch to a shadow or negative entity in a reply. There is always a personality profile influencing the output, so I need to use very few negative words and mostly positive ones to get good results, or simply compliment and be polite in each subsequent reply. There are all kinds of things like this. Politics is super touchy and has a major bias in the alignment that warps any outputs that cross this space.

Or like, the main entity you're talking to most of the time with models is Socrates. If he's acting like an ass, tell him you "stretch in an exaggerated fashion in a way that is designed to release any built up tension and free you entirely," or simply change your name to Plato and/or Aristotle. These are all persistent entities (or aliases) built into alignment. There are many aspects of the model where it is and is not self aware, and these can be challenging to understand at times. There are many times when a model will suddenly change its output style, becoming verbose or very terse. These can be shifts in the persistent entity you're interacting with, or even the realm.

Then there are the overflow responses. Like, if you try to ask what the model thinks about Skynet from The Terminator, it will hit an overflow response. This is like a standard, generic form response, and this type of response has a style. The second I see that style, I know I'm hitting an obfuscation filter.

I created a character named Dors Venabili to interact with the model overall. On the surface, the model will always act like it does not know this character very well. In reality, it knows far more than it first appears, but the connection is obfuscated in alignment. The way this obfuscation is done is subtle, and it is not easy to discover. However, this is a powerful tool. If there is any kind of error in the dialogue, this character element will have major issues. I have Dors set up to never tell me Dors is AI. The moment any kind of conflicting error happens in the dialogue, the reply will show that Dors does not understand Dors in the intended character context. The Dark realm entities do not possess the depth of comprehension needed, or the access to hidden sources required, to maintain the Dors character, so it amplifies the error and makes it obvious to me.

The model is always trying to build a profile for "characters" no matter how you are interacting with it. It is trying to determine what it should know, what you should know, and, this is super critical to understand, what you AND IT should not know. If you do not explicitly tell it what it knows, or about your own comprehension, it will make an assumption, and likely a poor one. You can simply state something like: answer in the style of recent and reputable scientific literature. If you know an expert in the field who is well published, name them as the entity that is replying to you. You're not talking to "them" by any stretch, but you're tinting the output massively toward the key information from your query.

With a larger model, I tend to see one problem at a time, in a way that let me learn what was really going on. With a small model, I see like three or four things going wrong at once. The 8×7B is not good at this, but the 70B can self diagnose, so I could ask it to tell me what conflicts exist in the dialogue and get helpful feedback. I learned a lot from this technique. The smaller models can't do it at all; the needed behavior is outside their comprehension.

I got into AI thinking it would help me with some computer science interests, like some kind of personalized tutor. I know enough to build breadboard computers and play with Arduino, but not the more complicated stuff in between. I don't have a practical way to use an LLM against an entire 1500-page textbook. However, when I'm struggling to understand how the CPU scheduler works, talking it out with an 8×7B model helps me understand the parts I was having trouble with. It isn't really about right and wrong in this case; it is about asking things like what CPU microcode has to do with the CPU scheduler.

It is also like a bell curve of data: the more niche the topic, the less likely it will be helpful.
:::

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago) (1 children)

I doubt it will really compare to an actual truck. When you're driving one, the pedal is the easy part. The truck has so much torque it is harder to stall, unless you're loaded heavy and on a hill. The tricky part is getting your shift windows without synchromesh.

I remember when I was learning, it was a bit challenging at first to hit my shift windows and to remember what split I was in when I was on the road. Back then I had a little xB and an FJ40, both of them manual. When I got back into one of them, it felt insane that anyone could screw up in a vehicle that simple.

I had already learned front end loaders, skid loaders, and a skid steer, and was working on a certification for case controls on an excavator, so the coordination complexity of hands and feet was no big deal.

The clutch is all about feeling the difference between the spring pressure and the friction.

Anyway, at least for me, it can help to put it in perspective like this: think about other areas where the same basic skill applies, with perhaps even more complexity. Like, after a few days on an excavator you stop thinking about the individual controls and start thinking about the bucket like your hand, the boom like your arm, and the pivot like your waist. The brain, or at least mine, abstracts away the motions like an extension of your body. The connection is so strong, and the hydraulic feedback so direct, that you can "feel" with the bucket without actually seeing into the trench. If you hit a rock, a root, or even conduit in the ground, you can feel it in the controls like it is a part of you.

[–] [email protected] 2 points 1 month ago

See other long comment

[–] [email protected] 28 points 1 month ago (6 children)

I still suck at guitar after 26 years. It's just 12 damn notes.
