[-] [email protected] 6 points 9 months ago

I've run into this in Debian. Not sure what to tell you -- the base repo does not have an explicit contract that everything in it uses the same version of all available software.

[-] [email protected] 11 points 9 months ago

Not really; package managers will download dependencies automatically, but they don't guarantee that each application resolves to the dependency version it actually needs. So upgrading libssl to satisfy one application can still break another.

[-] [email protected] 44 points 9 months ago* (last edited 9 months ago)

It benefits the end-user.

People do not want to be in dependency resolution hell, where they have three programs that all use different versions of libssl and are required to install all three properly and point each application at the correct one. Most users have no ability to resolve problems like that. By not bundling, the application developer forces them to either try anyway or simply not install the software.

Bundling dependencies with Flatpak or Snap helps the end user at the cost of only a few extra megabytes of space, which most users have in abundance anyway.
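The trade-off described above can be sketched in a few lines. This is a toy model, not how apt or Flatpak actually work internally, and all package names and version pins here are made up:

```python
# Toy model of flat (system-wide) vs. bundled dependency resolution.
# All application names and version requirements are hypothetical.

def resolve_flat(apps):
    """Pick ONE version of each library for the whole system.
    Fails when two apps pin incompatible versions (dependency hell)."""
    chosen = {}
    for app, deps in apps.items():
        for lib, version in deps.items():
            if lib in chosen and chosen[lib] != version:
                return None  # conflict: no single version satisfies both
            chosen[lib] = version
    return chosen

def resolve_bundled(apps):
    """Each app ships its own copy (Flatpak/Snap style): no conflicts,
    at the cost of duplicated libraries on disk."""
    return {app: dict(deps) for app, deps in apps.items()}

apps = {
    "web-browser": {"libssl": "3.0"},
    "mail-client": {"libssl": "1.1"},  # incompatible pin
}

print(resolve_flat(apps))     # None -> the shared approach breaks
print(resolve_bundled(apps))  # every app gets its own libssl
```

The few extra megabytes mentioned above are the duplicated copies in `resolve_bundled`; the `None` from `resolve_flat` is the hell the end user is being spared.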

[-] [email protected] 49 points 9 months ago

Also: "No diet I've ever tried works!"

[-] [email protected] 36 points 9 months ago

noo you don't understand it's going to the moon noooo hodl hodl hodl nooooooooo plz buy my crypto thank you :(

[-] [email protected] 80 points 9 months ago

I too want a post-scarcity luxury space communism utopia. Unfortunately most iterations of communism feel more like rearranging deck chairs on the Titanic than actually plugging the hole in the hull.

[-] [email protected] 1 points 9 months ago

It's not at all irrelevant. Even if you create a program to evolve Pong, that program was also designed by a human. As a computer programmer you should know that no computer program will just become Pong on its own; what an idiotic idea.

You just keep pivoting away from how you were using words to them meaning something entirely different; this entire argument is worthless. At least LLMs don't change the definitions of the words they use as they use them.

[-] [email protected] 1 points 9 months ago

I’m giving up here, but evolution did not “design” us. LLMs are designed, created with a purpose in mind, and they fulfill that purpose. Humans were not designed.

[-] [email protected] 1 points 9 months ago

If you truly believe humans are simply autocompletion engines then I just don't know what to tell you. I think most reasonable people would disagree with you.

Humans have actual thoughts and emotions; LLMs do not. The neural networks that LLMs use, while based conceptually in biological neural networks, are not biological neural networks. It is not a difference of complexity, but of kind.

Additionally, no matter how much statistical machinery, compute, or data you give an LLM, it will not develop cognition, because it is not designed to mimic cognition. It is designed to link words together. It does that and nothing more.

A dog is more sentient than an LLM in the same way that a human is more sentient than a toaster.

[-] [email protected] 1 points 9 months ago

LLMs do not "teach," and that is why learning from them is dangerous. They synthesize words and return other words, but they do not understand the content presented to them in any sense. Because of this, there is the chance that they are simply spouting bullshit.

Learn from them if you like, but remember they are absolutely no substitute for a human, and basically everything they tell you must be checked for correctness.

[-] [email protected] 1 points 9 months ago

Lol... come on. Your second source disagrees with your assertion:

Via all three analyses, we provide evidence that alleged emergent abilities evaporate with different metrics or with better statistics, and may not be a fundamental property of scaling AI models.

You are wrong and it is quite settled. Read more, including the very sources you're trying to recommend others read.

[-] [email protected] 1 points 9 months ago

The two kinds of loops you're conflating are totally different; saying that a computer executing a program and an animal living are the same thing is very silly indeed. Like, air currents have a "core loop" of blowing around a lot, but no one says that they're intelligent or that they're like computer programs or humans.

You’ve ignored my main complaint. I said that you treat LLMs and humans at different levels of abstraction:

No; you are analogizing them but losing sense of their differences in the process. I am not abstracting LLMs. That is all they do. That is what they were designed to do and what they accomplish.

You are drawing a comparison between a process humans have that generates consciousness, and literally the entirety of an LLM's existence. There is nothing else to an LLM. Whereas if you say "well a human is basically just bouncing electro-chemical signals between neurons and moving muscles" people (like me) would rightly say you were missing the forest for the trees.

The "trees" for an LLM are their neural networks and word vectors. The forest is a word prediction algorithm. There is no higher level to what they do.
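For what it's worth, the "word prediction" described above can be sketched in a few lines. A real LLM uses learned embeddings and attention rather than raw counts, but the task is the same: given the words so far, emit the statistically likely next one. The corpus here is made up:

```python
from collections import Counter, defaultdict

# Minimal next-word predictor (a bigram model): count which word
# tends to follow which. LLMs replace these counts with learned
# parameters, but the output is still just a likely next token.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" (seen twice after "the")
```

Nothing in this loop models the cat, the mat, or sitting; it models only which strings co-occur, which is the distinction being drawn above.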

0
submitted 1 year ago by [email protected] to c/[email protected]

I shared my fly.io Lemmy project to Hacker News and one of the first responses was “Lemmy is for tankies.”

This isn’t the first time I’d heard this either. In leaving Reddit and researching alternatives I’d heard it a fair amount.

I think it’s basically untrue; Beehaw doesn’t even federate with the big tankie servers. But seeing that seems to require some understanding of what Lemmy actually is.

So… what’s the best way to talk about it and/or get around this optics issue?

1
Mastodon? (lib.lgbt)
submitted 1 year ago by [email protected] to c/[email protected]

I'm pretty new to the whole Lemmy thing, but I figured I'd ask peoples' opinions on whether it's worth it to get into Mastodon too now that I'm officially a member of the Fediverse.

Is it active? Is it worth it? Have you had good experiences there?


Veraticus

joined 1 year ago