lvxferre

joined 5 months ago
[–] [email protected] 2 points 3 weeks ago (1 children)

I also apologise for the tone. That was a knee-jerk reaction on my part; my bad.

(In my own defence, I've been discussing this topic with tech bros, and they rather consistently invert the burden of proof, often enough to make me invoke Brandolini's Law. You probably know which "types" I'm talking about.)

On-topic. Given that "smart" is still an internal attribute of the black box, perhaps we could better gauge whether those models are likely to become an existential threat by looking at 1) what they output now, 2) what they might output in the future, and 3) what we [people] might do with it.

It's also easier to work with your example productively this way. Here's a counterpoint:


The prompt asked for eight legs, and only one of the four pics got it right; two ignored the count, and one shows ten legs. That's 25% accuracy.

I believe that the key difference between "your" unicorn and "my" eight-legged dragon lies in the training data. Unicorns are fictitious but common in popular culture, so there are lots of unicorn pictures to feed the model; eight-legged dragons are something I made up, so there's no direct reference, even if one could be logically combined from other references (say, a spider + a dragon).

So their output is strongly limited by the training data, and it doesn't seem to follow much logic of its own. What they might output in the future depends on what we feed them; their potential for decision-making is rather weak, as they wouldn't be able to deal with unpredictable situations, and so is their ability to go rogue.

[Note: I repeated the test with a horse instead of a dragon, within the same chat. The output was slightly less bad, which supports my hypothesis: pics of eight-legged horses do exist, thanks to Sleipnir.]

Neural nets

Neural networks are a different can of worms for me, as I think they'll outlive LLMs by a huge margin, even if current LLMs use them. However, how they'll be used is likely to be considerably different.

For example, current state-of-the-art LLMs are built with some "semantic" supplementation near the embedding, added almost like an afterthought. However, semantics should play a central role in the design of the transformer - because what matters is not the word itself, but what it conveys.

That would be considerably closer to a general intelligence than modern LLMs are - because you're effectively demoting language processing to input/output, which might as well be substituted with something else, like pictures. In that situation I believe the output would be far more accurate, and it could theoretically handle novel situations better. Then we could have some real concerns about AI being an existential threat - because people would use such an AI for decision-making, and it might output decisions that go terribly right, as in the "paperclip maximiser" thought experiment.
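To make the "language is just the input/output" point a bit more concrete, here's a minimal sketch (PyTorch, with made-up sizes; not how any particular model is actually built): tokens come in as IDs mapped to vectors, scores over tokens come out, and nothing in the middle is tied to language as such.

```python
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64   # made-up sizes, purely for illustration

embed = nn.Embedding(vocab_size, d_model)        # token IDs -> vectors
core = nn.TransformerEncoder(                    # the generic "middle"
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
unembed = nn.Linear(d_model, vocab_size)         # vectors -> token scores

tokens = torch.randint(0, vocab_size, (1, 10))   # a fake 10-token prompt
logits = unembed(core(embed(tokens)))            # next-token scores per position
print(logits.shape)                              # torch.Size([1, 10, 1000])
```

Swap the embed/unembed ends for image patches or audio frames and the core stays the same, which is the sense in which language processing gets "demoted" to I/O.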

The fact that we don't see developments in this direction yet shows, for me, that it's easier said than done, and we're really far from that.

[–] [email protected] 2 points 3 weeks ago (3 children)

Chinese room, called it. Just with a dog instead.

The Chinese room experiment is about the internal process - whether the machine thinks or merely simulates, whether it knows or not - for a machine that already passes the Turing test. My example clearly doesn't bother with any of that; what matters here is the ability to perform the goal task.

As such, no, my example is not the Chinese room. I'm highlighting something else: that the dog will keep making spurious associations, and those will affect the outcome. Is this clear now?

Why this matters: on the topic of existential threat, it's pretty much irrelevant whether the AI in question "thinks" or not. What matters is its usage in situations where it would be "deciding" something.

I have this debate so often, I’m going to try something a bit different. Why don’t we start by laying down how LLMs do work. If you had to explain as full as you could the algorithm we’re talking about, how would you do it?

Why don't we do the following instead: I'll play along with your inversion of the burden of proof once you show how it would be relevant to your implicit claim that AI [will|might] become an existential threat (from "[AI is] Not yet [an existential threat], anyway").


Also worth noting that you outright ignored the main claim outside the spoilers tag.

[–] [email protected] 2 points 3 weeks ago (5 children)

I don't think that a different training scheme or integrating it with already existing algos would be enough. You'd need a structural change.

I'll use a silly illustration for that; it's somewhat long so I'll put it inside spoilers. (Feel free to ignore it though - it's just an illustration, the main claim is outside the spoilers tag.)

The Mad Librarian and the Good Boi

Let's say that you're a librarian. And you have lots of books to sort out. So you want to teach a dog to sort books for you. Starting with sci-fi and geography books.

So you set up the training environment: a table with a sci-fi book and a geography book. And you give your dog a treat every time he puts the ball over the sci-fi book.

At the start, the dog doesn't do it. But then, as you train him, he's able to do it perfectly. Great! Does the dog now recognise sci-fi and geography books? You test this by switching the placement of the books and asking the dog to perform the same task; now he's putting the ball over the geography book. Nope - he doesn't know how to tell sci-fi and geography books apart; you were "leaking" the answer through the placement of the books.

Now you repeat the training with random positions for the books. Eventually, after a lot of training, the dog is able to put the ball over the sci-fi book regardless of position. Now the dog recognises sci-fi books, right? Nope - he's identifying the books by smell.

To fix that you try again, with new copies of the books. Now he's going by colour: the geography book has the same grey/purple hue as grass (from a dog's PoV), and the sci-fi book is black like the neighbour's cat. The dog would happily put the ball over the neighbour's cat and ask "where's my treat, human???" if the cat allowed it.

Needs more books. You assemble a plethora of geo and sci-fi books. Since sci-fi covers typically tend to be dark and the geo books tend to have nature on their covers, the dog is able to place the ball over the sci-fi books 70% of the time. Eventually you give up and say that the 30% error is the dog "hallucinating".

We might argue that, by now, the dog should be "just a step away" from recognising books by topic. But we're just fooling ourselves: the dog is finding a bunch of orthogonal (like the smell) and diagonal (like the colour) patterns. What the dog is doing is still somewhat useful, but it won't go much past that.

And even if you and the dog lived forever (denying St. Peter the chance to tell him "you weren't a good boy. You were the best boy."), and spent most of your time on that training routine, his little brain wouldn't be able to create the associations necessary to actually identify a book by its topic, i.e. by its content.
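(For the ML-inclined: the dog's failure mode has a technical name, shortcut learning on spurious correlations. Here's a toy sketch of it, with made-up "book" features and scikit-learn; the classifier aces training by latching onto cover darkness, then collapses once darkness stops tracking the topic.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

def make_books(spurious_corr):
    topic = rng.integers(0, 2, n)            # 0 = geography, 1 = sci-fi
    # "cover darkness" matches the topic with probability spurious_corr
    darkness = np.where(rng.random(n) < spurious_corr, topic, 1 - topic)
    noise = rng.random((n, 3))               # irrelevant features
    return np.column_stack([darkness, noise]), topic

X_train, y_train = make_books(0.95)          # training: darkness tracks topic
X_test, y_test = make_books(0.50)            # test: darkness is useless

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))   # ~0.95 - looks smart
print("test accuracy:", clf.score(X_test, y_test))      # ~0.5 - coin flip
```

The model isn't "misbehaving" during training; it found a real pattern in the data it was given, just not the one we cared about. That's the dog and the neighbour's cat.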

I think that what happens with LLMs is a lot like that. With a key difference: dogs are considerably smarter than even state-of-the-art LLMs, even if they're unable to speak.

At the end of the day LLMs are complex algorithms associating pieces of words, based on statistical inference. This is useful, and you might even see some emergent behaviour - but they don't "know" stuff, and this is trivial to show, as they fail to perform simple logic even with pieces of info that they're able to reliably output. Different training and/or a different algo might change the info they output, but it won't "magically" take them past that.
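If "associating pieces of words based on statistical inference" sounds abstract, here's a deliberately tiny sketch of the idea - a bigram counter. Real LLMs use neural nets over long contexts and subword tokens, so this illustrates the objective, not the machinery.

```python
from collections import Counter, defaultdict

corpus = "the dog chased the cat . the cat chased the ball .".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))      # 'cat' - the most frequent follower
print(most_likely_next("chased"))   # 'the'
```

The association is over surface statistics; there's no representation of what a dog or a ball is, no matter how many counts you pile up.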

[–] [email protected] 2 points 3 weeks ago (7 children)

I'm reading your comment as "[AI is] Not yet [an existential threat], anyway". If that's inaccurate, please clarify, OK?

With that reading in mind: I don't think that the current developments in machine "learning" lead towards some hypothetical system that would be an existential threat. The closest to that would be the subset of generative models, which looks like a tech dead end - sure, it might see some applications, but I don't think it'll progress much past the current state.

In other words I believe that the AI that would be an existential threat would be nothing like what's being created and overhyped now.

[–] [email protected] 2 points 3 weeks ago* (last edited 3 weeks ago)

Yup, it is a real risk. But on a lighter side, it's a risk that we [humanity] have been fighting against since forever - the possibility of some of us causing harm to the others not due to malice, but out of assumptiveness and similar character flaws. (In this case: "I assume that the AI is reliable enough for this task.")

[–] [email protected] 7 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Habsburg-AI? Do you have any idea how much you made me laugh in real life with this expression??? It's just... perfect! Model degeneration is a lot like what happened with the Habsburg family's genetic pool.

When it comes to hallucinations in general, I've got another analogy: someone trying to drive nails with a screwdriver, failing, and calling the failure a hallucination. In other words, I don't think that the models are misbehaving; they're behaving exactly as expected, and any "improvement" in this regard is basically a band-aid added by humans to a procedure that doesn't yield a lot of useful output to begin with.

And that reinforces the point from your last paragraph: those people genuinely believe that, if you feed enough data into a L"L"M, it'll "magically" become smart. It won't, just like 70 kg of bees won't "magically" think as well as a human being would. The underlying process is "dumb".

[–] [email protected] 42 points 3 weeks ago* (last edited 3 weeks ago) (11 children)

May I be blunt? I estimate that 70% of all OpenAI and 70% of all "insiders" are full of crap.

What people are calling nowadays "AI" is not a magic solution for everything. It is not an existential threat either. The main risks that I see associated with it are:

  1. Assumptive people taking LLM output for granted, with disastrous outcomes. Think "yes, you can safely mix bleach and ammonia" tier (note: made-up example).
  2. Supply and demand. Generative models have awful output, but sometimes "awful" = "good enough".
  3. Heavy increase in energy and resources consumption.

None of those issues was created by machine "learning"; it just synergises with them.

[–] [email protected] 31 points 4 weeks ago (2 children)

The problem is considerably smaller once you consider that the software is used by a lot more people than English speakers (whether L1 or L2+). For those people, "gimp" is not some sex thing, but rather that critter chewing on a brush. And even for L2+ speakers, the word "gimp" is often missing from our vocabs.

As others said in this thread, the actual problem holding GIMP back is the user interface. It has improved, but it's still awful.

[–] [email protected] 4 points 4 weeks ago* (last edited 4 weeks ago)

I've been thinking about the origin of this sentence (Aristotle's Nicomachean Ethics, book II) for a bit. I'll copy the relevant excerpt:

That moral virtue is a mean, then, and in what sense it is so, and that it is a mean between two vices, the one involving excess, the other deficiency, and that it is such because its character is to aim at what is intermediate in passions and in actions, has been sufficiently stated. Hence also it is no easy task to be good. For in everything it is no easy task to find the middle, e.g. to find the middle of a circle is not for every one but for him who knows; so, too, any one can get angry- that is easy- or give or spend money; but to do this to the right person, to the right extent, at the right time, with the right motive, and in the right way, that is not for every one, nor is it easy; wherefore goodness is both rare and laudable and noble.

I might not agree with his "middle ground" reasoning (I think that it's simplistic) but I agree with his conclusion - to express anger can be good as long as you do it without misdirecting it, overdoing it, doing it when it doesn't matter, doing it for spurious reasons, or doing it non-constructively.

[–] [email protected] 3 points 1 month ago

Sorry beforehand for the long reply.

Initially, one of .ml's admins (who's also a Lemmy developer) manually excluded ani.social from the list of instances on the join-lemmy site, and defederated it from .ml. When asked to revert the change, he falsely claimed that the instance was "full of CSAM". Eventually, the other .ml admin + Lemmy dev reviewed the "evidence" brought by the first one, concluded "there's no CSAM here", and reverted that change.

They kept ani.social defederated, but that's fine - .ml is strictly SFW, there's some NSFW content in ani.social, so it's consistent.

Some time went by, and a user created a thread about "Mahou Shoujo something" in the !anime community on .ml. I don't like that series, but more importantly it is NSFW, so the discussion was removed by a third .ml admin (not a dev).

Then we (a few users, incl. me) started discussing the eventual migration of the comm to ani.social; since we knew that issues like this would keep happening, it was the best outcome for both sides. Meanwhile, the first and third admins kept finding low-hanging fruit to wreck the discussion across multiple threads, such as claims that it "links to a pedo instance" or that people were "doxxing" the admins. Those claims are blatantly and knowingly false, because:

  • ani.social was linked in the sidebar of !anime@lemmy.ml for ages, and the local admins never bothered with it. But "suddenly" it becomes an issue, right when people are discussing the migration of a comm to another instance?
  • One of the people discussing the migration brought the contradiction above to the admins' attention. And yet the link stayed there, even though the admins were in a position to change it. Showing that, no, the real issue that prompted the removal of the discussion was not linking to ani.social, but the discussion about emigrating from their instance.
  • At no point did the people discussing the admin actions refer to personally identifiable information, like "you're John Smith"; we solely associated the administrative actions with the usernames. And that was done in a neutral tone, with zero harassment as far as I know. (Relevant tidbit: both admins clearly use pseudonyms.)
  • To add insult to injury, the third admin in question was grasping at straws to defend the necessity of an anime community in an instance about open source and privacy, in a way not too unlike spez' "I'm one of you! We snoos stand together!" babble.

From a public PoV, the matter ends here: you have the .ml admin team enforcing hidden rules and treating users like cattle to be herded. From my PoV, it gets worse.

I used to moderate a large-ish comm there, called !snoocalypse, about Reddit's downfall. In that comm, users (including me, the mod) were consistently saying stuff like "Steve Huffman the greedy pigboy". And at no point did the .ml admins take action against it, or even contact me to say "hey mod, don't let your users do that".

So, naming someone by their RL name to call him a "greedy pigboy" is not doxxing. But stating which admin took which action by their username, in a neutral way, is suddenly doxxing??? And there's no way that the admins never saw it, because they were often removing content there.

Of course, the content that they were removing was of another nature: posts criticising either the Russian Federation or the People's Republic of China, typically under the allegation that they violated rules #1 and #2 (basically: bigotry and making people feel unwelcome, or something like that).

Don't get me wrong, my issue is not that they were removing that criticism. I probably wouldn't bat an eye if they had some written rule like "don't criticise the RF or the PRC here"; I do criticise both, but I'd see such a rule as within their rights. My issue is distorting what other users say so it fits the listed rules, in order to enforce some rule that isn't listed. That is literally Reddit-admins-tier behaviour.

[–] [email protected] 24 points 1 month ago

I agree with the move; it reduces the unnecessary waste of time, space, and material. While some things should have physical copies, not everything needs to.

Regarding the "AI" part: the author is simply highlighting that BRD is sticking to really old technology, in a world going further steps beyond. Don't think too hard on that.

[–] [email protected] 4 points 1 month ago

Frankly I also like the original better. It seems more reasonable, less like "it's impossible" and more like "it's really hard".
