this post was submitted on 05 Jun 2024
94 points (100.0% liked)

Technology

[–] [email protected] 46 points 3 weeks ago (5 children)

I feel this is all just a scam to drive up the value of AI stocks. No one in the media seems to talk about the hallucination problem, the limited data for training new models (Habsburg-AI), the energy constraints, etc.

It's all uncritical belief that "AI" will just become smart eventually. This technology is built on hype and nothing more. There are limitations, and they have been reached.

[–] [email protected] 19 points 3 weeks ago

AI bros are just NFT bros with an actual product.

[–] [email protected] 10 points 3 weeks ago (1 children)

And these current LLMs aren't just gonna find sentience for themselves. Sure, they'll pass a Turing test, but they aren't alive lol

[–] [email protected] 14 points 3 weeks ago

I think the issue is not whether it's sentient or not, it's how much agency you give it to control stuff.

Even before the AI craze this was an issue. Imagine you built an automatic turret that kills living beings on sight: you would have to make sure to add a kill switch, or you yourself wouldn't be able to turn it off anymore without getting shot.

The scary part is that the more complex and adaptive these systems become, the more difficult it can be to stop them once they are in autonomous mode. I think large language models are just another step in that complexity.

An atomic bomb doesn't pass a Turing test, but it's a fucking scary thing nonetheless.

[–] [email protected] 7 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Habsburg-AI? Do you have any idea how much you made me laugh in real life with this expression??? It's just... perfect! Model degeneration is a lot like what happened with the Habsburg family's genetic pool.

When it comes to hallucinations in general, I've got another analogy: someone trying to drive nails with a screwdriver, failing, and calling it a hallucination. In other words, I don't think the models are misbehaving; they're behaving exactly as expected, and any "improvement" in this regard is basically a band-aid on a procedure that doesn't yield much useful output to begin with.

And that reinforces the point from your last paragraph - those people genuinely believe that, if you feed enough data into a L"L"M, it'll "magically" become smart. It won't, just like 70kg of bees won't "magically" think as well as a human being would. The underlying process is "dumb".
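You can even see the "Habsburg" degeneration in a toy model. This is just a hedged sketch, not a real training loop: the squaring-and-renormalizing step is an assumed stand-in for a model's bias toward its own most likely outputs when trained on its own generations. Diversity (entropy) drops every "generation" until the distribution collapses onto a single output:

```python
# Toy sketch of model collapse ("Habsburg-AI"): each generation is
# "trained" on the previous generation's outputs, with mode-seeking
# bias modeled as squaring the probabilities and renormalizing.
import math

def entropy(p):
    # Shannon entropy in bits; a rough proxy for output diversity.
    return -sum(x * math.log2(x) for x in p if x > 0)

def next_generation(p):
    # Bias toward the most likely outputs: square and renormalize.
    sharpened = [x * x for x in p]
    total = sum(sharpened)
    return [x / total for x in sharpened]

p = [0.4, 0.3, 0.2, 0.1]        # generation 0: a "diverse" model
entropies = [entropy(p)]
for _ in range(5):              # five generations of self-training
    p = next_generation(p)
    entropies.append(entropy(p))

# Entropy strictly decreases each generation; the final distribution
# is nearly one-hot on the original mode.
assert all(a > b for a, b in zip(entropies, entropies[1:]))
print(f"entropy: {entropies[0]:.2f} -> {entropies[-1]:.4f}, "
      f"max prob: {max(p):.4f}")
```

Real model collapse is messier than this, of course, but the direction is the same: recycling a model's own output narrows the distribution instead of adding information.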

[–] [email protected] 2 points 3 weeks ago

I am glad you liked it. Can't take the credit for this one, though; I first heard it from Ed Zitron on his podcast "Better Offline". Highly recommend.

[–] [email protected] 5 points 3 weeks ago

Energy restrictions could arguably be worked around using analog computing methods. Otherwise I agree completely, though: what's the point of spending energy on useless tools? There are so many great things AI is and can be used for, but of course, like anything exploitable, whatever is "for the people" becomes some amalgamation for extracting our dollars.

The funny part to me is that the "beautiful" AI cabins mentioned above are clearly fake. There's this weird dichotomy where people just don't care, or are too ignorant to notice the poor details, while at the same time so many generative AI tools are specifically being used to remove imperfections during editing. And that in itself is a shame. I'm definitely guilty of aiming for "the perfect composition", but sometimes nature and timing force your hand, which makes the piece ephemeral in a unique way. Shadows are going to exist; background subjects are going to exist.

The current state of marketed AI is selling the promise of perfection, something that's been sold for years already. It's just that now it's far easier to pump out scam material with these tools, and it gets easier with each advance in this sort of technology, now with environmental harm piled on top of the scams' direct victims.

It really sucks being an optimist sometimes.

[–] [email protected] 2 points 3 weeks ago

It could be only hype. But I don't entirely agree. Personally, I believe we are only a few years away from AGI. Will it come from OpenAI and LLMs? Maybe, but it will likely come from something completely different. Like it or not, we are within spitting distance of a true Artificial Intelligence, and it will shake the foundations of the world.