Long Cow is coming (lemmy.world)
submitted 2 weeks ago by [email protected] to c/[email protected]
[-] [email protected] 74 points 2 weeks ago

The danger isn't that it's smart, the danger is that it's stupid.

[-] [email protected] 41 points 2 weeks ago

There's an idea about "autistic AI" or something, where you give an AI an objective like "get a person from point A to B as fast as you can" and the AI goes so fast the g-force kills the person, but the AI counts it as a success because you never told it to keep the person alive.

Though I suppose that's more human error: something we take as a given, but a machine will not.
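The misspecified objective above can be sketched as a toy planner (all names, plans, and numbers here are made up for illustration):

```python
# A planner scores candidate trajectories purely by speed, as literally
# instructed, so it happily picks one whose acceleration would be lethal --
# unless the constraint the humans "took as a given" is stated explicitly.

# Candidate plans: (name, average speed in m/s, peak acceleration in g)
plans = [
    ("gentle cruise", 30, 0.3),
    ("sporty run", 80, 2.0),
    ("rail-gun shot", 900, 150.0),
]

MAX_SURVIVABLE_G = 9  # rough assumption; real limits depend on duration etc.

def best_plan(plans, keep_passenger_alive=False):
    candidates = plans
    if keep_passenger_alive:
        candidates = [p for p in plans if p[2] <= MAX_SURVIVABLE_G]
    # The objective as literally stated: maximize speed, nothing else.
    return max(candidates, key=lambda p: p[1])

print(best_plan(plans))                             # ('rail-gun shot', 900, 150.0)
print(best_plan(plans, keep_passenger_alive=True))  # ('sporty run', 80, 2.0)
```

The optimizer isn't malicious in either case; it just maximizes exactly what it was given, and the safety constraint only matters if someone remembered to write it down.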

[-] [email protected] 14 points 2 weeks ago

Here's the thing: what they keep calling 'AI' isn't really 'artificial intelligence' at all. It's just language processing on a large scale. This type of software has no actual cognitive capability; it can't 'think', it has no capacity to 'think' at all, but they've written it so it gives the appearance of 'thinking'; it's a trick, it's fake.

[-] [email protected] 21 points 2 weeks ago

That's specifically LLMs. Image recognition like OP's has nothing to do with language processing. Then there's generative AI, which needs some kind of mapping between prompts and weights, but is also a completely different type of "AI".

That doesn't mean any of these "AI" products can think, but don't conflate LLMs with AI as a whole.

[-] [email protected] 5 points 2 weeks ago
[-] [email protected] 8 points 2 weeks ago

Your brain is also "just a Chinese room". It's just physics, chemistry, and biology. There is no magic inside your brain. If a "Chinese room" is fast enough and can fool everyone into believing that it's fluent in Chinese, then the room speaks Chinese.

[-] [email protected] 3 points 2 weeks ago

This fails to engage with the thought experiment. The question isn't whether "the room is fluent in Chinese." It is whether the machine learning model is actually comparable to the person in the room, executing program instructions to turn input into output without ever understanding anything about the input or output.

[-] [email protected] 2 points 2 weeks ago* (last edited 2 weeks ago)

The same is true for your brain. Show me the neurons that are fluent in Chinese. Of course the LLM is just executing code. And if we get AGI, it will also just be "executing code", but so does your brain. It's not exactly code (and maybe AGI will run on analog computers, so not exactly code either), but the laws of physics dictate what your brain does. The laws of physics don't understand Chinese; the atoms and molecules don't understand Chinese. "Understanding Chinese" is an emergent property.

Think about it this way: assume every person you know (except you) is just some form of Chinese room... You couldn't prove that, first of all, and second, it wouldn't matter at all.

[-] [email protected] 0 points 2 weeks ago

We aren't trying to establish that neurons are conscious. The thought experiment presupposes that there is a consciousness, something capable of understanding, in the room. But there is no understanding because of the circumstances of the room. This demonstrates that the appearance of understanding cannot confirm the presence of understanding. The thought experiment can't be formulated without a prior concept of what it means for a human consciousness to understand something, so I'm not sure it makes sense to say a human mind "is a Chinese room." Anyway, the fact that a human mind can understand anything is established by completely different lines of thought.

[-] [email protected] 1 points 2 weeks ago

The problem here is that intelligence is a beetle

[-] [email protected] 2 points 2 weeks ago

How can you know the system has no cognitive capability? We haven't solved that problem for our own minds; we have no definition of what consciousness is. For all we know, we might be multimodal LLMs ourselves.

[-] [email protected] -1 points 2 weeks ago

If we can't even begin to understand how a biological brain like ours produces the phenomenon of 'thought' and 'consciousness', then how the fuck can you build machines and write software that does those things? Rhetorical question, we can't, full stop. All we've got is fakery, the illusion of 'thinking', ersatz, not the real thing.

For fuck's sake, I go round and round with people on this shit every fucking time, because everyone believes the hype and is never told the facts. They watch TV shows and movies and think someone made that real. They take for granted what their brains can do naturally and effortlessly (..well, not so effortlessly in too many people's cases), and, knowing nothing about software or hardware, think it's trivial to make machines that can do what their own brains can do. It. Is. Not.

[-] [email protected] -2 points 2 weeks ago* (last edited 2 weeks ago)

Language processing is a cognitive capability. You're just saying it's not AI because it isn't as smart as HAL 9000 and Cortana. You're getting your understanding of computer science from movies and video games.

[-] [email protected] 0 points 2 weeks ago

No, moron, I'm NOT. Go talk to neuroscientists; that's what I did. They'll tell you: an amoeba has more cognitive capability than the best of this crapware.

YOU get your """AI""" information from media hype, which gets it from AI company marketing departments, who are told: "Sell this crap we created so we can get paid".

You're dumb. You're so dumb that you can't understand when someone who is actually smart tells you something, so you think they're dumb. Get yourself a dog, name it 'Clue', so you'll always have one.

[-] [email protected] 12 points 2 weeks ago

It's called the AI alignment problem. It's fascinating; if you want to dig deeper into the subject, I highly recommend the 'Robert Miles AI Safety' channel on YouTube.

[-] [email protected] 3 points 2 weeks ago

Computers do what people tell them to do, not what people want.

[-] [email protected] 3 points 2 weeks ago

I read about a military AI that would put its objectives before anything else (like casualties) and do things like select nuclear strikes for all missions that involved destruction of targets. So they adjusted it to allow a human operator to veto strategies, in the simulation this was done via a communications tower. The AI apparently figured out that it could pick the strategy it wanted without veto if it just destroyed the communications tower before it made that selection.

Though take it with a grain of salt, because the military denied the story was accurate. Which could mean it wasn't true, or it could mean they didn't want the public to believe it was true. It does sound a bit too human-like to pass my sniff test (an AI wouldn't really care that its strategies get vetoed), but it's an amusing anecdote.

[-] [email protected] 2 points 2 weeks ago

The military: it didn't destroy the tower, it jammed the comms!

[-] [email protected] 2 points 2 weeks ago

ai thinks

AIs are mathematical calculations. If you ordered that execution, are you responsible for the death? It happened because you didn't write the instructions well enough; test check against that which doesn't throw life on the scale; or maybe that's just the cheeky excuse to be used when people start dying before enough haven't done so that no one is left. A.S. may do it, if you're lucky. Doesn't matter. It'll just bump over from any of its thousand T-ultiverses.

[-] [email protected] 11 points 2 weeks ago* (last edited 2 weeks ago)

The danger isn't that it's smart, the danger is that humans are stupid.

FTFY

[-] [email protected] 3 points 2 weeks ago

Or more precisely: the danger is that people think it's smart.

this post was submitted on 11 Jun 2024
879 points (98.8% liked)
