this post was submitted on 13 Sep 2023
Technology

Some argue that bots should be entitled to ingest any content they see, because people can.

[–] [email protected] 3 points 9 months ago* (last edited 9 months ago) (2 children)

Well, what an interesting question.

Let's look at the definitions in Wikipedia:

Sentience is the ability to experience feelings and sensations.

Experience refers to conscious events in general [...].

Feelings are subjective self-contained phenomenal experiences.

Alright, let's do a thought experiment under the assumptions that:

  • experience refers to the ability to retain information and apply it in some regard
  • phenomenal experiences can be described by some combination of sensory data
  • performance is irrelevant: to establish theoretical possibility, we only need to assume that, given infinite time and infinite resources, sentience could be simulated through AI

AI works by showing it what information goes in and what should come out; it then infers the same mapping for new patterns of information, adjusting itself by "how wrong it was" to approximate the correct answer. Every feeling in our body is either chemical or physical, so for simplicity's sake it can be measured and simulated through data input.
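As a minimal sketch of that "adjust by how wrong it was" loop (the single-weight model, learning rate, and example data here are invented for illustration, not any real system):

```python
# A toy model with one weight, trained by gradient descent: predict, measure
# the error ("how wrong it was"), and nudge the weight to shrink that error.
def train(pairs, lr=0.1, epochs=200):
    w = 0.0  # the model starts knowing nothing
    for _ in range(epochs):
        for x, target in pairs:
            pred = w * x
            error = pred - target   # how wrong it was
            w -= lr * error * x     # nudge the weight toward the answer
    return w

# Learn the pattern y = 2x from examples, then apply it to an unseen input.
w = train([(1, 2), (2, 4), (3, 6)])
print(round(w * 5))  # generalizes to a new input: 10
```

The point of the sketch is only the shape of the process: input goes in, output comes out, and the correction is proportional to the error.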

Let's also say, for our experiment, that the appropriate output is a description of the feeling.

Now, knowing this, and knowing how well different AIs can already comment on, summarize, or perform other transformative tasks on larger texts (all of which require them to interpret data), I think such a system should be able to "express" what it feels. Let's also conclude that everything needed to simulate a feeling or sensation can be described using different data points as inputs.
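A toy illustration of that assumption, that a feeling could be named from a combination of sensor readings (the sensor channels, values, and feeling labels below are all invented for the sketch):

```python
# Label a new sensor reading with the feeling of its nearest known example.
# Each key is a (heart_rate_norm, skin_temp_norm) pair; values are labels.
examples = {
    (0.9, 0.2): "excited",
    (0.3, 0.8): "calm",
    (0.7, 0.7): "content",
}

def describe(reading):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # find the known reading closest to the new one
    return min(examples, key=lambda known: dist(known, reading))

print(examples[describe((0.85, 0.25))])  # prints "excited"
```

Real sensory data would be vastly higher-dimensional, but the principle is the same: "phenomenal experience" reduced to a position in a data space.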

This brings me to my second conclusion: scientifically speaking, there is nothing about sentience that we couldn't already simulate (in light of our assumptions).

Bonus: my little experiment is only designed to show theoretical possibility, and we'd need some proper statistical calculations to know whether this is already practical within a realistic timeframe and with a limited amount of resources. But nothing says it can't be; I guess we have to wait for someone to try it to be sure.

[–] [email protected] 6 points 9 months ago* (last edited 9 months ago) (1 children)

At the moment, we have LLMs. When (if) we get to a point of a true thinking entity that can do more than parrot back a convincing puree of the (large) model (of things people say) it was trained on, then we can determine what access it should be allowed, but since we’re nowhere near that point and have no idea of the nature of the things we’ll create along the path to get there, sweeping declarations of its rights are premature.

[–] [email protected] 2 points 9 months ago* (last edited 9 months ago) (1 children)

Interesting. Please tell me how 'parroting back a convincing puree of the model it was trained on' is in any way different from what humans do.

[–] [email protected] 2 points 9 months ago (1 children)

And that is the point.

It sounds stupidly simple, but AI was itself born from the idea of learning and solving problems more like a human would: by learning how to solve similar problems and transferring that knowledge to new ones.
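That transfer idea can be sketched with the same kind of toy model (the tasks, numbers, and function here are invented for illustration): fit a model on one task, then reuse its learned weight as the starting point for a related task, so far less training is needed.

```python
# A one-weight linear model trained by gradient descent. Passing in a
# previously learned weight `w` is the "transfer": the new task starts
# from old knowledge instead of from scratch.
def fit(pairs, w=0.0, lr=0.1, epochs=100):
    for _ in range(epochs):
        for x, target in pairs:
            w -= lr * (w * x - target) * x  # correct by "how wrong it was"
    return w

w_a = fit([(1, 3), (2, 6)])                       # task A: y = 3x, from scratch
w_b = fit([(1, 3.1), (2, 6.2)], w=w_a, epochs=5)  # task B: warm start, few epochs
```

After only five epochs on task B, the warm-started model is already close to the new answer, which is the whole appeal of transferring knowledge between similar problems.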

Technically, there's an argument that our brain is nothing more than an AI with some special features (chemicals for feelings, reflexes, etc.). It's good to remind ourselves that we are nothing inherently special, although all of us are free to feel special, of course.

[–] [email protected] 1 points 9 months ago

But we make the laws, and have the privilege of making them pro-human. It may be important in the larger philosophical sense to meditate on the difference between AIs and human intelligence, but in the immediate term we have the problem that some people want AIs to be able to freely ingest and repeat what humans spent a lot of time collecting and authoring in copyrighted books. Often, without even paying for a copy of the book that was used to train the AI.

As humans, we can write the law to be pro-human and facilitate human creativity.