this post was submitted on 12 Jun 2024
1315 points (98.7% liked)

Memes

[–] [email protected] 0 points 3 weeks ago (2 children)

Well, most of the requests are handled on device

Doubt.

Voice recognition, image recognition, yes. But actual questions will go to Apple servers.

[–] [email protected] 13 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Doubt.

Is this conjecture, or can you provide some further reading, in the interest of not spreading misinformation?

Edit: I decided to read the info from Apple.

With Private Cloud Compute, Apple sets a new standard for privacy in AI, with the ability to flex and scale computational capacity between on-device processing, and larger, server-based models that run on dedicated Apple silicon servers. When requests are routed to Private Cloud Compute, data is not stored or made accessible to Apple and is only used to fulfill the user’s requests, and independent experts can verify this privacy.

Additionally, access to ChatGPT is integrated into Siri and systemwide Writing Tools across Apple’s platforms, allowing users to access its expertise — as well as its image- and document-understanding capabilities — without needing to jump between tools.

Say what you will about Apple, but privacy isn't a concern for me here. Perhaps some independent experts will verify this in time.

[–] [email protected] 3 points 3 weeks ago (1 children)

Which is exactly what I said. It's not local.

That they are keeping the data you send private is irrelevant to the OP claim that the AI model answering questions is local.

[–] [email protected] 3 points 3 weeks ago

The OP here being me.

Well, most of the requests are handled on device with their own models. If it’s going to ChatGPT for something it will ask for permission and then use ChatGPT.

I feel I was pretty explicit in explaining how some requests will go to ChatGPT.
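The flow described above (try the on-device model first, fall back to Private Cloud Compute, and only call ChatGPT after an explicit permission prompt) can be sketched as a simple routing function. This is purely illustrative: every name here (`route_request`, `can_handle_on_device`, `needs_chatgpt`) and both heuristics are invented for the sketch, not Apple's actual implementation.

```python
# Hypothetical sketch of the request-routing flow discussed in this thread.
# The routing heuristics are made up for illustration only.

def can_handle_on_device(request: str) -> bool:
    # Illustrative heuristic: short, simple requests stay local.
    return len(request.split()) < 8

def needs_chatgpt(request: str) -> bool:
    # Illustrative heuristic: open-ended "world knowledge" questions
    # are the kind of thing handed off to ChatGPT.
    return request.lower().startswith(("who", "what", "why"))

def route_request(request: str, user_grants_permission: bool) -> str:
    """Route a request per the described flow: on-device first,
    then Private Cloud Compute, then ChatGPT only with consent."""
    if can_handle_on_device(request):
        return "on-device model"
    if not needs_chatgpt(request):
        # Per Apple's description, data sent here is not stored
        # or made accessible to Apple.
        return "Private Cloud Compute"
    if user_grants_permission:
        return "ChatGPT"
    return "declined"

# Example: a short command stays local, a knowledge question
# only reaches ChatGPT if the user says yes.
print(route_request("Set a timer", True))
print(route_request("What is the capital of France and its history", False))
```

The key point of the sketch is that the ChatGPT branch is gated behind the permission check, which matches the claim that handoff to ChatGPT happens only after the user is asked.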

[–] [email protected] 3 points 3 weeks ago

Apple has already published papers on small LLMs and multimodal models. I would be surprised if they aren't using them for on-device processing.