coolin

joined 1 year ago
[–] [email protected] 0 points 10 months ago (3 children)

Only thing really missing is Wallet and NFC support. Other than that I think Graphene and Lineage OS cover it all

[–] [email protected] 2 points 10 months ago (1 children)

Yeah there's no way a viable Linux phone could be made without the ability to run Android apps.

I think we're probably at least a few years away from being able to daily drive Linux on modern phones with functioning things like NFC payments and a decent native app collection. It's definitely coming but it has far less momentum than even the Linux desktop does.

[–] [email protected] 2 points 11 months ago

Smh my head, Linux is too mainstream now!!! How will I be a cool hacker boy away from society if everyone else uses it!!!!!!!

[–] [email protected] 2 points 11 months ago

I think this is downplaying what LLMs do. Yeah, they're not the best at doing things in general, but the fact that they were able to learn the structure and semantic context of language is quite impressive, even if they don't know what the words converted into tokens actually mean. I suspect we'll be able to use LLMs as one part of a full digital "brain", with some model similar to our own prefrontal cortex calling the LLM (and other things like a vision model, sound model, etc.) and using their output to reason about a task and take an action. That's where I think the hype will be validated: when you put all these parts we've been working on together and Frankenstein a new and actually intelligent system.
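The "prefrontal cortex calling specialist models" idea above could be sketched roughly like this. Everything here is a hypothetical stand-in (the class names, methods, and returned strings are invented for illustration), not a real API:

```python
from typing import Optional


class LLMModel:
    """Stand-in for a language model; a real one would call an inference API."""

    def generate(self, prompt: str) -> str:
        return f"llm-response({prompt})"


class VisionModel:
    """Stand-in for a vision model that describes an image."""

    def describe(self, image_id: str) -> str:
        return f"scene-description({image_id})"


class Orchestrator:
    """Plays the 'prefrontal cortex' role: gathers context from the
    specialist models, then asks the LLM to reason about the task."""

    def __init__(self) -> None:
        self.llm = LLMModel()
        self.vision = VisionModel()

    def handle(self, task: str, image_id: Optional[str] = None) -> str:
        context = ""
        if image_id is not None:
            # Fold the vision model's output into the LLM prompt.
            context = self.vision.describe(image_id)
        prompt = f"{context} {task}".strip()
        return self.llm.generate(prompt)


if __name__ == "__main__":
    agent = Orchestrator()
    print(agent.handle("what should I do next?", image_id="frame42"))
```

The point isn't the plumbing; it's that the LLM is just one component, and the routing/combining layer is where the "reasoning about a task" would live.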

[–] [email protected] 0 points 1 year ago

I can't think of a time he's said any slur, but if there's a particular video, I'd be interested to see it.

[–] [email protected] 0 points 1 year ago (2 children)

This isn't an actual problem. Can you train on post-ChatGPT internet text? No, but you can train on the pre-ChatGPT common crawls, on the millions of conversations people have with the models, and on audio, video, and images. As we improve training techniques and model architectures, we'll need even less of this data to train even more performant models.