[-] [email protected] 15 points 1 day ago

The ISS is aging, and for safety’s sake, NASA intends to incinerate the immense facility around 2031. To accomplish the job, the agency will pay SpaceX up to $843 million, according to a statement released on June 26.

See you guys in 2040

[-] [email protected] 19 points 1 day ago

Let them Fight

[-] [email protected] 4 points 4 days ago

Those cost efficiencies are also at the expense of the Chinese government. The massive investment is all part of their green revolution policy package.

It's why Solar cells are also incredibly cheap to produce in China, and why they're also mostly sold in China.

[-] [email protected] 11 points 4 days ago

The OSI just published a result of some of the discussions around their upcoming Open Source AI Definition. It seems like a good idea to read it and see some of the issues they're trying to work through...

https://opensource.org/blog/explaining-the-concept-of-data-information

[-] [email protected] 1 points 1 week ago

Yes, of course. There's nothing gestalt about model training: fixed inputs result in fixed outputs.

[-] [email protected] 8 points 2 weeks ago

I suppose the importance of the openness of the training data depends on your view of what a model is doing.

If you feel like a model is more like a media file that model loaders play back, where the prompt is just a type of control over how you access the model, then yes, I suppose from a trustworthiness standpoint there's not much value in the model's training corpus being open.

I see models more in terms of how any other text encoder or serializer works, as if you were, say, manually encoding text. While there's a very low chance of any "malicious code" being executed, what matters is that you can check your expectations about how your inputs are being encoded against what the provider is telling you.

As an example attack vector, much like a malicious-replacement attack on anything else: if I were to download a pre-trained model from what I thought was a reputable source, but was man-in-the-middled and served a maliciously trained model instead, then suddenly the system relying on that model is compromised in terms of its expected text output. Obviously that exact problem could be mitigated with some hash checking, but I hope you see that in some cases even that wouldn't be enough. (Such as a malicious "official" provenance.)
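The hash-checking mitigation mentioned above looks roughly like this; a minimal sketch, with the file path and expected digest obviously being placeholders (and the digest must come from a channel separate from the download itself, or the same MITM can swap both):

```python
import hashlib
import hmac

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-GB model files never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: str, expected_digest: str) -> bool:
    """Return True only if the file's digest matches the out-of-band published one."""
    # compare_digest avoids timing side channels; overkill here, but a good habit.
    return hmac.compare_digest(sha256_of(path), expected_digest.lower())
```

Of course, as the comment says, this only tells you the file is the one the publisher signed off on. If the publisher themselves shipped a maliciously trained model, the hashes match and you're none the wiser.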

As these models become more prevalent, being able to guarantee integrity will become more and more of an issue.

[-] [email protected] 3 points 2 weeks ago

I've seen this said multiple times, but I'm not sure where the idea that model training is inherently non-deterministic is coming from. I've trained a few very tiny models deterministically before...

[-] [email protected] 3 points 2 weeks ago

I'm not sure where you get that idea. Model training isn't inherently non-deterministic. Making fully reproducible models is apparently LLM360's entire modus operandi.
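At toy scale the point is easy to demonstrate: if every source of randomness (init and data order) comes from one seeded RNG, two runs produce bit-identical weights. A sketch in plain Python, no framework assumed:

```python
import random

def train_linear(seed: int, steps: int = 1000, lr: float = 0.05) -> tuple[float, float]:
    """Fit y = 2x + 1 with SGD; the same seed yields bit-identical parameters."""
    rng = random.Random(seed)          # the ONLY source of randomness in the run
    w, b = rng.random(), rng.random()  # seeded initialization
    for _ in range(steps):
        x = rng.uniform(-1.0, 1.0)     # "training data" drawn deterministically
        y = 2.0 * x + 1.0
        err = (w * x + b) - y
        w -= lr * err * x              # gradient of 0.5 * err**2 w.r.t. w
        b -= lr * err                  # gradient w.r.t. b
    return w, b

# Identical seeds reproduce the exact same trained model:
assert train_linear(42) == train_linear(42)
```

Real GPU training adds wrinkles (non-deterministic kernels, parallel reduction order), but those are engineering choices you can pin down, not something inherent to training.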

[-] [email protected] 61 points 2 weeks ago

There are VERY FEW fully open LLMs. Most are the equivalent of source-available licensing, and at best they're only partially open source, because all they provide you with is the pretrained model.

To be fully open source they need to publish both the model and the training data. The point is being "fully reproducible", which is what makes the model trustworthy.

In that vein there's at least one project that's turning out great so far:

https://www.llm360.ai/

[-] [email protected] 33 points 2 weeks ago

Holy crap there are still working nitter instances? God bless

[-] [email protected] 7 points 3 weeks ago

You could try Guix! It's ostensibly source-based, but you can use precompiled binaries as well (via the substitute system).

It's a source-first functional package distro like Nix, but it uses Scheme to define everything from the packages to the way the init system (Shepherd) works.

It's very different from other distros, but between being functional, source-first, and having Shepherd, I personally love it.

[-] [email protected] 23 points 3 weeks ago

Thinking about Wernher von Braun and Peenemünde... yeah, I'm not sure anyone has a leg to stand on when it comes to "stolen" technology and space.


WalnutLum

joined 6 months ago