this post was submitted on 13 Jun 2024
67 points (83.8% liked)

Open Source

top 18 comments
[–] [email protected] 14 points 2 weeks ago (1 children)

OSS, local LLM, SearXNG. I likey, is there a demo? SearXNG via VPN has helped unshittify my search, but GIGO still applies.

[–] [email protected] 2 points 2 weeks ago (2 children)

Dumb question, but why do you need a VPN to use SearXNG?

[–] [email protected] 4 points 2 weeks ago

You don't. I like it because it minimizes profiling by the component search engines, and gluetun is right there: just point SearXNG at the proxy. I still get reasonably localized results by choosing a nearby exit node.
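
In case it helps anyone set this up, here's a minimal sketch of the "point SearXNG at the proxy" part. It assumes gluetun's optional HTTP proxy is enabled (it listens on port 8888) and that the container is reachable under the hostname gluetun; both are assumptions about your particular setup, so check the gluetun and SearXNG docs before copying anything.

```yaml
# SearXNG settings.yml fragment (sketch): route outgoing engine requests
# through the VPN container's HTTP proxy instead of hitting engines directly.
outgoing:
  proxies:
    all://:
      - http://gluetun:8888   # assumed gluetun HTTP proxy host/port
```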

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I mean, you probably don't. But any particular instance CAN be collecting IPs and selling them. Or their security could be compromised.

This is without me knowing a ton about this specific piece of software.

Aside from that, I don't toggle my VPN on and off on a per-site basis.

[–] [email protected] 2 points 2 weeks ago

Ok, now I understand what OP meant.

However, I use my own SearXNG instance, so I guess I never thought about it that way.

[–] [email protected] 10 points 2 weeks ago (1 children)

Google uses AI, everyone hates it.

Some upstart uses AI search, and everyone is like woooowowow???

[–] [email protected] 14 points 2 weeks ago (1 children)

LLMs are not necessarily evil. This project seems to be free and open source, and it lets you run everything locally. Obviously that doesn't solve everything (e.g., the environmental impact of training, systemic bias learned from datasets, and the fact that the weights themselves are usually derived from questionably collected data), but it seems worth keeping an eye on.

Google uses AI, everyone hates it.

Because Google has a long history of doing the worst shit imaginable with technology immediately. Google (and other corporations) must be viewed with extra suspicion compared to any other group or individual because they are known to be the worst and most likely people to abuse technology.

If Google does literally anything, it sucks by default, and it's going to take a lot more proof to convince me otherwise for any given Google product. The same goes for Meta, Apple, and any other corporation.

[–] [email protected] 6 points 2 weeks ago

The main complaints about Google were LLM maturity, bias, and other factors. The same things will be true for any LLM.

[–] [email protected] 6 points 2 weeks ago (1 children)

Better not tell Perplexity about this.

[–] [email protected] 2 points 2 weeks ago (1 children)
[–] [email protected] 2 points 2 weeks ago

I haven't tried it, but having tried Perplexity, I can say it's difficult to be worse than that!

[–] [email protected] 5 points 2 weeks ago (2 children)
[–] [email protected] 8 points 2 weeks ago

I've used it; it's pretty rough and unfinished. The current main branch doesn't build without help, and you'll need Ollama or OpenAI keys.

The results, however, are impressive, even with a small model like phi3 mini through Ollama. They've got some good prompts behind it, and the results name their sources and come with some good follow-up questions.
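
For anyone curious what "through Ollama" looks like in practice, here's a rough sketch of querying a locally running Ollama server with phi3:mini from Python. It assumes you've already pulled the model, and the prompt is purely illustrative, not what this project actually sends.

```python
import requests

# Sketch: ask a local Ollama server (default port 11434) to answer with the
# phi3:mini model mentioned above. Assumes `ollama pull phi3:mini` was run
# and the server is up.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "phi3:mini",
        "prompt": "Summarize the key points from these search results: ...",
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```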

[–] [email protected] -3 points 2 weeks ago

I haven't, no.

[–] [email protected] 1 points 2 weeks ago

Seems broken; I couldn't get the yarn build to work. I'll try again another day.

[–] [email protected] 1 points 2 weeks ago* (last edited 2 weeks ago)

Super nice!

[–] [email protected] 1 points 2 weeks ago (1 children)

You mean ChatGPT or real AI?

[–] [email protected] 1 points 2 weeks ago

It can use ChatGPT, I believe, or you could use a local GPT or one of several other LLM architectures.

GPTs are trained by "trying to fill in the next word" (more simply, a "spicy autocomplete"), whereas BERTs try to "fill in the blanks". So it might be worth looking into other LLM architectures if you're not in the market for an autocomplete.
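
As a rough illustration of the difference (just a sketch with small public checkpoints via Hugging Face transformers, nothing specific to this project):

```python
from transformers import pipeline

# Causal LM, GPT-style ("spicy autocomplete"): predict the words that come next.
generator = pipeline("text-generation", model="gpt2")
print(generator("Open source search engines are", max_new_tokens=10)[0]["generated_text"])

# Masked LM, BERT-style: fill in a blank in the middle of the sentence.
filler = pipeline("fill-mask", model="bert-base-uncased")
print(filler("Open source search [MASK] respect your privacy.")[0]["sequence"])
```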

Personally, I'm going to look into this. It would also be a good excuse to learn about Docker and how SearXNG works.