[โ€“] [email protected] 5 points 9 months ago (3 children)

The lawyer fuck-up is what happens when someone doesn't know or understand the limitations of an LLM.

If you want a GPT model tailored and specialized for a specific task, you have to train it on custom data, fine-tune it, and tweak the model's parameters. You can't do that from the ChatGPT web/app; you need a custom implementation coded in Python or some other language.
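
For example, a minimal sketch of that kind of custom implementation, using OpenAI's fine-tuning endpoint with the pre-1.0 openai Python SDK, might look like this (the file name and API key are placeholders, not anything real):

```python
# Hypothetical sketch: upload a chat-formatted JSONL dataset and start a
# fine-tuning job on GPT-3.5 Turbo with the pre-1.0 openai Python SDK.
import openai

openai.api_key = "sk-..."  # placeholder API key

# Each line of the JSONL file is one training example:
# {"messages": [{"role": "system", ...}, {"role": "user", ...},
#               {"role": "assistant", ...}]}
training_file = openai.File.create(
    file=open("training_data.jsonl", "rb"),  # placeholder file name
    purpose="fine-tune",
)

# When the job finishes, it yields a custom model name
# ("ft:gpt-3.5-turbo:...") that you can then use in ChatCompletion calls.
job = openai.FineTuningJob.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id)
```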

[โ€“] [email protected] 2 points 9 months ago (1 children)

There are some UIs that allow for fine-tuning (assuming you have an extremely high-end rig designed for ML), for example a ChatGPT alternative and a DALL-E alternative.

[โ€“] [email protected] 2 points 9 months ago (1 children)

Thanks. I have a quite powerful rig, but at the moment I work with OpenAI's API (GPT-3.5 Turbo) through a custom (but shitty) Python script with a simple Gradio web interface. However, I mostly stopped improving or updating it months ago. As long as I don't use LlamaIndex, the cost is quite low.
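
Not my actual code, but the general shape of that kind of script is something like this (Gradio chat UI over the ChatCompletion endpoint, pre-1.0 SDK; the system prompt and key are illustrative):

```python
# Rough sketch of a simple Gradio front-end over GPT-3.5 Turbo
# using the pre-1.0 openai SDK.
import gradio as gr
import openai

openai.api_key = "sk-..."  # placeholder API key

def chat(message, history):
    # Rebuild the conversation for the API from Gradio's (user, bot) pairs
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    for user_msg, bot_msg in history:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": bot_msg})
    messages.append({"role": "user", "content": message})

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response.choices[0].message.content

# gr.ChatInterface handles the chat history and web UI
gr.ChatInterface(chat).launch()
```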

I already use Stable Diffusion WebUI, tho.

Also the "fine tuning" I was talking about is this https://platform.openai.com/docs/guides/fine-tuning

[โ€“] [email protected] 2 points 9 months ago

I am aware of what fine-tuning is. In both cases it is available from the Train tab while the base checkpoint is loaded.

[โ€“] [email protected] 2 points 9 months ago* (last edited 9 months ago)

I also don't think the ChatGPT model is able to do anything that requires referencing case law or medical texts (or whatever else) in its current form. The way it works, generating probabilities for the next word, is all wrong for tasks where the value of the output isn't subjective: you need the model to distinguish facts from opinion, cite sources for what it says, and produce coherent cause-and-effect chains to formulate an argument. Those are all things no currently existing LLM is capable of, no matter how much you fine-tune it, because of how it works.

[โ€“] [email protected] 1 points 9 months ago (1 children)

I'm glad you understand my point. ChatGPT is not Google. It's a language model that will give you something that looks like the thing you asked it to provide. It can and will pull facts out of its recycle bin if they fit the cadence of what it expects the answer to look like.

[โ€“] [email protected] 1 points 9 months ago* (last edited 9 months ago) (1 children)

ChatGPT is not Google, but sometimes it can work as a glorified search engine or even compete with asking in forums.

I've lost count of how many times ChatGPT has produced the Bash or Python code I needed. Yes, sometimes the code is wrong and/or requires tweaking, and sometimes I've had to resort to the documentation, but nothing answers as fast, at any time of day, as ChatGPT does, at least not for free.

[โ€“] [email protected] 1 points 9 months ago (1 children)

It's a tool to aid in creating a product, not a tool that magics out a finished product. That's my point. Too many people use it as the latter instead of the former.

[โ€“] [email protected] 1 points 9 months ago

100% agree.

Maybe, with lots of training, tweaking, and testing, the latter could be achieved, but that's it.