They added a video player with version 3, I think.
xcjs
Now the question is - are they open sourcing the original Winamp, or the awful replacement?
We all mess up! I hope that helps - let me know if you see improvements!
I think there was a special process to get Nvidia GPUs working in WSL. Let me check... (I'm running natively on Linux, so my experience with WSL is limited.)
https://docs.nvidia.com/cuda/wsl-user-guide/index.html - I'm sure you've followed this already, but according to that guide, you don't want to install the Linux Nvidia drivers inside WSL; you only want to install the cuda-toolkit metapackage. I'd follow the instructions from that link closely.
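For reference, my reading of that guide boils down to roughly the following inside the WSL Ubuntu instance. This is just a sketch based on NVIDIA's documentation, and the keyring package version may have changed since, so double-check against the guide itself:

```shell
# Inside WSL: do NOT install the Linux NVIDIA driver here.
# The Windows host driver exposes the GPU to WSL on its own.
sudo apt-key del 7fa2af80  # remove NVIDIA's old signing key, if present
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get install -y cuda-toolkit  # toolkit only, no driver package
nvidia-smi  # should list your GPU if passthrough is working
```

If `nvidia-smi` can't see the GPU at this point, the Windows-side driver is usually the thing to check first.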
You may also run into performance issues within WSL due to the virtual machine overhead.
Good luck! I'm definitely willing to spend a few minutes offering advice/double checking some configuration settings if things go awry again. Let me know how things go. :-)
The model should be split between VRAM and regular RAM, at least if it's a GGUF model. Maybe it isn't being split, and that's what's wrong?
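If you're running llama.cpp directly (an assumption on my part - other frontends expose the same knob under different names), the VRAM/RAM split is controlled by the `-ngl`/`--n-gpu-layers` flag. The model path and layer count below are placeholders:

```shell
# Offload 40 of the model's layers to VRAM; the rest stay in system RAM.
# Raise -ngl until VRAM is nearly full for the best speed.
./llama-cli -m ./models/llama-3-70b.Q4_K_M.gguf -ngl 40 -p "Hello"
```

With `-ngl 0` (or the flag missing entirely), everything runs on the CPU, which would explain the slow responses.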
Ok, so using my "older" 2070 Super, I was able to get a response from a 70B parameter model in 9-12 minutes. (Llama 3 in this case.)
I'm fairly certain that you're using your CPU or having another issue. Would you like to try and debug your configuration together?
Unfortunately, I don't expect it to remain free forever.
No offense intended, but are you sure it's using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B parameter models.
On my RTX 3060, I generally get responses in seconds.
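A quick way to check, assuming you're running Ollama (an assumption - other runners have equivalents):

```shell
ollama ps    # the PROCESSOR column shows "100% GPU", "100% CPU", or a split
nvidia-smi   # VRAM usage and GPU utilization should spike mid-generation
```

If `nvidia-smi` stays flat while a response is generating, you're almost certainly on the CPU.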
Is it possible that connecting to WiFi triggers app updates in the background? Those can be somewhat intensive, especially when batched.
Additionally, the Google Play Store no longer notifies when app updates occur, so this process is a little more opaque than it used to be.
Nice! I didn't realize this was a feature of Joplin!
I'm also going to put forward Tilda, which has been my preferred one for a while due to how minimal its UI is.