autonomoususer

joined 2 years ago
MODERATOR OF
llm
[–] autonomoususer@lemmy.world 2 points 2 days ago

Cringe: this pirated software fails to include a libre software license text file, such as the GPL. We do not control it; it is anti-libre software.

[–] autonomoususer@lemmy.world 10 points 5 days ago* (last edited 5 days ago)

You can't stop them but you can help them learn to make good choices on their own.

[–] autonomoususer@lemmy.world -1 points 1 week ago* (last edited 1 week ago) (1 children)

Dollar store Sci-Hub

[–] autonomoususer@lemmy.world 3 points 1 week ago* (last edited 1 week ago)

Because some of us let them. Paying never works. Libre software does.

[–] autonomoususer@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

How does Lemmy, Tor or Signal make us the product?

Paying never works. Libre software does.

 

cross-posted from: https://lemmy.world/post/27088416

This is an update to a previous post found at https://lemmy.world/post/27013201


Ollama uses the AMD ROCm library, which can work with many AMD GPUs not listed as compatible, by forcing an LLVM target.

The original Ollama documentation is wrong: the following cannot be set for individual GPUs, only for all or none, as shown at github.com/ollama/ollama/issues/8473

AMD GPU issue fix

  1. Check that your GPU is not already listed as compatible at github.com/ollama/ollama/blob/main/docs/gpu.md#linux-support
  2. Edit the Ollama service file. This uses the text editor set in the $SYSTEMD_EDITOR environment variable.
sudo systemctl edit ollama.service
  3. Add the following, then save and exit. You can try different versions as shown at github.com/ollama/ollama/blob/main/docs/gpu.md#overrides-on-linux
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
  4. Restart the Ollama service.
sudo systemctl restart ollama
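If you prefer not to use the interactive editor, the same drop-in can be created by hand. A minimal sketch, assuming the default drop-in path that `systemctl edit` uses; it writes the override to a scratch directory first so you can inspect it before copying it into place as root:

```shell
# Write the override file to a scratch directory for inspection.
tmp=$(mktemp -d)
cat > "$tmp/override.conf" <<'EOF'
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
EOF
cat "$tmp/override.conf"

# Then, as root, install it where `systemctl edit ollama.service` would:
#   sudo mkdir -p /etc/systemd/system/ollama.service.d
#   sudo cp "$tmp/override.conf" /etc/systemd/system/ollama.service.d/
#   sudo systemctl daemon-reload && sudo systemctl restart ollama
```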
 

[–] autonomoususer@lemmy.world 3 points 2 weeks ago* (last edited 2 weeks ago)

Cosmos Cloud fails to include a libre software license text file. We do not control it, anti-libre software. This defeats the purpose of running Ollama, libre software, on our own device.

Also, although Docker on Arch Linux is fine, Docker Desktop, used to install Docker on Windows, is also anti-libre software. The upcoming guide will provide a workaround. Of course, Windows is anti-libre software, so this is harm reduction at best.

However, thank you for freeing this information from Discord, also anti-libre software.

 

cross-posted from: https://lemmy.world/post/27013201

Ollama lets you download and run large language models (LLMs) on your device.

Install Ollama on Arch Linux (Windows guide coming soon)

  1. Check whether your device has an AMD GPU, an NVIDIA GPU, or no GPU. A GPU is recommended but not required.
  2. Open Console, type only one of the following commands and press return. This may ask for your password without showing you typing it.
sudo pacman -S ollama-rocm    # for AMD GPU
sudo pacman -S ollama-cuda    # for NVIDIA GPU
sudo pacman -S ollama         # for no GPU (for CPU)
  3. Enable the Ollama service (on-device, runs in the background) so it starts with your device, and start it now.
sudo systemctl enable --now ollama
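The GPU check in step 1 can also be scripted. A sketch, assuming lspci from pciutils is available; it falls back to the CPU-only package when no GPU is detected:

```shell
# Pick the pacman package by detecting the GPU vendor with lspci.
gpu=$(lspci 2>/dev/null | grep -iE 'vga|3d' || true)
case "$gpu" in
  *AMD*|*ATI*) pkg=ollama-rocm ;;   # AMD GPU -> ROCm build
  *NVIDIA*)    pkg=ollama-cuda ;;   # NVIDIA GPU -> CUDA build
  *)           pkg=ollama ;;        # no GPU detected -> CPU build
esac
echo "install with: sudo pacman -S $pkg"
```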

Test Ollama alone (Open WebUI guide coming soon)

  1. Open localhost:11434 in a web browser; you should see "Ollama is running". This confirms Ollama is installed and its service is running.
  2. Run ollama run deepseek-r1 in one console and ollama ps in another to download and run the DeepSeek R1 model while checking whether Ollama is using your slow CPU or fast GPU.
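The browser check above can also be done from a console. A sketch, assuming curl is installed; the root endpoint replies with the plain text "Ollama is running" when the service is up:

```shell
# Probe the local Ollama API on its default port (11434).
# Prints the reply, or "unreachable" if the service is not running.
status=$(curl -s --max-time 2 http://localhost:11434/ || echo "unreachable")
echo "$status"
```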

AMD GPU issue fix

https://lemmy.world/post/27088416

[–] autonomoususer@lemmy.world 2 points 2 weeks ago

Modern software is complex, so we need to work together. Libre software defends our computing, both alone and in groups. Anti-libre licenses stop groups.

 


[–] autonomoususer@lemmy.world 2 points 2 weeks ago* (last edited 2 weeks ago)

No, it shows the bot that you are active in chat, so it spams you more.

 

Rules

  1. Please tag [not libre software] and [never on-device] services as such (those not green in the License column here).
  2. Be useful to others

Resources

github.com/ollama/ollama
github.com/open-webui/open-webui
github.com/Aider-AI/aider
wikipedia.org/wiki/List_of_large_language_models

[–] autonomoususer@lemmy.world 1 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

Nothing will unharvest your data. Do your best moving forward.

[–] autonomoususer@lemmy.world 1 points 2 months ago* (last edited 2 months ago)
  1. Check for a libre software license text file to ensure we control the software.
[–] autonomoususer@lemmy.world 1 points 6 months ago

Companies these days don't want to pay!
