Free Open-Source Artificial Intelligence


Welcome to Free Open-Source Artificial Intelligence!

We are a community dedicated to advancing the availability of, and access to:

Free Open Source Artificial Intelligence (F.O.S.A.I.)

More AI Communities

LLM Leaderboards

Developer Resources

GitHub Projects

FOSAI Time Capsule


Background: This Nomic blog article from September 2023 promises better performance in GPT4All for AMD graphics card owners.

Run LLMs on Any GPU: GPT4All Universal GPU Support

Likewise on GPT4All's GitHub page.

September 18th, 2023: Nomic Vulkan launches supporting local LLM inference on NVIDIA and AMD GPUs.

Problem: In GPT4All, under Settings > Application Settings > Device, I've selected my AMD graphics card, but I'm seeing no improvement over CPU performance. In both cases (AMD graphics card or CPU), it crawls along at about 4-5 tokens per second; one interaction took 174 seconds to generate its response.

Question: Do I have to use a specific model to benefit from this advancement? Do I need to install a different AMD driver? What steps can I take to troubleshoot this? (One isolation test is sketched below, after my system info.)

Sorry if this is an obvious question. Sometimes I feel like the answer is right in front of me, but I'm unsure which keywords from the documentation should jump out at me.

My system info:

  • GPU: Radeon RX 6750 XT
  • CPU: Ryzen 7 5800X3D processor
  • RAM: 32 GB @ 3200 MHz
  • OS: Linux Bazzite
  • I've installed GPT4All as a flatpak
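One way I figure I can rule out the flatpak sandbox as the culprit: the gpt4all Python bindings take a `device` argument, so the same model can be timed on CPU and GPU outside the flatpak. A rough sketch of what I mean, assuming `pip install gpt4all`; the model filename is just a placeholder for whatever GGUF is actually downloaded:

```python
import time
from gpt4all import GPT4All  # pip install gpt4all

PROMPT = "Write a haiku about GPUs."

# "amd" asks the Vulkan backend for an AMD device; "cpu" is the baseline.
for device in ("cpu", "amd"):
    # Placeholder model name: substitute any GGUF from the GPT4All catalog.
    model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf", device=device)
    start = time.time()
    output = model.generate(PROMPT, max_tokens=64)
    elapsed = time.time() - start
    print(f"{device}: {len(output)} chars in {elapsed:.1f}s")
```

If the GPU run is clearly faster here but not inside the flatpak, the sandbox (or the driver access it allows) would be the likely problem.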

I don't have many specific requirements, and GPT4All has been working mostly well for me so far. That said, my latest use case for GPT4All is helping me plan a new Python-based project with examples as code snippets, and it lacks one specific quality-of-life feature: a "Copy Code" button.

There is an open issue for this on GPT4All's GitHub, but since there is no guarantee the feature will ever be implemented, I thought I'd take this opportunity to explore whether there are other tools like GPT4All that offer a ChatGPT-like experience in a local environment. I'm neither a professional developer nor a sysadmin, so a lot of self-hosting guides go over my head; that's what drew me to GPT4All in the first place, as it's very accessible to non-developers like myself. That said, I'm open to suggestions and willing to learn new skills if that's what it takes.

I'm running on Linux w/ AMD hardware: Ryzen 7 5800X3D processor + Radeon RX 6750 XT.

Any suggestions? Thanks in advance!


The goal is to have an agent that can (a toy sketch of this loop follows the paper link below):

  • Understand a complex problem description.
  • Generate initial algorithmic solutions.
  • Rigorously test its own code.
  • Learn from failures and successes.
  • Evolve increasingly sophisticated and efficient algorithms over time.

https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/AlphaEvolve.pdf
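Not AlphaEvolve itself, of course, but the loop those bullets describe is roughly a generate-test-select cycle. A toy sketch with the LLM call and the evaluator stubbed out (both functions here are placeholders, not anything from the paper):

```python
import random

def llm_propose(parent: str) -> str:
    """Stub for the LLM mutation step: in AlphaEvolve this is a Gemini call
    that rewrites the parent program. Here it's a placeholder."""
    return parent + f"  # tweak {random.randint(0, 999)}"

def evaluate(program: str) -> float:
    """Stub fitness function: run the candidate against tests and benchmarks
    and score it. Here, a random score stands in."""
    return random.random()

def evolve(seed: str, generations: int = 20, children: int = 4) -> str:
    best, best_score = seed, evaluate(seed)
    for _ in range(generations):
        # Generate candidates, test each, keep the fittest. Failures simply
        # score low and are selected away; successes become the next parent.
        for cand in (llm_propose(best) for _ in range(children)):
            score = evaluate(cand)
            if score > best_score:
                best, best_score = cand, score
    return best

print(evolve("def solve(x): return x"))
```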


im building some dumb lil FOSS LLM thingy for Godot, and now im interested in letting users implement their own MCP servers.

so like - okay, the Model Context Protocol page says that most servers use stdio for every interaction. the request format can be seen here; it's apparently a JSON-RPC thing.

so - the first thing i want to do is retrieve all the capabilities the server has.

i looked through all the tabs in the latest docs but could not find a command for listing all the capabilities. so i installed some filesystem MCP server, which runs fine, and tried this:

PS C:\Users\praktikant> npx -y @modelcontextprotocol/server-filesystem "C:\Users\praktikant\Desktop"
Secure MCP Filesystem Server running on stdio
Allowed directories: [ 'C:\\Users\\praktikant\\Desktop' ]
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "capabilities",
  "params": {}
}

- aaaaaand nothing was returned. no string, no nothing.

so maybe it's not a plain string that gets sent via stdio, but some other byte-based framing?

if anyone has experience with this, or is gud at guessing, pls tell me what u think i might be missing here <3
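update from digging through the spec a bit more, in case it helps: i don't think there is a `capabilities` method at all. the server's capabilities come back in the response to an `initialize` request, and the stdio transport wants each message as a single line of JSON terminated by a newline (my pretty-printed request above is split across lines, so the server never sees a complete message). rough Python sketch of what i mean; the protocol version string is just the one from the docs i read, so it may need adjusting:

```python
import json
import subprocess

# Spawn the server over stdio (on Windows, "npx" may need to be "npx.cmd").
proc = subprocess.Popen(
    ["npx", "-y", "@modelcontextprotocol/server-filesystem",
     r"C:\Users\praktikant\Desktop"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

def send(msg: dict) -> None:
    # stdio framing: one JSON object per line, terminated by a newline.
    proc.stdin.write(json.dumps(msg) + "\n")
    proc.stdin.flush()

def recv() -> dict:
    return json.loads(proc.stdout.readline())

# 1. Handshake: the first request is "initialize", not "capabilities".
#    The response carries the server's capabilities.
send({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # from the docs i read; adjust as needed
        "capabilities": {},
        "clientInfo": {"name": "godot-thingy", "version": "0.1"},
    },
})
print(recv())  # <- the server's capabilities are in this response

# 2. Acknowledge the handshake (a notification, so no "id").
send({"jsonrpc": "2.0", "method": "notifications/initialized"})

# 3. Now ask for the actual tool list.
send({"jsonrpc": "2.0", "id": 2, "method": "tools/list", "params": {}})
print(recv())
```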


There are two main approaches:

  1. Step by step
  2. From the beginning, through intermediate steps, to the end

Currently, these are the two mainstream methods of instantiation.

It is widely recognized that if AI is not aligned with human values, it could cause harm to society.
Yet, this does not mean such systems lack intelligence.

So, what truly defines intelligence?
Why do so many researchers focus solely on intelligence aligned with human values?
Is it because their own understanding is limited, or because machines are not yet truly intelligent?

I believe intelligence should not be confined to narrow, human-centric definitions.
What we call "intelligence" today might be an illusion.
True intelligence cannot be defined; the moment we define it, we lose its essence.


please tell me, thanks


Today we announce Mistral Small 3.1: the best model in its weight class.

Building on Mistral Small 3, this new model comes with improved text performance, multimodal understanding, and an expanded context window of up to 128k tokens. The model outperforms comparable models like Gemma 3 and GPT-4o Mini, while delivering inference speeds of 150 tokens per second.

Mistral Small 3.1 is released under an Apache 2.0 license.


Hello, I am currently using Codename Goose as an AI client to proofread and help me with coding. I have it set up with Google's Gemini, but I find myself quickly running out of tokens with large files. I was wondering if there is an easy way to self-host an AI with similar capabilities that still has access to read and write files. I've tried both Ollama and Jan, but neither has access to my files. Any recommendations?
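In case it helps while better tools are suggested: Ollama itself only exposes a local HTTP API, so the file reading and writing has to live in a small script around it. A minimal sketch of that pattern, assuming an Ollama server on its default port and a model already pulled (the model name and file path here are placeholders):

```python
import requests  # pip install requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint
MODEL = "llama3.1:8b"  # placeholder: any model pulled with `ollama pull`

def proofread(path: str) -> None:
    # The script, not the model, does the file access.
    with open(path, encoding="utf-8") as f:
        source = f.read()
    resp = requests.post(OLLAMA_URL, json={
        "model": MODEL,
        "prompt": "Proofread this code and point out bugs:\n\n" + source,
        "stream": False,  # one JSON object back instead of a token stream
    })
    resp.raise_for_status()
    review = resp.json()["response"]
    # Write the model's review next to the original file.
    with open(path + ".review.txt", "w", encoding="utf-8") as f:
        f.write(review)

proofread("main.py")  # placeholder path
```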


There are lots of general-purpose models to use locally, and also coding-specific models.

But are there models specialized in a single programming language? My thought is that a model that only needs to handle one language (e.g. Python) could be faster, or perform better at a given size.

E.g. if I need to code in Rust and am limited to an 8B model to run locally, I was hoping to get better results with a narrower model. I don't need it to be able to help with Java.

This approach would of course require switching models, but that's no problem for me.
