Ollama provides the API, which you can integrate into Open WebUI. I believe you can also integrate image generation through ComfyUI.
Open WebUI is less of a hassle to run in Docker, while Ollama works as a regular CLI tool.
I heard that Poland is also cheering for some MAGA guy in the next election... Troubling times ahead.
For Romania, there might still be a chance in the run-off. However, the gap between the two candidates was quite large (20 percentage points; 1.8 million votes). Similarly, the other candidates' voters seemed more likely to break for the nazi. Most likely all hope is lost, but that 1% chance is still there.
You're right! Sorry for the typo. The older nomic-embed-text model is often used in examples, but granite-embedding is more recent and smaller, intended for English-only text (30M parameters). If your use case is multilingual, they also offer a bigger one (278M parameters) that handles English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese (Simplified). I would test them out a bit to see what works best for you.
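Whichever model you pick, comparing its output comes down to cosine similarity between embedding vectors. A minimal sketch in plain Python (the vectors here are made-up toy values standing in for real embedding output, so no Ollama is needed to run it):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings of a query and two documents.
query = [0.1, 0.3, 0.5]
doc_close = [0.1, 0.29, 0.52]
doc_far = [0.9, -0.2, 0.1]

# The semantically closer document should score higher.
print(cosine_similarity(query, doc_close) > cosine_similarity(query, doc_far))
```

A quick way to eyeball model quality is to embed a handful of your own query/document pairs with each model and check that the rankings this produces match your intuition.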
Furthermore, if you're not dependent on MariaDB for something else in your system, there are other vector databases I would recommend. Qdrant works quite well, and you can integrate it pretty easily into something like LangChain. It really depends on how far you want to push your RAG workflow, but let me know if you have any other questions.
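If you want a feel for what a vector database is doing under the hood, the core idea is just nearest-neighbor search over embedding vectors. A toy in-memory sketch (real engines like Qdrant use approximate indexes such as HNSW instead of this brute-force scan, and the vectors below are made up for illustration):

```python
import math

class ToyVectorStore:
    """Brute-force stand-in for a vector database like Qdrant."""

    def __init__(self) -> None:
        self._items: list[tuple[str, list[float]]] = []

    def add(self, text: str, vector: list[float]) -> None:
        """Store a document alongside its embedding."""
        self._items.append((text, vector))

    def search(self, query: list[float], k: int = 1) -> list[str]:
        """Return the k documents whose embeddings are closest to the query."""
        def cos(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.sqrt(sum(x * x for x in a))
                          * math.sqrt(sum(x * x for x in b)))
        ranked = sorted(self._items, key=lambda it: cos(query, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = ToyVectorStore()
store.add("cats are mammals", [0.9, 0.1])
store.add("rust is a language", [0.1, 0.9])
print(store.search([0.8, 0.2]))
```

In a real RAG pipeline you would replace the toy vectors with embeddings from your model and swap the store for a Qdrant collection, but the retrieval step looks the same from the caller's side.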
My bad, I initially read that we should give Florida to Russia in exchange for Ukraine getting back Crimea, haha. I said Alaska instead because it was also part of Russia before, so they could spin the narrative similarly to what they did with Ukraine.
But yeah, I don't think the clown in the White House can really understand the situation unless you put it in perspective for him.
All the ones I mentioned can be installed with pip or uv, if I'm not mistaken. It would probably be more finicky than containers you can put behind a reverse proxy, but it's possible if you want to go that route. Ollama also runs system-wide, so any project can use its API without you having to create a separate environment and download the same model twice.
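For instance, any project can talk to the local Ollama HTTP API (port 11434 by default) using nothing but the standard library. This sketch only builds the request without sending it; the model tag is a placeholder for whatever you've pulled, and actually sending the request requires Ollama to be running:

```python
import json
import urllib.request

# Ollama's default local endpoint for text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint without sending it."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# "llama3.2" is just an example tag; substitute any model you've pulled.
req = build_request("llama3.2", "Summarize RAG in one sentence.")
# To send it for real: urllib.request.urlopen(req), with Ollama running.
```

Because the daemon and the downloaded models live outside any one project's virtual environment, every project shares the same endpoint and model cache.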