How to use a local AI model like Llama instead of OpenAI

Has anyone tried using a local API endpoint for the AI assistant instead of the OpenAI API?
I have Ollama running on my laptop, along with several models to assist with my tasks, which is why I'd like to use these local models instead of relying on Anthropic, OpenAI, or Google. By the way, I'm using Llama and Open WebUI.
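For reference, pointing marimo's AI assistant at a local Ollama server is usually a matter of editing marimo's config file. The sketch below assumes your marimo version exposes AI settings under an [ai.open_ai] section and that Ollama's OpenAI-compatible endpoint is running on its default port; check the marimo docs for the exact key names supported by your version.

```toml
# ~/.marimo.toml — a sketch, not a verified config for every marimo version.
[ai.open_ai]
# Ollama doesn't validate API keys, so any non-empty string should do.
api_key = "ollama"
# Ollama's OpenAI-compatible endpoint (default port 11434).
base_url = "http://localhost:11434/v1"
# A model you've already pulled locally, e.g. with `ollama pull llama3.1`.
model = "llama3.1"
```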
I have another question, though: when I try to select another model from the dropdown, nothing can be selected, and no other models show up in the dropdown menu.
Is there any way to fix this?
May I ask which dropdown you are dealing with? I can't see any dropdown here.
[Attachment: Screenshot_2025-01-12_at_18.59.05.png]
That's a text input area; you can type the model name there by hand, like llama3.1 or qwen2.5:0.5b, and the AI assistant will automatically use that model.

You can use the ollama ls command to list all the models available locally.
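For example, checking what's installed and pulling a missing model might look like this (the model names and sizes in the sample output are purely illustrative):

```sh
# List the models Ollama has pulled locally (ollama ls is an alias of ollama list).
ollama ls
# NAME              ID      SIZE      MODIFIED        <- illustrative output
# llama3.1:latest   ...     4.7 GB    2 days ago
# qwen2.5:0.5b      ...     398 MB    5 days ago

# Pull a model if it isn't listed yet, then type its name into the text input.
ollama pull qwen2.5:0.5b
```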
Oooo, I got it. Thanks for your help.