Has anyone tried using a local API endpoint for an AI assistant instead of the OpenAI API? I have Ollama running on my laptop with several models that I use for my tasks, so I'd like to point the assistant at those local models instead of relying on Anthropic, OpenAI, or Google. By the way, I'm using Llama and Open WebUI.
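For context, here's roughly what I have in mind; a minimal sketch assuming Ollama's OpenAI-compatible endpoint on the default port (11434), the `openai` Python package, and `llama3.1` just as an example model name:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local Ollama server instead of api.openai.com.
# Ollama exposes an OpenAI-compatible API under /v1; the API key is required by the
# client library but ignored by Ollama, so any placeholder string works.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # placeholder; Ollama does not check it
)

response = client.chat.completions.create(
    model="llama3.1",  # any model you have pulled locally
    messages=[{"role": "user", "content": "Summarize this file for me."}],
)
print(response.choices[0].message.content)
```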
But I have another question: when I try to select a different model from the dropdown, nothing is selectable, and no other models show up in the dropdown menu.
That's a text input field, not a dropdown. You can type the model name there by hand, like llama3.1 or qwen2.5:0.5b, and the AI assistant will automatically use that model.
You can use the ollama ls command to list all available models.
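If you'd rather check programmatically, a quick sketch like this (assuming Ollama is running on the default port 11434) returns the same list via the local REST API:

```python
import requests

# Ollama's /api/tags endpoint lists every model pulled locally,
# the same set that `ollama ls` prints in the terminal.
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()

for model in resp.json().get("models", []):
    print(model["name"])  # e.g. "llama3.1:latest", "qwen2.5:0.5b"
```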