martketneutral
I am serving a local LLM with vLLM's OpenAI-compatible server. I can set the URL, API key, and model successfully in the marimo Settings pane.

When I hit "generate", marimo reaches the server, but the request is rejected because the model's chat template requires roles to alternate between "user" and "assistant":

Plain Text
openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': 'Conversation roles must alternate user/assistant/user/assistant/...', 'type': 'BadRequestError', 'param': None, 'code': 400}
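
For reference, here is a minimal script, independent of marimo, that reproduces the same 400 by sending two consecutive "user" messages. The base URL and model name are placeholders for my local setup:

Python
from openai import OpenAI

# Placeholders: adjust to wherever the vLLM server is listening and
# whatever model it serves. vLLM accepts any string as the API key
# by default, so "EMPTY" is conventional.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Two "user" messages in a row: chat templates for many instruct models
# (e.g. Mistral) enforce strict user/assistant alternation, so vLLM
# rejects this with the same 400 BadRequestError.
response = client.chat.completions.create(
    model="my-model",
    messages=[
        {"role": "user", "content": "First question"},
        {"role": "user", "content": "Follow-up question"},
    ],
)
print(response.choices[0].message.content)

Sending the same request with strictly alternating roles succeeds, so I suspect marimo's "generate" call is building a message list with consecutive same-role entries. Is there a setting to work around this, or a way to merge consecutive messages before they are sent?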