The thread discusses how to use Anthropic's Claude, hosted on Google Vertex AI, in a marimo workflow. Options explored include the Cline VSCode extension, routing requests through LiteLLM to manage different AI providers, and whether Vertex AI's Claude is compatible with the OpenAI API format used by the Gemini AI assistant in the marimo notebook environment. Participants also report that changes made directly to the notebook's Python file do not hot-reload into the running marimo notebook. No answer is explicitly marked, but the thread offers suggestions and guidance on potentially integrating Vertex AI's Claude into the workflow.
Mhm, I'll look into it, but for now I'm thinking about a workaround: making the cells in marimo, but opening the actual .py file in VSCode with my Copilot and Cline using Claude from Vertex. I've noticed that simply refreshing the notebook after making a change in the .py file and saving doesn't trigger any hot reload; not even a page refresh works, only a kernel restart. Any ideas?
Also, there are a bunch of other free (to some extent) APIs, and then there is LiteLLM, which can translate calls from one API schema to another. I think it could be run alongside marimo in a separate terminal on the same venv and transpose API calls between schemas. This probably doesn't apply to the Google Vertex AI-hosted Claude, though, since that one is authenticated at the machine level or something.
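As a rough sketch of the "separate terminal on the same venv" idea (the model name and port below are illustrative assumptions, not details from this thread), the LiteLLM proxy can be launched like this:

```shell
# Install LiteLLM with the proxy extras (assumed current package layout)
pip install 'litellm[proxy]'

# Start the proxy; it exposes an OpenAI-compatible endpoint locally.
# "gpt-3.5-turbo" and port 4000 are placeholder choices for illustration.
litellm --model gpt-3.5-turbo --port 4000
```

Any client pointed at the local proxy URL then speaks the OpenAI ChatCompletion format, regardless of which provider actually serves the model.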
---
LiteLLM is a versatile tool designed to streamline interactions with Large Language Models (LLMs) by providing a unified interface and efficient management features.
Key Features:
- Unified interface: standardizes API calls across over 100 LLM providers, including OpenAI, Azure, Hugging Face, and Anthropic, by mapping them to the OpenAI ChatCompletion format.
- Load balancing and fallbacks: manages load balancing and implements fallback mechanisms to ensure reliable and efficient LLM usage.
- Cost tracking and budgeting: offers tools to monitor and control spending, letting users set budgets and track expenses across different projects and API keys.
- Proxy server (LLM gateway): provides a proxy server that acts as a central service for accessing multiple LLMs, with load balancing, cost tracking, and customizable logging and guardrails per project.
- Python SDK: offers a Python SDK for integrating LLM functionality into applications, supporting features like streaming responses and asynchronous operations.
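To illustrate the kind of normalization described above (a hypothetical sketch, not LiteLLM's actual code), translating an Anthropic-style response into the OpenAI ChatCompletion shape might look like:

```python
def to_openai_chat_format(anthropic_resp: dict) -> dict:
    """Map a simplified Anthropic Messages API response dict into the
    OpenAI ChatCompletion response shape. Field names follow the public
    API docs, but this is an illustrative sketch only."""
    # Anthropic returns content as a list of typed blocks; join the text ones.
    text = "".join(
        block["text"]
        for block in anthropic_resp.get("content", [])
        if block.get("type") == "text"
    )
    return {
        "object": "chat.completion",
        "model": anthropic_resp.get("model"),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
    }

# Example Anthropic-style payload (simplified, hypothetical values)
resp = {
    "model": "claude-3-sonnet",
    "content": [{"type": "text", "text": "Hello from Claude"}],
}
print(to_openai_chat_format(resp)["choices"][0]["message"]["content"])
```

The point of such a shim is that downstream code only ever sees the `choices[0].message.content` shape, whichever provider produced the response.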
I haven't looked too deeply into LiteLLM beyond their Model Providers page; it might be worth looking into sometime, especially since it could serve as a single point of contact for integrating various model providers for AI support. Thanks for the summary.
But any idea about hot reloading a marimo notebook on a change in the Python file? I had to restart the kernel for changes to show in the notebook after Cline (the VSCode extension) edited the .py notebook file. Changes from the notebook to the .py file seem instant, but in the other direction the kernel reload is a bit of a pain.
I don't believe changing code directly in the .py file is the intended workflow. The file is meant to be generated (which is why those changes show instantly) as you write code in the notebook. If you really want AI support, marimo has a lot built in: autocomplete, AI assist (supporting popular providers), and a chat interface.
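One thing that may help with the external-edit workflow all the same (an assumption based on marimo's CLI options, not something confirmed in this thread): marimo can be launched with a file watcher so that edits saved to the .py file by an external tool are picked up by the running notebook.

```shell
# Launch the marimo editor with file watching enabled (assumed flag);
# "notebook.py" is a placeholder filename for illustration.
marimo edit --watch notebook.py
```

If the flag is available in the installed version, this would avoid the kernel restart after Cline edits the file.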