0.11.6 "Something went wrong. Failed to instantiate." after a period of time.

I've deployed 0.11.6 and logged in, and when I let marimo sit idle (testing out the duplication bugfix), at some point the WebSocket connection "fails".

I'm attaching screenshots in hope that this helps, but let me know if I can provide any additional details.

I did ask in the ask-docs-ai channel, and it suggested making sure I don't have a duplicate instance running, which I don't.
Attachments: image.png ×3
how did you deploy it? do you have multiple servers running?
I deployed it via Portainer, and only one instance is running. I created this Dockerfile.

Plain Text
FROM python:3.13.1-slim

COPY --from=ghcr.io/astral-sh/uv:0.4.20 /uv /bin/uv
ENV VIRTUAL_ENV=/home/app_user/venv

RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*

RUN useradd -m app_user
RUN uv venv $VIRTUAL_ENV
RUN chown -R app_user:app_user $VIRTUAL_ENV

USER app_user
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

# Copy requirements (it will be used by the entrypoint).
COPY --chown=app_user:app_user requirements.txt /home/app_user/requirements.txt

# Install standard packages into our image that won't change.
# We do this for container startup / restart speed
RUN . $VIRTUAL_ENV/bin/activate && uv pip install -U \
    marimo \
    marimo[recommended] \
    marimo[sql] \
    pandas \
    psycopg2-binary \
    pymongo \
    pyvis \
    requests

# Copy the entrypoint script.
COPY --chown=app_user:app_user entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

WORKDIR /home/app_user/notebooks
EXPOSE 2718

# Use the entrypoint script to install dependencies at container start.
ENTRYPOINT ["/entrypoint.sh"]
I use the following service definition in a compose file to deploy via git.

Plain Text
services:
  marimo:
    build:
      context: .
      dockerfile: Dockerfile
    networks:
      - lab
    ports:
      - "127.0.0.1:2718:2718"
    restart: unless-stopped
    volumes:
      - /opt/docker/marimo/marimo_notebooks:/home/app_user/notebooks
      - /opt/docker/marimo/.marimo.toml:/home/app_user/.marimo.toml
    environment:
      MARIMO_HOST: ${MARIMO_HOST}
      MARIMO_PORT: ${MARIMO_PORT}
      MARIMO_TOKEN_PASSWORD: ${MARIMO_TOKEN_PASSWORD}
    command: bash -c 'source /home/app_user/venv/bin/activate && marimo edit --headless --token --token-password ${MARIMO_TOKEN_PASSWORD} -p ${MARIMO_PORT} --host ${MARIMO_HOST}'
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://localhost:2718/health" ]
      interval: 30s
      timeout: 3s
      retries: 3

networks:
  lab:
    external: true
I should note that in 0.11.2 and 0.11.5 this exact deployment didn't throw the errors shown above. For additional context, when those errors are thrown, the container restarts itself (presumably because the process is dying somehow).
is there anything in the logs? the 401 might indicate that headers are missing / not being proxied, or that auth is incorrect
but if nothing changed from your end, maybe something was introduced on our end
No logs other than the browser logging in the screenshots above. I was feeling squirrely and decided to sit and watch the container fail to see if I could capture any stdout; the only thing logged besides the usual startup message was a "thanks for using marimo!" right before the container died.

I'm going to keep digging on my end too! 🀝
i wonder if it could be related to this: and something is sending a keyboard interrupt (which would close the container and show "thanks for using marimo!"). actually, i think this is unrelated
I don't know if this is helpful or not, but I'm attaching the console log and network logs

at some point in the console logs it shows this: "useWebSocket is unmounting. this likely means there is a bug"

I also did one other thing: I deployed 0.11.6 locally and cannot reproduce the bug. (Again, not sure what's helpful and what's not, so I always err on the side of oversharing :))
Attachment: image.png
this is helpful, thank you for sharing. i am looking into it now
@Myles Scolnick I figured it out. You can stop looking at it. Let me do a writeup for you really quick in case you're curious.
Don't thank me. I need to thank you in advance for not shanking me.

TL;DR
I don't think that word means what you think it means

In Portainer, I misunderstood what this flag did. My understanding was that it would force redeployment only if there was a delta between the locally stored copy and the copy stored in git (i.e., if there was a change to the repo, trigger a redeployment).

I thought it was weird that a local instance didn't close the websocket, but Portainer did. Then when you mentioned that you'd get that goodbye message from a SIGINT, it got me wondering what could possibly be throwing a SIGINT, and that led me to toggle off the forced redeployment. Sure as sh--, that was it.

I'm terribly sorry for wasting your time, but hopefully the above will help you/someone in the future if it ever comes up again.
Attachment: image.png
oh wow super interesting. thanks for finding. the write up is good for others to follow.

maybe for deployed apps we could emit a log (e.g. "received shutdown event") to help with the debugging
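
in the meantime, one way to surface that from the deployment side (just a sketch, assuming you wrap the compose command in a small script; the script and function names here are made up for illustration) is to trap shutdown signals in bash and log them before forwarding them to the marimo process:

Plain Text
#!/usr/bin/env bash
# Hypothetical wrapper: log shutdown signals so a forced redeploy
# (SIGINT/SIGTERM from Portainer) is visible in the container logs.

forward() {
  echo "received shutdown signal ($1), stopping marimo" >&2
  kill -s "$1" "$child" 2>/dev/null || true
}

trap 'forward INT' INT
trap 'forward TERM' TERM

# Run the supplied command (marimo edit ...) in the background so the
# traps can fire while we wait on it.
"$@" &
child=$!
wait "$child" || true
# wait again: the first wait returns early when a trap fires.
wait "$child" 2>/dev/null || true

with something like this in front of the existing command, the "thanks for using marimo!" goodbye would be preceded by an explicit "received shutdown signal" line in the container logs.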