
[BUG] compose watch combined with depends_on can lead to failure of dependencies #11695

Open
nikita-vanyasin opened this issue Apr 6, 2024 · 1 comment


nikita-vanyasin commented Apr 6, 2024

Description

I've started to use the 'watch' feature for one project, but it fails every time I change files from state A to state B and then back to state A.

Steps To Reproduce

  1. Compose yaml (truncated; a sketch of the omitted db and rabbit services follows after the steps):
services:
  apiserver:
    build:
      context: ./backend
      dockerfile: api.Dockerfile
    container_name: apiserver
    pull_policy: build
    restart: always
    depends_on:
      - "db"
      - "rabbit"
    expose:
      - 3000
    volumes:
      - /etc/localtime:/etc/localtime:ro
    develop:
      watch:
        - path: ./backend/
          action: rebuild

  analytics:
    build:
      context: ./backend
      dockerfile: analytics.Dockerfile
    container_name: analytics
    pull_policy: build
    restart: always
    depends_on:
      - "db"
      - "rabbit"
    volumes:
      - /etc/localtime:/etc/localtime:ro
    develop:
      watch:
        - path: ./backend/
          action: rebuild

  # ...
  # more services here. No more "depends_on" statements.

  2. Run watch: docker compose watch.
  3. Edit a file under the ./backend folder, e.g. add a new line.
  4. Watch detects the change and rebuilds both services, as expected.
  5. Edit the same file under ./backend, bringing it back to the state it was in before step 3 (remove the new line).
  6. Watch starts to rebuild both services, but fails:
...
service "apiserver" successfully built
Failed to recreate service after update. Error: Error response from daemon: Conflict. The container name "/eaad56074ed9_db" is already in use by container "30b1ad6dcf9135f90dcef865d7de2bfaf1e6a2faa4b3c1ac56351d97e18cf2fa". You have to remove (or rename) that container to be able to reuse that name.
WARN[0011] Error handling changed files for service apiserver: Error response from daemon: Conflict. The container name "/eaad56074ed9_db" is already in use by container "30b1ad6dcf9135f90dcef865d7de2bfaf1e6a2faa4b3c1ac56351d97e18cf2fa". You have to remove (or rename) that container to be able to reuse that name. 
Failed to recreate service after update. Error: Error response from daemon: No such container: 30b1ad6dcf9135f90dcef865d7de2bfaf1e6a2faa4b3c1ac56351d97e18cf2fa
WARN[0012] Error handling changed files for service analytics: Error response from daemon: No such container: 30b1ad6dcf9135f90dcef865d7de2bfaf1e6a2faa4b3c1ac56351d97e18cf2fa 

As you can see, the error mentions the db container, which then dies. It looks like Compose tries to recreate the dependencies after rebuilding apiserver, but somehow a container-name conflict occurs.
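
When it happens, the following manual cleanup gets watch going again for me (a workaround sketch, not a fix; the container name comes from the log above):

docker ps -a --filter "name=_db"   # find the stale renamed leftover, e.g. eaad56074ed9_db
docker rm -f eaad56074ed9_db       # remove it so the next recreate can rename/replace cleanly
docker compose up -d db            # bring db back up under its normal name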

The problem does not occur when I run watch for only one service (docker compose watch apiserver).
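
For a fully self-contained reproduction, the db and rabbit services omitted above could be filled in with minimal sketches like these (the images and settings are my assumptions; the real definitions are truncated):

  db:
    image: postgres:16    # assumption: the actual image/config is not shown above
    restart: always

  rabbit:
    image: rabbitmq:3     # assumption: the actual image/config is not shown above
    restart: always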

Compose Version

# docker compose version
Docker Compose version 2.26.1

# docker-compose version
Docker Compose version 2.26.1

Docker Environment

Client:
 Version:    26.0.0
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  0.13.1
    Path:     /usr/lib/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  2.26.1
    Path:     /usr/lib/docker/cli-plugins/docker-compose
  scan: Docker Scan (Docker Inc.)
    Version:  v0.1.0-280-gc7fa31d4c4
    Path:     /usr/lib/docker/cli-plugins/docker-scan

Server:
 Containers: 7
  Running: 7
  Paused: 0
  Stopped: 0
 Images: 632
 Server Version: 26.0.0
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: true
  Native Overlay Diff: false
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: dcf2847247e18caba8dce86522029642f60fe96b.m
 runc version:
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.8.2-arch2-1
 Operating System: Arch Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 31.08GiB
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Default Address Pools:
   Base: 192.168.128.0/20, Size: 24

Anything else?

This might be related to #9014.

nikita-vanyasin (Author) commented

After some more time using the watch feature, I can say that the bug in the opening post does not seem related to the depends_on field: I removed all depends_on fields and am still getting the same container-name conflict error.
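
One thing both configurations still share is a fixed container_name on every service, and the failure is a container-name conflict during recreate. My next experiment (only a hypothesis at this point, not a confirmed cause or fix) is to drop those fields and let Compose generate the names, e.g.:

  apiserver:
    build:
      context: ./backend
      dockerfile: api.Dockerfile
    # container_name removed: Compose names the container <project>-apiserver-1
    # on its own, so a recreate never has to reuse a fixed name
    pull_policy: build
    restart: always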
