Deployment takes a tonne load of time to complete now #235

Open
damey2011 opened this issue May 25, 2021 · 2 comments

Comments
damey2011 commented May 25, 2021

Deployment used to take 3-4 minutes to complete, alongside the container count checks, but it now takes around 15 minutes because some containers do not terminate quickly. The deployment command used here is

ecs-deploy -v -r $AWS_DEFAULT_REGION -c $ECS_CLUSTER_NAME -n $ECS_SERVICE_NAME -i ${ECR_REPO_URL}:${CI_COMMIT_SHORT_SHA} --max-definitions 10 -t 900 --use-latest-task-def --run-task --wait-for-success -p ${CI_ENVIRONMENT_NAME}

Running the deployment in verbose mode, I figured out that ecs-deploy uses the container count to determine whether the deployment completed successfully before it returns a success message; otherwise it rolls back. Our timeout used to be 300 seconds, but now we have to use 900, since 5 minutes is no longer sufficient to complete a deployment.
I understand this may come from AWS, since the way old containers are shut down may have changed and the new way takes longer, but is it possible to review the strategy the library uses to check for container deployment success?

Typical output from the periodic deployment-state checks looks like this:

++ jq '[.services[].deployments[]] | length'
+ NUM_DEPLOYMENTS=2
+ '[' 2 -eq 1 ']'
+ sleep 2
+ i=126
+ '[' 126 -lt 900 ']'
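
For reference, the same deployment-count check can be reproduced by hand. This is a minimal sketch, assuming the AWS CLI and jq are available and the same variables as in the command above are set; as the '[' 2 -eq 1 ']' test above shows, ecs-deploy considers the rollout finished once the count drops back to 1.

# Count the active deployments on the service; 2 means the old and new
# deployments are still running side by side, 1 means the rollout is done.
NUM_DEPLOYMENTS=$(aws ecs describe-services \
  --region "$AWS_DEFAULT_REGION" \
  --cluster "$ECS_CLUSTER_NAME" \
  --services "$ECS_SERVICE_NAME" \
  | jq '[.services[].deployments[]] | length')
echo "Active deployments: $NUM_DEPLOYMENTS"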

The old container no longer gets killed as fast as it used to.
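
One possible reason old tasks linger, assuming the service sits behind an ALB/NLB target group (not confirmed here), is the target group's deregistration (connection-draining) delay, which defaults to 300 seconds. A minimal mitigation sketch, with a placeholder target-group ARN:

# Hypothetical mitigation: lower the deregistration delay so draining tasks
# are released sooner. Replace the ARN with the service's real target group.
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:region:account-id:targetgroup/name/id \
  --attributes Key=deregistration_delay.timeout_seconds,Value=30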

@algarves

Hi @damey2011,

Unfortunately I'm facing the same problem in my pipelines. Has anyone made any progress so far, or found an alternative way to get a reliable callback from our deploy?

Thanks in advance

@damey2011 (Author)

@algarves Unfortunately, we don't even use this anymore, so I can't say. We use EKS now.
