
Added option to run existing swarm service #49

Open · wants to merge 23 commits into master
Conversation


@Duvel Duvel commented Sep 5, 2018

Solves #34

Just supply the service name and the service will be scaled to 1 and, after the run finishes, back to 0.
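For illustration, such a job might be configured roughly like this (the exact section and key names in this PR may differ, and mystack_backup is a made-up service name):

[job-service-run "backup"]
schedule = 0 3 * * *
service = mystack_backup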

Also fixed:

  • an error that was being ignored
  • logs are now available for logging

@codecov

codecov bot commented Sep 5, 2018

Codecov Report

Merging #49 into master will decrease coverage by 3.83%.
The diff coverage is 25%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master      #49      +/-   ##
==========================================
- Coverage   67.67%   63.83%   -3.84%     
==========================================
  Files          16       16              
  Lines         826      896      +70     
==========================================
+ Hits          559      572      +13     
- Misses        210      264      +54     
- Partials       57       60       +3
Impacted Files Coverage Δ
core/runservice.go 47.64% <25%> (-20.36%) ⬇️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 7ba06cc...9493e79. Read the comment docs.

@Duvel
Author

Duvel commented Sep 11, 2018

Solved #51 as well.

@taraspos
Collaborator

taraspos commented Nov 4, 2019

Hey @Duvel
I picked up maintenance of this project and wanted to merge your changes; however, it seems like it breaks the tests. You can find the error in the Travis build.

Would be great if you could take a look, even if this PR is already a year old :D

Thanks!

@Duvel
Author

Duvel commented Nov 5, 2019

Hi @trane9991 ,

Good to see that somebody is picking up this project!

I had a look at Travis, but without a decent stack trace it took some time. I never code in Go, so I'm not that familiar with it.

But I think I found an issue in the code and am now trying to fix it.

@Duvel
Author

Duvel commented Nov 5, 2019

As I'm away from my computer for a week, I just edited it in GitHub. I hope it works now.

@G-regL

G-regL commented Nov 7, 2019

Is it possible to slightly modify this PR so that we can run a command and not just scale the service up?

Or alternatively, make it so that if we provide both a service name and a command, it runs that command inside one of the already existing tasks for that service? If there are no tasks for the listed service, it could scale it up, run the command, and then scale it back down.

@Duvel
Author

Duvel commented Nov 12, 2019

@G-regL You can already run a command as a service; this PR runs an already existing service.

Executing a command in an existing container of a service is quite a different thing, so I would suggest creating a new issue to request this if the existing option doesn't work.

@Duvel
Author

Duvel commented Dec 9, 2019

Hi @trane9991, any updates on this?

@taraspos
Collaborator

Hey, sorry, I didn't really have much time to look into this.
I'm traveling at the moment and not sure when I will be able to review/test/merge it, but I will try ASAP, sorry.

@Duvel
Author

Duvel commented Dec 14, 2019

No rush, just wondering :) I'll see next week if I can resolve the conflicts with master.

@Duvel
Author

Duvel commented Dec 25, 2019

Giving up for now; I don't know enough Go to fix this error.

@adrian549092

Hey guys, sorry to comment here about an issue that is not being addressed in this PR, but I am hoping to bring some attention to it as it is impacting my deployment. #43 was opened in June 2018; I would really appreciate it if someone could take a look at it.

@Duvel
Author

Duvel commented Apr 1, 2020

@trane9991 I see that I forgot to let you know that I was able to fix the last error and that the build is successful.

@taraspos
Collaborator

taraspos commented Apr 1, 2020

Thanks, @Duvel,
I will take a look here ASAP (probably on the weekend).

@taraspos
Collaborator

taraspos commented Apr 1, 2020

Hey @adrian549092, sorry for my absence. Does the issue still affect your deployment?
If so, I will try to take a look at the weekend as well.

@adrian549092

> Hey @adrian549092, sorry for my absence. Does the issue still affect your deployment?
> If so, I will try to take a look at the weekend as well.

Hey @trane9991, thanks for reaching out. I managed to get around it, but I could really use it if it was fixed.

@taraspos
Collaborator

taraspos commented Apr 5, 2020

Hey @Duvel, I was thinking about this functionality, and I'm not sure if this is the right way to implement it, because it looks confusing: the job name says job-service-run, but what it actually does is scale the service. Swarm doesn't seem to be designed for this purpose.

I have the following ideas:

  1. What if, instead of scaling the service, we create a new service that is a copy of the target service (except for setting the MaxRetryPolicy and even overwriting the Command) and remove it afterward, as is done in the current implementation of job-service-run?
  2. If scaling a service is indeed needed, we could create a separate job, job-service-scale, which would accept a parameter like replicas that can be set to 0, 1, ... N, or to +N to scale up and -N to scale down (a rough sketch follows below).
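
Purely as an illustration of idea 2 (this job type and these keys don't exist yet; everything here is hypothetical):

[job-service-scale "scale-up-backup"]
schedule = 0 3 * * *
service = mystack_backup
replicas = +1

A companion job with replicas = -1 (or an absolute 0) could scale the service back down afterwards.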

@Duvel
Author

Duvel commented Apr 5, 2020

@trane9991 What do you mean Swarm isn't designed for this?

I don't think adding and removing a service is a good idea, as you will lose the details and logs of the job. I like them to be available when needed, at least those from the last run. This also applies to container jobs, but I don't use those.

In my experience, scaling higher than 1 is a bad idea, as this can lead to false positives. So a dedicated job for running more instances is, in my view, not necessary. I would now always scale to 0 first and then back up to 1. This is what I learned after using it for a while and running into problems due to glitches in Docker.

Let me try to explain why I created this feature. I always use Swarm and stacks to deploy my applications, because of the services. In my stack file I also have the jobs that need to be run when necessary, like a backup, restore or bulk import. This way I have everything related to the proper working of an application in one file. The jobs are defined with 0 instances and a no-restart policy. This allows me to run a job manually with a simple scale up and down. Running the job with ofelia does it in exactly the same way, so I don't need to define the job in 2 different ways. Because of the stack, the volumes, networks, configs and secrets are also defined.
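
As an illustration of that pattern (not part of this PR, and the names are made up), a job service in such a stack file might be declared with zero replicas and no restart policy:

  backup:
    image: myorg/backup-tool
    command: ["/run-backup.sh"]
    deploy:
      replicas: 0
      restart_policy:
        condition: none

Running the job then just means scaling the service to 1 and back to 0, whether manually or via ofelia.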

Writing this down, I don't really find the name job-service-run confusing; the only difference is that the service already exists.

@taraspos
Collaborator

taraspos commented Apr 5, 2020

When I said that "Swarm isn't designed for this", I meant scaling a service up/down to execute a one-off task.

For your use case it looks completely fine (btw, are you currently using your fork of Ofelia to do this?); however, I'm a bit afraid that other people may misunderstand what this does.

The job-service-run job has the following configuration:

[job-service-run "service-executed-on-new-container"]
schedule = 0,20,40 * * * *
image = ubuntu
network = swarm_network
command =  touch /tmp/example

In the case of an "existing service", the image, network and command settings are ignored. People less familiar with how Swarm services work, and with how the "existing service job" is implemented, may be under the impression that this will somehow run a container with the provided configuration as part of the service, while in reality a scale up/down will happen and image, network and command will be ignored. Also, they may have a restart policy configured on their services, which would lead to multiple executions of the same job, which could be harmful.

I appreciate the work that you did and understand that the current implementation is convenient for your workflow; however, I'm not sure if it will be generic enough to fit every use case. Also, I want to try to make "jobs" as simple as possible, without hidden surprises.

@Duvel
Author

Duvel commented Apr 5, 2020

Yup, I have been running a build of my fork for a few years now.

Personally, now that I've looked into the current documentation, I don't think job-service-run is a good name at all. Looking at the new jobs.md, the current capability apparently isn't clear either. I think it currently creates a new service and isn't able to run anything in an existing service.

Maybe it would be better to have something like job-new-service-run and job-existing-service-run.

And regarding the fear of running double jobs, we can always do extra checks to prevent user errors. As I mentioned, always scaling to 0 first is a good way to go. We can also check that there are no currently running tasks for the service, and whether the right restart policy is set. I guess it will probably always be a more advanced option, but it also allows for more possibilities, like volumes and environment variables.

@taraspos
Collaborator

taraspos commented Apr 5, 2020

Yes, you are right about jobs.md, there is a mistake.
Currently, it can run only a new service.

I would avoid renaming existing commands because I think most users use the ofelia:latest Docker image, and that could break the setup for some.

I would like to do versioned releases, but I do not have permissions for Docker Hub, so only the automated build works now, which always points to latest.

@Duvel
Author

Duvel commented Apr 5, 2020

Then I would suggest we introduce a new command for the existing service.

Maybe we could deprecate the current command and add a log notice that it has been deprecated? But if you decide to provide both options, I think they need to be easily distinguishable, and you don't want to keep the old command around forever.

@4n70w4

4n70w4 commented Aug 19, 2020

Hi! Any news?

@taraspos
Collaborator

Hey @4n70w4, this has come to a halt for now :/

@4n70w4

4n70w4 commented Aug 21, 2020

Very sad. It turns out that I can't use an existing container, and I can't configure the container (volumes, environment variables) through job-service-run. This product is not suitable for swarm mode. (((
