After the boot 3 schedule fails, the deployer information is missing upon restart. #5777
This does seem to be a bug.
@alsdud154 Was the deployer property in question visible in the task execution view?
@corneil Thanks for your reply.
Task information executed by the schedule [Job Execution Id 151]
Task information for the failed job re-run with the Restart button [Job Execution Id 152]
The Spring Batch applications running on k8s include org.springframework.cloud:spring-cloud-starter-task.
SCDF creates a task manifest when launching a task, and that manifest stores the deployment properties. However, when SCDF schedules a task, the deployment information is passed on to the cronjob but Data Flow does not store it. Thus when the scheduled task fails and needs to be restarted, SCDF does not have the deployment information, because it has no manifest. SCDF needs to be updated to store this manifest (deployment information) for the schedule. A possible workaround, if your tasks share the same deployment properties, is to use the global properties: https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-kubernetes-app-props
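The global-property workaround could look roughly like this in the Data Flow server configuration. This is a sketch, not a verified fix: the account name `default` and all volume values are illustrative placeholders, and it assumes the keys bind to the platform's Kubernetes deployer properties as described in the linked documentation section.

```yaml
spring:
  cloud:
    dataflow:
      task:
        platform:
          kubernetes:
            accounts:
              default:
                # Deployment-level defaults applied to every task launch on this
                # platform account, including restarts; values are placeholders.
                volumes: "[{name: 'batch-data', persistentVolumeClaim: {claimName: 'batch-pvc'}}]"
                volumeMounts: "[{name: 'batch-data', mountPath: '/data'}]"
```

Because these defaults apply to every launch on the platform account, a restart triggered from the dashboard would pick them up even without a stored manifest.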
@cppwfs Thank you.
We are running Data Flow with Helm chart version 26.8.1 [App version: 2.11.2].
We use Data Flow to register a Spring Batch application as a task.
The task is run via a schedule.
The deployer volume is set in scheduler.main.properties when the schedule is created.
The task was executed by the schedule, but it failed.
The failed job was re-run by pressing the Restart button.
At this point, the newly executed task omits the deployer volume value that was set in the schedule.
I think this is a bug.
This problem does not occur if you run the job with "LAUNCH TASK" instead of a schedule and then run it again after a failure.
Please let me know how to solve it.
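As a concrete illustration of the deployer volume settings mentioned in the steps above, the properties supplied at schedule creation would take roughly this form. This is a hypothetical sketch: the volume name, claim name, and mount path are placeholders, not the values actually used in this report.

```properties
# Hypothetical deployment properties entered when creating the schedule;
# the volume name, claim name, and mount path are placeholders.
deployer.kubernetes.volumes=[{name: batch-data, persistentVolumeClaim: {claimName: batch-pvc}}]
deployer.kubernetes.volumeMounts=[{name: batch-data, mountPath: /data}]
```

The bug described here is that these properties reach the cronjob but are not replayed when the failed execution is restarted from the dashboard.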
Schedule information
Job information executed by the schedule
Failed job re-run with the Restart button
k8s pod information for the job executed by the schedule
k8s pod information for the failed job re-run with the Restart button
Release version