
timeouts does not apply all the time #1169

Open
miberecz opened this issue Mar 27, 2024 · 2 comments
Labels
🐛 bug Something isn't working

Comments

@miberecz

Describe the bug
I use the provider through the Pulumi wrapper, where I noticed an issue. Eventually we tracked it down to an upstream issue:
muhlba91/pulumi-proxmoxve#266

If a timeout value is set (e.g. timeout_shutdown_vm), it applies the first time, but not afterwards.

To Reproduce
Steps to reproduce the behavior:

  1. Create a simple VM resource
  2. terraform apply
  3. After 180 seconds, the timeout occurs as it should:
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

proxmox_virtual_environment_vm.timeouttester9000: Destroying... [id=1301]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 10s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 20s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 30s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 40s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 50s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m0s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m10s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m20s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m30s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m40s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m50s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m0s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m10s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m20s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m30s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m40s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m50s elapsed]
╷
│ Error: error waiting for VM shutdown: error retrieving task status: received an HTTP 599 response - Reason: Too many redirections
  4. Run terraform apply one more time, and the timeout is not applied again:
Plan: 1 to add, 0 to change, 1 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

proxmox_virtual_environment_vm.timeouttester9000: Destroying... [id=1301]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 10s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 20s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 30s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 40s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 50s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m0s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m10s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m20s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m30s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m40s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 1m50s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m0s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m10s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m20s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m30s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m40s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 2m50s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 3m0s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 3m10s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 3m20s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 3m30s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 3m40s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 3m50s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 4m0s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 4m10s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 4m20s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 4m30s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 4m40s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 4m50s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 5m0s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 5m10s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 5m20s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 5m30s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 5m40s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 5m50s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 6m0s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 6m10s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 6m20s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 6m30s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 6m40s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 6m50s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 7m0s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 7m10s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 7m20s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 7m30s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 7m40s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 7m51s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 8m1s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 8m11s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 8m21s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 8m31s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 8m41s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 8m51s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 9m1s elapsed]
proxmox_virtual_environment_vm.timeouttester9000: Still destroying... [id=1301, 9m11s elapsed]
^C
Interrupt received.
Please wait for Terraform to exit or data loss may occur.
Gracefully shutting down...

Stopping operation...
╷
│ Error: error waiting for VM shutdown: error retrieving task status: failed to perform HTTP GET request (path: nodes/test-devops-proxmox02/tasks/UPID:test-devops-proxmox02:000425B4:063B204F:66017E41:qmshutdown:1301:root@pam:/status) - Reason: Get "https://10.10.119.185:8006/api2/json/nodes/test-devops-proxmox02/tasks/UPID:test-devops-proxmox02:000425B4:063B204F:66017E41:qmshutdown:1301:root@pam:/status": context canceled
│ 
│ 
╵
╷
│ Error: execution halted
│ 
│ 
╵
╷
│ Error: execution halted
│ 

Minimal Terraform configuration that reproduces the issue:

resource "proxmox_virtual_environment_vm" "timeouttester9000" {
  bios          = "seabios"
  vm_id         = "1301"
  name          = "timeouttester9000"
  scsi_hardware = "virtio-scsi-pci"
  node_name     = "test-devops-proxmox02"
  timeout_clone = 180
  timeout_create = 180
  timeout_move_disk = 180
  timeout_migrate = 180
  timeout_reboot = 180
  timeout_shutdown_vm = 180
  timeout_start_vm = 180
  timeout_stop_vm = 180

  agent {
    timeout = "3m"
  }
  cpu {
    cores   = 3
    sockets = 1
    type    = "qemu64"
  }
...

Expected behavior
Timeouts are consistent across runs.

  • Single or clustered Proxmox: Clustered
  • Provider version (ideally it should be the latest version): 0.50.0
  • Terraform version: 1.7.5
  • OS (where you run Terraform from): WSL Ubuntu 22.04
  • Debug logs (TF_LOG=DEBUG terraform apply):
@miberecz miberecz added the 🐛 bug Something isn't working label Mar 27, 2024
@bpg
Owner

bpg commented Mar 27, 2024

Hi @miberecz 👋🏼

Sorry, I don't fully understand your use case. Are you trying to delete the existing VM, or applying a change that causes the VM to be re-created (hence the "destroying..." in the output)?
Is the VM running when you call apply the second time?
Is the VM cloned from a template, or a standalone one?
You don't have the agent enabled; was that on purpose, to trigger the timeout?

There are many different timeouts in the code, and it is important to understand the use case to identify which one (or which combination of them) does not work as expected.
Could you please provide a bit more detail?

@miberecz
Author

miberecz commented Mar 28, 2024

Yes, sorry if I wasn't clear enough.
I'm updating an existing VM in this example (the CPU core count from 2 to 3, but it can be any resource value).
Yes, the VM is running the second time.
It's a VM cloned from a template.
The agent is disabled to trigger the timeout.
The original issue is that our network sometimes has problems, and we get failed Pulumi runs because of that. It's really hard to retry if you have to wait 1800 seconds when an issue occurs.
So I lowered the timeouts, which is when I noticed the problem. To simulate a network issue I just disable the agent, because I noticed that the behavior is the same in both cases.
