
fix: Eventually() missing Should() statement and sync error #11101

Merged
merged 1 commit into kubevirt:main from the fix-eventually branch on Feb 3, 2024

Conversation


@jcanocan jcanocan commented Jan 29, 2024

What this PR does / why we need it:
The linter enforces the usage of Should() statements when an Eventually check is used.
This PR adds the missing Should() checks to those Eventually() statements.
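
To illustrate, here is a minimal before/after sketch of the pattern involved (the identifiers come from the affected test in tests/vm_test.go; the "after" form follows the reviewer suggestion adopted further down):

// Before: no Should()/ShouldNot() terminates the Eventually call, so the
// assertion is never actually evaluated; this is what the linter flags.
Eventually(getHandlerNodePod(virtClient, nodeName).Items[0],
    120*time.Second, time.Second, HaveConditionFalse(k8sv1.PodReady))

// After: wrap the check in a Gomega function and terminate with Should(Succeed()).
Eventually(func(g Gomega) {
    g.Expect(getHandlerNodePod(virtClient, nodeName).Items[0]).To(HaveConditionFalse(k8sv1.PodReady))
}, 120*time.Second, time.Second).Should(Succeed())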

Also, DeferCleanup has been dropped in favor of defer. resetToDefaultConfig() is called before the test's DeferCleanup, producing the following error:

  [FAILED] in [AfterEach] - tests/utils.go:1601 @ 01/29/24 15:32:28.576
  << Timeline
  [FAILED] Timed out after 10.001s.
  Unexpected error:
      <*errors.errorString | 0xc006203230>: 
      resource & config versions (5548 and 4736 respectively) are not as expected. component: "virt-handler", pod: "virt-handler-zdv7f" 
      {
          s: "resource & config versions (5548 and 4736 respectively) are not as expected. component: \"virt-handler\", pod: \"virt-handler-zdv7f\" ",
      }
  occurred
  In [AfterEach] at: tests/utils.go:1601 @ 01/29/24 15:32:28.576
  Full Stack Trace
    kubevirt.io/kubevirt/tests.resetToDefaultConfig()
        tests/utils.go:1601 +0x85
    kubevirt.io/kubevirt/tests.TestCleanup()
        tests/utils.go:113 +0x6f
    kubevirt.io/kubevirt/tests_test.glob..func21()
        tests/tests_suite_test.go:108 +0xf

This happens because virt-handler is intentionally left not ready for the purposes of the test. resetToDefaultConfig() then forces virt-handler to reconcile; the reconcile fails, but the virt-handler resourceVersion is still updated, so the KubeVirt object ends up out of sync.
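
For reference, the cleanup shape the review converges on further down is a Context-level AfterEach (a sketch based on the final diff; it reuses the existing test helpers):

// Runs before the suite-level AfterEach that calls resetToDefaultConfig(),
// unlike a DeferCleanup registered in the spec.
AfterEach(func() {
    // Re-enable virt-handler's access to the Kubernetes API.
    libpod.DeleteKubernetesApiBlackhole(getHandlerNodePod(virtClient, nodeName), componentName)
    Eventually(func(g Gomega) {
        g.Expect(getHandlerNodePod(virtClient, nodeName).Items[0]).To(HaveConditionTrue(k8sv1.PodReady))
    }, 120*time.Second, time.Second).Should(Succeed())

    // Wait until virt-handler has caught up with the current KubeVirt config
    // before the global cleanup resets it to defaults.
    tests.WaitForConfigToBePropagatedToComponent("kubevirt.io=virt-handler", util.GetCurrentKv(virtClient).ResourceVersion,
        tests.ExpectResourceVersionToBeLessEqualThanConfigVersion, 120*time.Second)
})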

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

Special notes for your reviewer:

Checklist

This checklist is not enforced, but it's a reminder of items that could be relevant to every PR.
Approvers are expected to review this list.

Release note:

NONE

@kubevirt-bot kubevirt-bot added the release-note-none (Denotes a PR that doesn't merit a release note.), dco-signoff: yes (Indicates the PR's author has DCO signed all their commits.), and size/XS labels on Jan 29, 2024

@xpivarc xpivarc left a comment


/approve

@kubevirt-bot kubevirt-bot added the lgtm label (Indicates that a PR is ready to be merged.) on Jan 29, 2024
@kubevirt-bot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: xpivarc

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-bot kubevirt-bot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on Jan 29, 2024
@kubevirt-commenter-bot

Required labels detected, running phase 2 presubmits:
/test pull-kubevirt-e2e-kind-1.27-sriov
/test pull-kubevirt-e2e-k8s-1.29-ipv6-sig-network
/test pull-kubevirt-e2e-k8s-1.27-sig-network
/test pull-kubevirt-e2e-k8s-1.27-sig-storage
/test pull-kubevirt-e2e-k8s-1.27-sig-compute
/test pull-kubevirt-e2e-k8s-1.27-sig-operator
/test pull-kubevirt-e2e-k8s-1.28-sig-network
/test pull-kubevirt-e2e-k8s-1.28-sig-storage
/test pull-kubevirt-e2e-k8s-1.28-sig-compute
/test pull-kubevirt-e2e-k8s-1.28-sig-operator


xpivarc commented Jan 29, 2024

/hold
flakes

@kubevirt-bot kubevirt-bot added the do-not-merge/hold label (Indicates that a PR should not merge because someone has issued a /hold command.) on Jan 29, 2024
@kubevirt-bot kubevirt-bot added the size/S label and removed the lgtm and size/XS labels on Jan 29, 2024
@jcanocan jcanocan force-pushed the fix-eventually branch 2 times, most recently from 12ec7ae to 788b5b6 on January 29, 2024 at 16:57
@jcanocan jcanocan changed the title from "fix: Eventually() missing Should() statement" to "fix: Eventually() missing Should() statement and sync error" on Jan 29, 2024
tests/vm_test.go Outdated
Comment on lines 1982 to 1989
// FIXME: this is just a test to see if the flakiness is reduced
migrationBandwidth := resource.MustParse("1Mi")
kv := util.GetCurrentKv(virtClient)
kv.Spec.Configuration.MigrationConfiguration = &v1.MigrationConfiguration{
    BandwidthPerMigration: &migrationBandwidth,
}
kv = testsuite.UpdateKubeVirtConfigValue(kv.Spec.Configuration)
tests.WaitForConfigToBePropagatedToComponent("kubevirt.io=virt-handler", kv.ResourceVersion, tests.ExpectResourceVersionToBeLessEqualThanConfigVersion, 120*time.Second)
Member

@jcanocan Please remove this.

Contributor Author

As far as I was able to test, this has been the key piece to stabilize the test. I've taken it from this other test:

By("changing a setting and ensuring that the config update watcher eventually resumes and picks it up")
migrationBandwidth := resource.MustParse("1Mi")
kv := util.GetCurrentKv(virtCli)
kv.Spec.Configuration.MigrationConfiguration = &k6sv1.MigrationConfiguration{
    BandwidthPerMigration: &migrationBandwidth,
}
kv = testsuite.UpdateKubeVirtConfigValue(kv.Spec.Configuration)
tests.WaitForConfigToBePropagatedToComponent("kubevirt.io=virt-handler", kv.ResourceVersion, tests.ExpectResourceVersionToBeLessEqualThanConfigVersion, 60*time.Second)
})

Dropped the FIXME comment.

Contributor Author

I will give the flakes lane a couple of extra runs to be sure.

tests/vm_test.go Outdated

DeferCleanup(func() {
    defer func() {
        libpod.DeleteKubernetesApiBlackhole(getHandlerNodePod(virtClient, nodeName), componentName)
@EdDev EdDev Jan 30, 2024

These kinds of hacks are causing the whole test suite to collapse on itself.

There is nothing safe that can be done to fix something like this and I hope to see this whole test removed.

But this is not really related to this fix. I just think the fix is to remove it.

Contributor Author

Dropped the usage of defer for this purpose. I agree, it's not elegant. Thanks for the input.

Member

I was actually referring to hacking the virt-handler network.

It is indeed nicer without the defer, but what actually is done here is very risky and IMO does not fit our tests.
If for some reason the pod state is not fixed (e.g. the route is not deleted), we are left with a broken cluster and following tests will flake in a hard to understand manner.

IMO we should not have such tests around; we cannot cover every scenario in a reasonably safe manner.
Our resources are limited, and tests are long and flaky enough without risking them further.
If someone thinks it is worth the resources and risk, it would be better to allocate a dedicated job for such tests. I think reality dictates that we need to manage our e2e tests and focus on the main functionality and leave the edge scenarios to the community to detect (reporting bugs, running more tests on D/S projects and contributing back).

Member

There is precedent for destructive tests; if we want to run them in separate lanes or not have them at all, the decision should be made with wider consensus + maintainers.

I am not sure what is better about the AfterEach, other than it is wrong afaik.

Note: I don't agree. This is not a conformance test and if anything this does fail loudly!

@EdDev EdDev Jan 30, 2024

Well, I disagree with the presence of cluster-level destructive tests and I reasoned about it in the prev message. The fact that they exist is not a sign of them being good for the overall project.

This is not a conformance test and if anything this does fail loudly!

If tests like this fail, they either mess up all following tests or destabilize the cluster in ways we may not predict.


There is precedent for destructive tests; if we want to run them in separate lanes or not have them at all, the decision should be made with wider consensus + maintainers.

Well, I disagree with the presence of cluster-level destructive tests and I reasoned about it in the prev message. The fact that they exist is not a sign of them being good for the overall project.

This is not a conformance test and if anything this does fail loudly!

If tests like this fail, they either mess up all following tests or destabilize the cluster in ways we may not predict.

We are in no rush to get this PR merged, and from my point of view these observations are worth discussing, as @xpivarc proposed. What would be a good way of shifting this discussion away from the PR to a wider forum?

Member

I will be drafting a proposal for the wider kubevirt community to move these kinds of destructive and edge-case tests to their own separate test lane and out of the gating lanes. We are currently testing everything in the gating lanes, including severe edge cases, which does not make sense and makes CI very costly.

We will look at how to separate out these tests and where we want to run them.

@@ -1969,12 +1969,25 @@ status:

By("Blocking virt-handler from reconciling the VMI")
libpod.AddKubernetesApiBlackhole(getHandlerNodePod(virtClient, nodeName), componentName)
Eventually(getHandlerNodePod(virtClient, nodeName).Items[0], 120*time.Second, time.Second, HaveConditionFalse(k8sv1.PodReady))
Contributor

How about?

Eventually(func(g Gomega) {
	g.Expect(getHandlerNodePod(virtClient, nodeName).Items[0]).To(HaveConditionFalse(k8sv1.PodReady))
}, 120*time.Second, time.Second).Should(Succeed())

DeferCleanup(func() {
	libpod.DeleteKubernetesApiBlackhole(getHandlerNodePod(virtClient, nodeName), componentName)
	Eventually(func(g Gomega) {
		g.Expect(getHandlerNodePod(virtClient, nodeName).Items[0]).To(HaveConditionTrue(k8sv1.PodReady))
	}, 120*time.Second, time.Second).Should(Succeed())
})

Contributor Author

The issue with DeferCleanup is that it is executed after the resetToDefaultConfig() call inside the global AfterEach. So when the AfterEach tries to reset KubeVirt, the virt-handler is offline (unable to communicate with the Kubernetes API). That produces this error: "resource & config versions (5548 and 4736 respectively) are not as expected. component: \"virt-handler\", pod: \"virt-handler-zdv7f\" ".

I like the Eventually proposal; it looks elegant. I also implemented it in the blackhole-deletion check.

Member

Why did we revert the defer or JustAfterEach?

Contributor

I don't get it.
The defer is something we can avoid as a cleanup solution. JustAfterEach is generally used for different scopes.
Here, the problem was the DeferCleanup. After some searching I found onsi/ginkgo#1284 (comment):

In reality Ginkgo runs the After* family followed by the Defer* family.

which explains the source of the issue.
(We should check our other DeferCleanup usages IMHO).
Can you explain your doubts?
Thanks
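
To make that ordering concrete, here is a small self-contained sketch (illustrative only; the expected output order follows the comment quoted above and the failure observed in this PR, not anything defined here):

package ordering_test

import (
    "fmt"
    "testing"

    . "github.com/onsi/ginkgo/v2"
    . "github.com/onsi/gomega"
)

func TestOrdering(t *testing.T) {
    RegisterFailHandler(Fail)
    RunSpecs(t, "cleanup ordering")
}

var _ = Describe("cleanup ordering", func() {
    // Think of this as the suite-level cleanup that calls resetToDefaultConfig().
    AfterEach(func() { fmt.Println("outer AfterEach") })

    Context("when node becomes unhealthy", func() {
        // The Context-level cleanup this PR switches to.
        AfterEach(func() { fmt.Println("inner AfterEach") })

        It("registers a DeferCleanup", func() {
            // The cleanup style this PR moves away from.
            DeferCleanup(func() { fmt.Println("DeferCleanup") })
        })
    })
})

// Expected order: inner AfterEach, outer AfterEach, DeferCleanup ("the After*
// family followed by the Defer* family"), which is why the blackhole removal
// registered via DeferCleanup ran only after resetToDefaultConfig().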

Member

@jcanocan also observed that an AfterEach is only run after the global AfterEach afaik.

Contributor Author

Thanks @fossedihelm for this nice piece of information. It explains all the issues I faced.

As far as I was able to test, the nearest AfterEach (the one inside the Context) is always executed before the global AfterEach. However, we need a way to wait until the handler and KubeVirt config versions are in sync; that's why I observed the error mentioned in the description in the past. Sorry for the confusion.

tests/vm_test.go Outdated
@@ -1957,24 +1957,37 @@ status:

Context("[Serial] when node becomes unhealthy", Serial, func() {
    const componentName = "virt-handler"
    var nodeName string

    JustAfterEach(func() {
Contributor

AfterEach is enough here. Inner AfterEach blocks are executed before the outer one.

Contributor Author

Done.

The linter enforces the usage of `Should()` statements when an
`Eventually` check is used.

Also, `DeferCleanup` has been dropped in favor of `AfterEach`. The
[`resetToDefaultConfig()`
](https://github.com/kubevirt/kubevirt/blob/cb1b6e53540189d6664c4a8c126ab6e0a84ff8c4/tests/utils.go#L1842)
is called before the test `DeferCleanup`, creating the following error:
"resource & config versions (5548 and 4736 respectively) are not as
expected. component: \"virt-handler\", pod: \"virt-handler-zdv7f\" "

This is because the `virt-handler` will not be ready (intentionally for
the purposes of the test), and the `resetToDefaultConfig()` will force
the `virt-handler` to reconcile, which will fail, but the `virt-handler`
`resourceVersion` will be updated. Therefore, the Kubevirt object will
be out of sync.

Signed-off-by: Javier Cano Cano <jcanocan@redhat.com>
@jcanocan

Dropped the configuration update in the AfterEach to check whether WaitForConfigToBePropagatedToComponent() is enough to sync KubeVirt and virt-handler.

@fossedihelm

/lgtm
Thanks!
@xpivarc would you like to take a further look before unhold?

@kubevirt-bot kubevirt-bot added the lgtm label (Indicates that a PR is ready to be merged.) on Jan 30, 2024
@kubevirt-commenter-bot

Required labels detected, running phase 2 presubmits:
/test pull-kubevirt-e2e-kind-1.27-sriov
/test pull-kubevirt-e2e-k8s-1.29-ipv6-sig-network
/test pull-kubevirt-e2e-k8s-1.27-sig-network
/test pull-kubevirt-e2e-k8s-1.27-sig-storage
/test pull-kubevirt-e2e-k8s-1.27-sig-compute
/test pull-kubevirt-e2e-k8s-1.27-sig-operator
/test pull-kubevirt-e2e-k8s-1.28-sig-network
/test pull-kubevirt-e2e-k8s-1.28-sig-storage
/test pull-kubevirt-e2e-k8s-1.28-sig-compute
/test pull-kubevirt-e2e-k8s-1.28-sig-operator

Comment on lines +1962 to +1970
AfterEach(func() {
    libpod.DeleteKubernetesApiBlackhole(getHandlerNodePod(virtClient, nodeName), componentName)
    Eventually(func(g Gomega) {
        g.Expect(getHandlerNodePod(virtClient, nodeName).Items[0]).To(HaveConditionTrue(k8sv1.PodReady))
    }, 120*time.Second, time.Second).Should(Succeed())

    tests.WaitForConfigToBePropagatedToComponent("kubevirt.io=virt-handler", util.GetCurrentKv(virtClient).ResourceVersion,
        tests.ExpectResourceVersionToBeLessEqualThanConfigVersion, 120*time.Second)
})
Member

This is not flaky?

Contributor Author

I'm trying to find out whether this may lead to flakiness. The purpose is to wait until the configuration versions are in sync, and I've run out of ideas in this regard. So I think we should rerun pull-kubevirt-check-tests-for-flakes a couple of times to find out. Sorry for the brute-force approach 😞


@kubevirt-bot kubevirt-bot removed the lgtm label (Indicates that a PR is ready to be merged.) on Jan 30, 2024

xpivarc commented Jan 31, 2024

/retest pull-kubevirt-check-tests-for-flakes

@kubevirt-bot

@xpivarc: The /retest command does not accept any targets.
The following commands are available to trigger required jobs:

  • /test pull-kubevirt-apidocs
  • /test pull-kubevirt-build
  • /test pull-kubevirt-build-arm64
  • /test pull-kubevirt-check-unassigned-tests
  • /test pull-kubevirt-client-python
  • /test pull-kubevirt-e2e-k8s-1.27-sig-compute
  • /test pull-kubevirt-e2e-k8s-1.27-sig-network
  • /test pull-kubevirt-e2e-k8s-1.27-sig-operator
  • /test pull-kubevirt-e2e-k8s-1.27-sig-performance
  • /test pull-kubevirt-e2e-k8s-1.27-sig-storage
  • /test pull-kubevirt-e2e-k8s-1.28-sig-compute
  • /test pull-kubevirt-e2e-k8s-1.28-sig-network
  • /test pull-kubevirt-e2e-k8s-1.28-sig-operator
  • /test pull-kubevirt-e2e-k8s-1.28-sig-storage
  • /test pull-kubevirt-e2e-k8s-1.29-ipv6-sig-network
  • /test pull-kubevirt-e2e-k8s-1.29-sig-compute
  • /test pull-kubevirt-e2e-k8s-1.29-sig-compute-migrations
  • /test pull-kubevirt-e2e-k8s-1.29-sig-network
  • /test pull-kubevirt-e2e-k8s-1.29-sig-operator
  • /test pull-kubevirt-e2e-k8s-1.29-sig-storage
  • /test pull-kubevirt-e2e-kind-1.27-sriov
  • /test pull-kubevirt-e2e-kind-1.27-vgpu
  • /test pull-kubevirt-e2e-windows2016
  • /test pull-kubevirt-fossa
  • /test pull-kubevirt-generate
  • /test pull-kubevirt-manifests
  • /test pull-kubevirt-prom-rules-verify
  • /test pull-kubevirt-unit-test
  • /test pull-kubevirt-verify-go-mod

The following commands are available to trigger optional jobs:

  • /test build-kubevirt-builder
  • /test pull-kubevirt-build-s390x
  • /test pull-kubevirt-check-tests-for-flakes
  • /test pull-kubevirt-code-lint
  • /test pull-kubevirt-conformance-arm64
  • /test pull-kubevirt-e2e-arm64
  • /test pull-kubevirt-e2e-k8s-1.25-fips-sig-compute
  • /test pull-kubevirt-e2e-k8s-1.29-sig-compute-realtime
  • /test pull-kubevirt-e2e-k8s-1.29-sig-compute-root
  • /test pull-kubevirt-e2e-k8s-1.29-sig-monitoring
  • /test pull-kubevirt-e2e-k8s-1.29-sig-network-multus-v4
  • /test pull-kubevirt-e2e-k8s-1.29-sig-storage-root
  • /test pull-kubevirt-e2e-k8s-1.29-single-node
  • /test pull-kubevirt-e2e-k8s-1.29-swap-enabled
  • /test pull-kubevirt-gosec
  • /test pull-kubevirt-goveralls
  • /test pull-kubevirt-metrics-lint
  • /test pull-kubevirt-unit-test-arm64
  • /test pull-kubevirt-verify-rpms

Use /test all to run the following jobs that were automatically triggered:

  • pull-kubevirt-apidocs
  • pull-kubevirt-build
  • pull-kubevirt-build-arm64
  • pull-kubevirt-check-tests-for-flakes
  • pull-kubevirt-check-unassigned-tests
  • pull-kubevirt-client-python
  • pull-kubevirt-code-lint
  • pull-kubevirt-conformance-arm64
  • pull-kubevirt-e2e-arm64
  • pull-kubevirt-e2e-k8s-1.29-sig-compute
  • pull-kubevirt-e2e-k8s-1.29-sig-compute-migrations
  • pull-kubevirt-e2e-k8s-1.29-sig-network
  • pull-kubevirt-e2e-k8s-1.29-sig-operator
  • pull-kubevirt-e2e-k8s-1.29-sig-storage
  • pull-kubevirt-e2e-kind-1.27-vgpu
  • pull-kubevirt-e2e-windows2016
  • pull-kubevirt-fossa
  • pull-kubevirt-generate
  • pull-kubevirt-goveralls
  • pull-kubevirt-manifests
  • pull-kubevirt-prom-rules-verify
  • pull-kubevirt-unit-test
  • pull-kubevirt-unit-test-arm64
  • pull-kubevirt-verify-go-mod

In response to this:

/retest pull-kubevirt-check-tests-for-flakes

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


xpivarc commented Jan 31, 2024

/test pull-kubevirt-check-tests-for-flakes

2 similar comments
@jcanocan

/test pull-kubevirt-check-tests-for-flakes

@jcanocan

/test pull-kubevirt-check-tests-for-flakes


jcanocan commented Feb 1, 2024

/retest-required


jcanocan commented Feb 1, 2024

/test pull-kubevirt-check-tests-for-flakes


jcanocan commented Feb 1, 2024

@xpivarc the flakes lane failed, but not because of this test. Please retest it once you have taken a look.


xpivarc commented Feb 1, 2024

/hold cancel
@fossedihelm

@kubevirt-bot kubevirt-bot removed the do-not-merge/hold label (Indicates that a PR should not merge because someone has issued a /hold command.) on Feb 1, 2024

@fossedihelm fossedihelm left a comment


Thank you!

@kubevirt-bot kubevirt-bot added the lgtm label (Indicates that a PR is ready to be merged.) on Feb 1, 2024
@kubevirt-commenter-bot

Required labels detected, running phase 2 presubmits:
/test pull-kubevirt-e2e-windows2016
/test pull-kubevirt-e2e-kind-1.27-vgpu
/test pull-kubevirt-e2e-kind-1.27-sriov
/test pull-kubevirt-e2e-k8s-1.29-ipv6-sig-network
/test pull-kubevirt-e2e-k8s-1.27-sig-network
/test pull-kubevirt-e2e-k8s-1.27-sig-storage
/test pull-kubevirt-e2e-k8s-1.27-sig-compute
/test pull-kubevirt-e2e-k8s-1.27-sig-operator
/test pull-kubevirt-e2e-k8s-1.28-sig-network
/test pull-kubevirt-e2e-k8s-1.28-sig-storage
/test pull-kubevirt-e2e-k8s-1.28-sig-compute
/test pull-kubevirt-e2e-k8s-1.28-sig-operator

@kubevirt-commenter-bot

/retest-required
This bot automatically retries required jobs that failed/flaked on approved PRs.
Silence the bot with an /lgtm cancel or /hold comment for consistent failures.


kubevirt-bot commented Feb 2, 2024

@jcanocan: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: pull-kubevirt-check-tests-for-flakes
Commit: e43402d
Required: false
Rerun command: /test pull-kubevirt-check-tests-for-flakes

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@kubevirt-commenter-bot

/retest-required
This bot automatically retries required jobs that failed/flaked on approved PRs.
Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

1 similar comment
@kubevirt-commenter-bot

/retest-required
This bot automatically retries required jobs that failed/flaked on approved PRs.
Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@kubevirt-bot kubevirt-bot merged commit 2dab5d4 into kubevirt:main Feb 3, 2024
35 of 36 checks passed