
Compute build revision in KafkaConnectAssemblyOperatorPodSetTest #10113

Merged 1 commit into strimzi:main on May 21, 2024

Conversation

urbandan (Contributor) commented May 15, 2024

Type of change

  • Refactoring

Description

On each Kafka version change, the KafkaConnectAssemblyOperatorPodSetTest needs a manual update due to a hard-coded hash. This change updates the test to compute the expected hash, meaning that the test won't need a manual update on each version change.

Checklist

Please go through this checklist and make sure all applicable tasks have been done

  • Write tests
  • Make sure all tests pass
  • Update documentation
  • Check RBAC rights for Kubernetes / OpenShift roles
  • Try your changes from Pod inside your Kubernetes and OpenShift cluster, not just locally
  • Reference relevant issue(s) and close them after merging
  • Update CHANGELOG.md
  • Supply screenshots for visual changes, such as Grafana dashboards

scholzj (Member) commented May 15, 2024

Thanks for the PR. Maybe you should fill in the PR description to explain why this is a good and useful change?

The main advantage of the hard-coded value is that it checks that the revision hash is stable and not changing randomly. Having it calculated automatically in the same way as the operator does makes the test dependent on the operator implementation. But maybe more importantly, it also hides potential issues and makes it questionable what value testing it actually adds.

urbandan (Contributor, Author)

@scholzj Thank you for your comment; I've updated the description.
My focus was on simplifying the version bump process, and this seems to be the one part that is complex to update programmatically. What is the current (manual) process when setting this hard-coded value?
Would it improve this computation if the Dockerfile were also generated separately by the test, and the hash computed on top of that?

scholzj (Member) commented May 15, 2024

Yes, it needs to be updated sometimes. But it is important to make sure that the hash is always the same when nothing changes, and one way to test that is to have the hard-coded hash instead of calculating it. That way:

  • It makes you, as well as the reviewers, actually consider whether the hash really changed and whether the change is justified
  • You make sure there is no randomness

If you do not want to change it ... have you tried to just configure .spec.image for the test? That will make sure the base image in the FROM section of the Dockerfile is always the same, and I think that is what usually causes the Dockerfile to change when adding a new Kafka version.

Or is there something else you meant by the version bump process?

urbandan (Contributor, Author) commented May 15, 2024

By the "version bump process" I mean the part where the default Kafka version changes. I had to execute that process on a fork, and found that most changes can be easily automated (e.g. updating examples), except this one.
The reason I'm asking "what is the current process" is that in my case, I copied the new hash value printed by the failing test. That isn't much better than having a computed value.

So if computing the hash with the same logic as the code itself is problematic, then maybe the test could contain a "template" Dockerfile in which the Kafka version can be replaced easily, and generate the expected hash from that. We would somewhat duplicate the code, but reduce the risk of the code and the test going wrong at the same time. And with this solution we wouldn't need to manually update the hash when the Kafka version changes. Does that sound acceptable?
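The "template" Dockerfile idea could be sketched roughly as follows. This is a hedged illustration, not Strimzi's actual implementation: the template string, class name, and method name here are hypothetical, and the operator's real Dockerfile generation and revision encoding differ.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Hypothetical sketch: the test keeps its own minimal Dockerfile template
// and derives the expected revision hash from it, instead of hard-coding
// the hash or reusing the operator's own hashing code path.
public class DockerfileHashSketch {

    // Assumed template; the operator's generated Dockerfile is more complex.
    private static final String TEMPLATE = "FROM my-base-image:%s\nUSER root:root\n";

    public static String expectedRevision(String kafkaVersion) throws Exception {
        // Substitute the Kafka version into the template, then hash the result.
        String dockerfile = TEMPLATE.formatted(kafkaVersion);
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(dockerfile.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(digest);
    }

    public static void main(String[] args) throws Exception {
        // The same input must always produce the same hash (no randomness),
        // and a new Kafka version changes the hash without a manual update.
        if (!expectedRevision("3.7.0").equals(expectedRevision("3.7.0"))) {
            throw new AssertionError("revision hash is not stable");
        }
        if (expectedRevision("3.7.0").equals(expectedRevision("3.8.0"))) {
            throw new AssertionError("revision hash ignored the version change");
        }
        System.out.println("revision hash is stable and version-sensitive");
    }
}
```

The trade-off raised earlier in the thread still applies to such a sketch: because the expected hash is computed rather than hard-coded, the test no longer forces a reviewer to explicitly acknowledge a hash change; the duplicated template only reduces the risk of the operator and the test breaking in the same way.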

scholzj (Member) commented May 15, 2024

> By the "version bump process" I mean the part where the default Kafka version changes. I had to execute that on a fork, and found that most changes can be easily automated, except this one.
> The reason I'm asking what is the current process is that in my case, I copied the new hash value printed by the failing test. That isn't much better than having a computed value.

Right, so I think for that, you can just set .spec.image in the KafkaConnect resource, which will override the FROM command, and I think that should be it. Something like:

    private static final KafkaConnect CONNECT = new KafkaConnectBuilder()
            .withNewMetadata()
                .withName(NAME)
                .withNamespace(NAMESPACE)
            .endMetadata()
            .withNewSpec()
                .withImage("my-base-image:latest")
                .withReplicas(3)
            .endSpec()
            .build();
    private static final KafkaConnectCluster CLUSTER = KafkaConnectCluster.fromCrd(RECONCILIATION, CONNECT, VERSIONS, SHARED_ENV_PROVIDER);

That should make it independent of the Kafka version while still keeping the static value.

…BuildChangedCluster

By setting the base image, the test won't need an update when the default Kafka version changes.

Signed-off-by: Daniel Urban <urb.daniel7@gmail.com>
urbandan (Contributor, Author)

Thank you, I made the change. I thought that testing with the default Kafka version was the intention, but if not, this makes sense.

scholzj (Member) commented May 15, 2024

> I thought that testing with the default Kafka version was the intention, but if not, it makes sense.

I think that is more a side-effect of the default base image being based on the default Kafka version, not the intention. I think this change should work well: it makes sure the hash does not change and makes it independent of the Kafka version. Thanks.

scholzj (Member) left a comment

LGTM. Thanks.

@scholzj scholzj requested a review from ppatierno May 15, 2024 13:41
@scholzj scholzj added this to the 0.42.0 milestone May 15, 2024
ppatierno (Member) left a comment

LGTM. Thanks!

@scholzj scholzj merged commit 5810433 into strimzi:main May 21, 2024
13 checks passed
4 participants