
Add test for check_otel_dependencies #25519

Open · wants to merge 5 commits into main
Conversation

@liustanley (Contributor)

What does this PR do?

Adds a new directory test/otel and an invoke task check_otel_dependencies for checking the compatibility of datadog-agent modules with our OTel Collector components.

Motivation

We want to add a CI check for the following:

  • Verifying that building the datadogconnector and datadogexporter against a dev branch of datadog-agent succeeds (this fails when a breaking change is made)
  • Verifying that the Go version used for this build matches the Go version upstream, so that we can continue to import datadog-agent modules

We would like this CI check to require a review from team/opentelemetry in order to merge the PR, so that we can keep track of breaking changes that would affect upstream.
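
For illustration, a minimal sketch of what such an invoke task could look like; the module path, component directories, and the GOTOOLCHAIN pin are assumptions, not the PR's actual code:

```python
# tasks/check_otel_dependencies.py -- hypothetical module path and layout;
# the real task in this PR may be structured differently.
import os

from invoke import task

# Assumed locations of the Collector components built against the agent.
OTEL_COMPONENTS = [
    "test/otel/datadogconnector",
    "test/otel/datadogexporter",
]


@task
def check_otel_dependencies(ctx):
    """Build the OTel Collector components against the current agent branch."""
    env = dict(os.environ)
    # Since Go 1.21, the GOTOOLCHAIN variable can force a specific toolchain
    # (downloading it if needed), which is one way to pin go 1.21.0 in CI
    # regardless of the default Go version (see "Additional Notes" below).
    env["GOTOOLCHAIN"] = "go1.21.0"
    for component in OTEL_COMPONENTS:
        with ctx.cd(component):
            # A compile error here signals a breaking change in the
            # datadog-agent modules these components depend on.
            ctx.run("go build ./...", env=env)
```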

Additional Notes

We would like this invoke task / CI check to run with go 1.21.0, but I believe the default version for CI is go 1.21.9. Is there a way to override this for this test?

Possible Drawbacks / Trade-offs

Describe how to test/QA your changes

pr-commenter bot commented May 10, 2024

Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM:

inv create-vm --pipeline-id=34065856 --os-family=ubuntu

pr-commenter bot commented May 10, 2024

Regression Detector Results

Run ID: 940e2dc7-e99a-46bc-ac10-1c2498e9ae2a
Baseline: 49f6e7f
Comparison: 89fd28c

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

No significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Experiments ignored for regressions

Regressions in experiments with settings containing erratic: true are ignored.

| perf | experiment | goal | Δ mean % | Δ mean % CI |
| --- | --- | --- | --- | --- |
| | file_to_blackhole | % cpu utilization | -44.56 | [-48.81, -40.31] |

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI |
| --- | --- | --- | --- | --- |
| | pycheck_1000_100byte_tags | % cpu utilization | +1.17 | [-3.64, +5.98] |
| | basic_py_check | % cpu utilization | +0.58 | [-1.78, +2.94] |
| | file_tree | memory utilization | +0.52 | [+0.42, +0.62] |
| | tcp_syslog_to_blackhole | ingress throughput | +0.34 | [-20.82, +21.50] |
| | trace_agent_msgpack | ingress throughput | +0.01 | [+0.00, +0.03] |
| | trace_agent_json | ingress throughput | -0.00 | [-0.01, +0.01] |
| | tcp_dd_logs_filter_exclude | ingress throughput | -0.02 | [-0.06, +0.02] |
| | uds_dogstatsd_to_api | ingress throughput | -0.02 | [-0.23, +0.18] |
| | otel_to_otel_logs | ingress throughput | -0.06 | [-0.41, +0.29] |
| | idle | memory utilization | -0.60 | [-0.64, -0.57] |
| | uds_dogstatsd_to_api_cpu | % cpu utilization | -1.10 | [-3.92, +1.72] |
| | file_to_blackhole | % cpu utilization | -44.56 | [-48.81, -40.31] |

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we flag a change in performance as a "regression" -- a change worth investigating further -- only if all of the following criteria are true (a sketch of this rule as code follows the list):

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
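
Read as code, the decision rule reduces to a small predicate; this is a minimal sketch of the stated criteria, not the Regression Detector's actual implementation:

```python
def is_regression(delta_mean_pct: float, ci: tuple[float, float],
                  erratic: bool, tolerance: float = 5.0) -> bool:
    """Apply the three criteria above to a single experiment."""
    big_enough = abs(delta_mean_pct) >= tolerance            # criterion 1
    ci_excludes_zero = not (ci[0] <= 0.0 <= ci[1])           # criterion 2
    return big_enough and ci_excludes_zero and not erratic   # criterion 3


# file_to_blackhole meets criteria 1 and 2 but is marked erratic,
# so it is not reported as a regression.
print(is_regression(-44.56, (-48.81, -40.31), erratic=True))  # False
```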

@liustanley liustanley marked this pull request as ready for review May 13, 2024 13:57
@liustanley liustanley requested review from a team as code owners May 13, 2024 13:57
@liustanley liustanley added this to the 7.55.0 milestone May 13, 2024
@liustanley liustanley added the team/opentelemetry OpenTelemetry team label May 13, 2024
@ogaca-dd (Contributor) left a comment:

> We would like this CI check to require a review from team/opentelemetry in order to merge the PR, so that we can keep track of breaking changes that would affect upstream.

I understand the need to track breaking changes and to know which PR introduces a breaking change.
Adding this test implies:

  • If the test is mandatory, it prevents PRs from being merged until you fix your code, which is not really an option
  • If the test is not mandatory (we have tried to limit non-mandatory tests), then you only catch the first PR that introduces a breaking change (after that one, the code fails to compile for every PR).

Instead, I would suggest a script that uses git bisect to try to compile OTel at each commit. This lets you identify the first PR that introduces the change. After fixing your code, running the script again gives you the next PR that introduces a breaking change, and so on.
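
A minimal sketch of such a bisect predicate (the component path is an assumption; git bisect run interprets a non-zero exit status as a bad commit):

```python
#!/usr/bin/env python3
# try_build_otel.py -- hypothetical predicate for:
#   git bisect start <first-bad-commit> <last-good-commit>
#   git bisect run ./try_build_otel.py
import subprocess
import sys

# Assumed location of a Collector component that imports datadog-agent modules.
COMPONENT_DIR = "test/otel/datadogexporter"

# Exit 0 if the component still compiles at this commit, non-zero otherwise;
# git bisect uses that to converge on the first breaking commit.
result = subprocess.run(["go", "build", "./..."], cwd=COMPONENT_DIR)
sys.exit(result.returncode)
```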

Note: If you want to be warned, it might be possible to run a special test when a release is promoted.

@pducolin (Contributor) left a comment:

As discussed live, this PR comes with good intentions: preventing breaking changes in a dependent project.

The proposed implementation is not ideal, as it pushes the responsibility for breaking changes onto datadog-agent. This is per se a breaking change, and would deserve a better contract with a smaller surface, like the one we have for integrations.

It also brings potential breakage into datadog-agent: if the test were required to merge a PR, an issue on the OTel side would turn into a broken dev pipeline on the datadog-agent side, a source of SEV-4 incidents. If the test were not required, it would not solve the current issue on the OTel side, as datadog-agent contributors would still merge their breaking changes into main.

I suggest automating bumps of the pinned datadog-agent version in OTel, triggered by new commits on the datadog-agent main branch, so that you can detect any potential breaking change as soon as possible. I would keep this test on the OTel side, along the lines of the sketch below.
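
One possible shape for that automation, sketched as a scheduled job on the OTel side; the module list and the @main pinning are assumptions, not an existing workflow:

```python
# Hypothetical scheduled job in the Collector repo: bump the pinned
# datadog-agent modules to the latest main commit, then try to build.
import subprocess
import sys

# Assumed module paths; the real list would be derived from go.mod.
AGENT_MODULES = [
    "github.com/DataDog/datadog-agent/pkg/trace",
    "github.com/DataDog/datadog-agent/pkg/proto",
]

for module in AGENT_MODULES:
    subprocess.run(["go", "get", f"{module}@main"], check=True)
subprocess.run(["go", "mod", "tidy"], check=True)

# A failure here points at the agent commit range that introduced the break.
sys.exit(subprocess.run(["go", "build", "./..."]).returncode)
```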

@mx-psi (Member) commented May 15, 2024

> [pushing the responsibility for breaking changes onto datadog-agent] is per se a breaking change

@pducolin I don't understand how this is a breaking change. What is it breaking? Why is this test considered different from any other test added to datadog-agent?

> If the test is mandatory, it prevents PRs from being merged until you fix your code, which is not really an option

@ogaca-dd Any test we add introduces restrictions on what developers do: that's one of the main things tests do. Why is it different in this case?


I think this test needs to pin the versions against which we test, and we need to have a workflow (dependabot?) to bump said version over time. If that is covered, I don't understand how this test would be different from any other test added to this repository.

@pducolin (Contributor) commented:

> @pducolin I don't understand how this is a breaking change. What is it breaking? Why is this test considered different from any other test added to datadog-agent?

The difference is that we would be testing something that does not directly impact datadog-agent, but rather something external that datadog-agent does not depend on. If this test were broken because of the external code, it would prevent merging changes into datadog-agent. If we had a CVE and the pipeline was broken because of this test, we would be blocked from updating datadog-agent by something external to it. That would cause a SEV-3. I was not aware there were already such tests within the repo.

@mx-psi (Member) commented May 16, 2024

> If this test were broken because of the external code

We can make this impossible by pinning the Collector dependency versions. Would that solve your concerns?

@ogaca-dd (Contributor) commented:

> @ogaca-dd Any test we add introduces restrictions on what developers do: that's one of the main things tests do. Why is it different in this case?

@mx-psi: Imagine I have to change the signature of a function that is used by your repository. How can I merge the PR if the test fails?

Maybe I am misunderstanding the PR description and what the code does:

> Verifying that building the datadogconnector and datadogexporter against a dev branch of datadog-agent succeeds (this fails when a breaking change is made)

Isn't the dev branch the branch of an open PR, like stanley.liu/add-otel-tests?

@mx-psi (Member) commented May 16, 2024

> Imagine I have to change the signature of a function that is used by your repository. How can I merge the PR if the test fails?

You don't change the function; instead, you create a new one and deprecate the existing one. Then the OTel team migrates to the new function and we remove the deprecated one. This is the usual procedure for any library that makes breaking changes. The Agent has been used as a library for the past three years; our customers use it as a library indirectly when building their own custom distributions of the Collector.
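
The cycle described above, illustrated with a hypothetical function name and signature (shown in Python only to keep a single example language; in the Go agent modules the equivalent is a new function plus a `// Deprecated:` doc comment on the old one):

```python
import warnings


def export_traces_v2(traces, timeout_seconds):
    """New API with the changed signature."""
    ...


def export_traces(traces):
    """Deprecated: use export_traces_v2, which takes an explicit timeout."""
    warnings.warn(
        "export_traces is deprecated; use export_traces_v2 instead",
        DeprecationWarning,
        stacklevel=2,
    )
    # Delegate to the new API until all callers have migrated,
    # then delete this wrapper in a later release.
    return export_traces_v2(traces, timeout_seconds=30)
```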
