Fix labels for Service Monitors #2878
Conversation
@IshwarKanse can you check if the behaviour is correct now? The test passed in my cluster, but I need another pair of eyes.
AFAIK, we fixed that here.
IMO, rather than creating one ServiceMonitor for both the self-scrape and the Prometheus exporter (which I think are on different ports), we should just make two ServiceMonitors. I worry that this is going to break users because it will cause dual scraping.
The ServiceMonitor has the label for the monitoring Service but not for the collector one. So it doesn't match the collector Service, and the Prometheus exporter is never scraped.
…sent Signed-off-by: Israel Blancas <iblancasa@gmail.com>
It seems like the e2e tests are failing.
The PR needs to be updated.
Signed-off-by: Israel Blancas <iblancasa@gmail.com>
Signed-off-by: Israel Blancas <iblancasa@gmail.com>
Signed-off-by: Israel Blancas <iblancasa@gmail.com>
@iblancasa We need to update this test as well: e2e-openshift/otlp-metrics-traces
The approach broadly makes sense; a few thoughts/questions.
Signed-off-by: Israel Blancas <iblancasa@gmail.com>
	"params.OtelCol.namespace", params.OtelCol.Namespace,
)

if !params.OtelCol.Spec.Observability.Metrics.EnableMetrics {
Personal opinion, but this statement can be written in a more readable way:
if params.OtelCol.Spec.Observability.Metrics.EnableMetrics && params.Config.PrometheusCRAvailability() == prometheus.NotAvailable && params.OtelCol.Spec.Mode == v1beta1.ModeSidecar {
	return true
}
l.V(2).Info("PodMonitor will not be created", "Metrics.EnableMetrics", params.OtelCol.Spec.Observability.Metrics.EnableMetrics, ....)
return false
Also, I would move the logger creation closer to where false is returned.
Why is a double condition with no log message easier to read than the current approach? The current one is easy to follow, and it is clear why the method returns false.
A single log message with all parameters makes the overall state clear. The function has two states (true and false); aligning the condition to them makes it easier to understand.
@@ -78,6 +85,25 @@ func ServiceMonitor(params manifests.Params) (*monitoringv1.ServiceMonitor, error) {
	return &sm, nil
}

func shouldCreateServiceMonitor(params manifests.Params) bool {
Personal opinion, but this statement can be written in a more readable way. See the previous comment.
Signed-off-by: Israel Blancas <iblancasa@gmail.com>
LGTM, minor nitpick about the changelog.
Signed-off-by: Israel Blancas <iblancasa@gmail.com>
@jaronoff97 could you please re-review?
Just one thought to prevent this from breaking users.
const (
	headlessLabel   = "operator.opentelemetry.io/collector-headless-service"
	monitoringLabel = "operator.opentelemetry.io/collector-monitoring-service"
@iblancasa I think we should keep these around for now, mark them as deprecated, and remove them in two or more versions rather than deleting them outright, which would constitute a breaking change.
Done
Signed-off-by: Israel Blancas <iblancasa@gmail.com>
Signed-off-by: Israel Blancas <iblancasa@gmail.com>
I think it should be fine now. I'm having some problems running this locally due to some network restrictions...
@iblancasa looks like you missed one test: https://github.com/open-telemetry/opentelemetry-operator/actions/runs/9131574679/job/25111089581?pr=2878
Signed-off-by: Israel Blancas <iblancasa@gmail.com>
Head branch was pushed to by a user without write access
@jaronoff97 thanks! I fixed the issue now. I added the label to an extra Service.
Description: Fix the labels for the ServiceMonitor integration. Exclude the headless Service from the integration with the ServiceMonitor.
Resolves #2965
Resolves #2877