Hello,
I'm working on a prototype that instruments application metrics into Prometheus for an application hosted on ECS as an application-load-balanced Fargate service. The AWS OTel Collector runs as a sidecar container, configured as below.
This works well with a single task. With multiple instantiations of the task definition, however, I'm unable to distinguish which task instance a metric was scraped from, which results in inaccurate data.
Each metric has an instance label, but it holds the same value for every instance: 0.0.0.0:8080, which is the scrape target.
As a result, every collector writes the same metric and labels to AWS Managed Prometheus, and I have found no way to tell them apart.
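For reference, this symptom is what a static scrape target produces: every sidecar scrapes the app on its own loopback, so the `instance` label is identical across tasks. A minimal sketch of what I assume the receiver config looks like (the job name `aws-otel-app` and target `0.0.0.0:8080` are taken from the labels I see; everything else is assumed):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: aws-otel-app
          static_configs:
            # Same literal target in every task, hence the
            # identical instance="0.0.0.0:8080" label everywhere.
            - targets: ['0.0.0.0:8080']
```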
I tried the following exporter option:

```yaml
resource_to_telemetry_conversion:
  enabled: true
```

This added the labels service_name and service_instance_id, but their values are not unique either: aws-otel-app (the job name) and 0.0.0.0:8080 (the scrape target) for all metrics.
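For context, here is roughly where that option sits, assuming the `prometheusremotewrite` exporter is used to write to AWS Managed Prometheus (the endpoint value is a hypothetical placeholder, not from my actual config):

```yaml
exporters:
  prometheusremotewrite:
    # Hypothetical placeholder; the real AMP remote-write URL goes here.
    endpoint: ${AMP_REMOTE_WRITE_ENDPOINT}
    resource_to_telemetry_conversion:
      # Copies resource attributes onto each exported metric as labels,
      # but only attributes that actually exist on the resource.
      enabled: true
```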
I can see the target metric has the following ECS-related labels:
aws_ecs_cluster_name, aws_ecs_launchtype, aws_ecs_service_name, aws_ecs_task_arn, aws_ecs_task_family, aws_ecs_task_id, aws_ecs_task_known_status, aws_ecs_task_launch_type, aws_ecs_task_pull_started_at, aws_ecs_task_pull_stopped_at, aws_ecs_task_revision.
Applying the aws_ecs_task_id label to all the other metrics would solve this, but I have been unable to do so successfully.
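One approach I'm considering, sketched below under the assumption that the collector build includes the `resourcedetection` processor with its `ecs` detector: the detector queries the ECS task metadata endpoint and attaches task-level resource attributes (such as `aws.ecs.task.arn`), which `resource_to_telemetry_conversion` would then turn into unique per-task metric labels. Whether the detector emits a separate task-id attribute may depend on the collector version, so treat this as a sketch rather than a confirmed fix:

```yaml
processors:
  resourcedetection:
    # The ecs detector reads the ECS task metadata endpoint inside the task.
    detectors: [ecs]
    timeout: 2s
    # Keep any attributes the SDK or receiver already set.
    override: false

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      # Run detection before export so the ECS resource attributes
      # are present when resource_to_telemetry_conversion fires.
      processors: [resourcedetection]
      exporters: [prometheusremotewrite]
```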
I appreciate any help or guidance,
Thanks,
Matt
adot-config -