
Prometheus exp. format: Detect and sort series with label values containing numbers numerically not lexicographically #1442

Open
linuxgcc opened this issue Feb 2, 2024 · 1 comment

Comments


linuxgcc commented Feb 2, 2024

When I accessed the node_exporter server on port 9100, there was a problem with the sorting of the data I obtained. This affected the viewing of CPU, disk, and network interface device performance data. Although the data was accurate, the ordering hurt readability.
The data obtained by my machine is as follows:

# HELP node_network_address_assign_type Network device property: address_assign_type
# TYPE node_network_address_assign_type gauge
node_network_address_assign_type{device="ens3"} 0
node_network_address_assign_type{device="eth1"} 1
node_network_address_assign_type{device="eth10"} 1
node_network_address_assign_type{device="eth11"} 1
node_network_address_assign_type{device="eth12"} 1
node_network_address_assign_type{device="eth13"} 1
node_network_address_assign_type{device="eth14"} 1
node_network_address_assign_type{device="eth15"} 1
node_network_address_assign_type{device="eth16"} 1
node_network_address_assign_type{device="eth2"} 1
node_network_address_assign_type{device="eth3"} 1
node_network_address_assign_type{device="eth4"} 1
node_network_address_assign_type{device="eth5"} 1
node_network_address_assign_type{device="eth6"} 1
node_network_address_assign_type{device="eth7"} 1
node_network_address_assign_type{device="eth8"} 1
node_network_address_assign_type{device="eth9"} 1
node_network_address_assign_type{device="lo"} 0
# HELP node_schedstat_running_seconds_total Number of seconds CPU spent running a process.
# TYPE node_schedstat_running_seconds_total counter
node_schedstat_running_seconds_total{cpu="0"} 330.698211353
node_schedstat_running_seconds_total{cpu="1"} 1656.137375267
node_schedstat_running_seconds_total{cpu="10"} 312.818634313
node_schedstat_running_seconds_total{cpu="11"} 1590.5410981
node_schedstat_running_seconds_total{cpu="12"} 542.222016261
node_schedstat_running_seconds_total{cpu="13"} 539.735986571
node_schedstat_running_seconds_total{cpu="14"} 523.277367904
node_schedstat_running_seconds_total{cpu="15"} 809.123652669
node_schedstat_running_seconds_total{cpu="2"} 1388.031920081
node_schedstat_running_seconds_total{cpu="3"} 989.502352614
node_schedstat_running_seconds_total{cpu="4"} 765.031883684
node_schedstat_running_seconds_total{cpu="5"} 1575.901650118
node_schedstat_running_seconds_total{cpu="6"} 607.088239229
node_schedstat_running_seconds_total{cpu="7"} 581.83147809
node_schedstat_running_seconds_total{cpu="8"} 641.048946244
node_schedstat_running_seconds_total{cpu="9"} 3189.396255854
# HELP node_schedstat_timeslices_total Number of timeslices executed by CPU.
# TYPE node_schedstat_timeslices_total counter
node_schedstat_timeslices_total{cpu="0"} 6.077865e+06
node_schedstat_timeslices_total{cpu="1"} 3.7093743e+07
node_schedstat_timeslices_total{cpu="10"} 6.096012e+06
node_schedstat_timeslices_total{cpu="11"} 4.3410488e+07
node_schedstat_timeslices_total{cpu="12"} 1.0455821e+07
node_schedstat_timeslices_total{cpu="13"} 1.4927241e+07
node_schedstat_timeslices_total{cpu="14"} 1.3071297e+07
node_schedstat_timeslices_total{cpu="15"} 3.4771305e+07
node_schedstat_timeslices_total{cpu="2"} 3.2173876e+07
node_schedstat_timeslices_total{cpu="3"} 1.9850484e+07
node_schedstat_timeslices_total{cpu="4"} 1.4338516e+07
node_schedstat_timeslices_total{cpu="5"} 6.1685878e+07
node_schedstat_timeslices_total{cpu="6"} 1.3234021e+07
node_schedstat_timeslices_total{cpu="7"} 1.6322225e+07
node_schedstat_timeslices_total{cpu="8"} 1.4246535e+07
node_schedstat_timeslices_total{cpu="9"} 4.4003518e+07
# HELP node_schedstat_waiting_seconds_total Number of seconds spent by processing waiting for this CPU.
# TYPE node_schedstat_waiting_seconds_total counter
node_schedstat_waiting_seconds_total{cpu="0"} 64.573517868
node_schedstat_waiting_seconds_total{cpu="1"} 119.506758061
node_schedstat_waiting_seconds_total{cpu="10"} 63.915674293
node_schedstat_waiting_seconds_total{cpu="11"} 144.224748318
node_schedstat_waiting_seconds_total{cpu="12"} 51.683863518
node_schedstat_waiting_seconds_total{cpu="13"} 58.396640696
node_schedstat_waiting_seconds_total{cpu="14"} 52.763283568
node_schedstat_waiting_seconds_total{cpu="15"} 94.317983721
node_schedstat_waiting_seconds_total{cpu="2"} 93.302203595
node_schedstat_waiting_seconds_total{cpu="3"} 80.993566733
node_schedstat_waiting_seconds_total{cpu="4"} 68.568701689
node_schedstat_waiting_seconds_total{cpu="5"} 153.731203724
node_schedstat_waiting_seconds_total{cpu="6"} 60.660492674
node_schedstat_waiting_seconds_total{cpu="7"} 75.270339783
node_schedstat_waiting_seconds_total{cpu="8"} 57.980952874
node_schedstat_waiting_seconds_total{cpu="9"} 137.482166643
@bwplotka changed the title from "Netcard, disk, cpu, performance data are sorted incorrectly" to "Prometheus exp. format: Detect and sort series with label values containing numbers numerically not lexicographically" on Apr 10, 2024
@bwplotka (Member) commented

Thanks for proposing, interesting idea! I renamed the title to express what this issue actually proposes; am I correct?

First of all, this is not an error; the specs generally and explicitly say so (e.g. for the Prometheus format):

All lines for a given metric must be provided as one single group, with the optional HELP and TYPE lines first (in no particular order). Beyond that, reproducible sorting in repeated expositions is preferred but not required, i.e. do not sort if the computational cost is prohibitive.

In client_golang we go as far as sorting those series so the output is reproducible, but only lexicographically, which is what users/scrapers would usually expect.
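
For illustration, here is a minimal sketch of that lexicographic ordering, using Go's standard sort package on a few of the device label values from the report above (the slice contents are just an excerpt typed in by hand, not output produced by client_golang itself):

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	// A subset of the device label values from the exposition above.
	devices := []string{"lo", "eth2", "eth10", "eth1", "ens3", "eth11"}

	// Plain lexicographic (byte-wise) sorting: "eth10" and "eth11"
	// end up before "eth2", exactly as seen in the reported output.
	sort.Strings(devices)

	fmt.Println(devices) // [ens3 eth1 eth10 eth11 eth2 lo]
}
```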

You mention that the only aspect motivating this change is readability:

  • Readability is subjective, but one aspect is how surprising this is. Once you realize the behaviour (that data is always sorted lexicographically), is it still unreadable?
  • How problematic is this, from 0 to 10, where 10 is unusable? To me it's only a minor issue, e.g. 1.
  • What would we expect to do with complex label values like mypod-4-23 or 2-eth1 or le=1.24? (See the sketch after this list.)
  • How expensive is it to even parse those numbers on the fly? (probably prohibitive)
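
To make the complex-value question concrete, here is a minimal, hypothetical natural-sort comparator (not part of client_golang and not a concrete proposal from this issue): it splits strings into digit and non-digit runs and compares the digit runs numerically.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// chunk returns the leading run of s (all digits or all non-digits),
// the remainder, and whether the run is numeric.
func chunk(s string) (run, rest string, numeric bool) {
	isDigit := func(b byte) bool { return b >= '0' && b <= '9' }
	numeric = isDigit(s[0])
	i := 1
	for i < len(s) && isDigit(s[i]) == numeric {
		i++
	}
	return s[:i], s[i:], numeric
}

// naturalLess compares strings chunk by chunk, treating digit runs as
// numbers, so "eth2" sorts before "eth10".
func naturalLess(a, b string) bool {
	for a != "" && b != "" {
		ar, aRest, aNum := chunk(a)
		br, bRest, bNum := chunk(b)
		if aNum && bNum {
			// Compare digit runs numerically: drop leading zeros, then a
			// shorter run is smaller, otherwise compare lexicographically.
			at, bt := strings.TrimLeft(ar, "0"), strings.TrimLeft(br, "0")
			if len(at) != len(bt) {
				return len(at) < len(bt)
			}
			if at != bt {
				return at < bt
			}
		} else if ar != br {
			return ar < br
		}
		a, b = aRest, bRest
	}
	return len(a) < len(b)
}

func main() {
	values := []string{"eth10", "eth2", "eth1", "mypod-4-23", "mypod-4-9", "2-eth1"}
	sort.Slice(values, func(i, j int) bool { return naturalLess(values[i], values[j]) })
	fmt.Println(values) // [2-eth1 eth1 eth2 eth10 mypod-4-9 mypod-4-23]
}
```

Even this simple sketch already has to make policy decisions (leading zeros, values starting with digits, floats like 1.24 being split at the dot), which is part of the cost and ambiguity discussed below.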

Also, as shown in #1443, this is not cheap: 2020 ns/op, 265 B/op, 8 allocs/op vs 88625 ns/op, 52093 B/op, 626 allocs/op for a single comparison (we didn't see the benchmark code, though). It is perhaps possible to optimize, but even trivial number detection and parsing will be costly on each scrape (or new metric creation).
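
As a rough illustration of how such a cost could be measured, a benchmark sketch along these lines could be used (this is not the benchmark from #1443, and it assumes the hypothetical naturalLess helper from the sketch above lives in the same package):

```go
package main

import (
	"sort"
	"testing"
)

var deviceValues = []string{"ens3", "lo", "eth1", "eth2", "eth3", "eth10", "eth11", "eth12"}

// Baseline: plain lexicographic sorting, as done today.
func BenchmarkSortLexicographic(b *testing.B) {
	for n := 0; n < b.N; n++ {
		s := append([]string(nil), deviceValues...)
		sort.Strings(s)
	}
}

// Hypothetical: number-aware sorting using the naturalLess sketch above.
func BenchmarkSortNatural(b *testing.B) {
	for n := 0; n < b.N; n++ {
		s := append([]string(nil), deviceValues...)
		sort.Slice(s, func(i, j int) bool { return naturalLess(s[i], s[j]) })
	}
}
```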

However, this might still be useful for some, so:

  • I would gather more data from multiple maintainers/users on whether this is what we want to introduce to all SDKs/projects by default. This is to ensure this behaviour is not actually surprising AND that the readability penalty is big enough to justify the effort.
  • Perhaps experiment with this as a separate layer (reparsing and resorting, for now), or as a separate option in client_golang, or even an HTTP parameter (later on).
