Collectors os, cs and service are timing out #1413

Open
lbi22 opened this issue Feb 20, 2024 · 2 comments

lbi22 commented Feb 20, 2024

We have multiple Windows instances in our company, but some of them are behaving oddly: for a while now we haven't received any metrics from some specific collectors, namely os, cs, and service.

Looking at the logs, it seems that they are timing out. I tried to increase the timeout with --scrape.timeout-margin=-20, but it didn't change anything. Here is an example of the output:

.\wmi_exporter.exe : time="2024-02-20T23:53:07Z" level=warning msg="No where-clause specified for service collector. This will generate a very large number of metrics!" source="service.go:39"
At line:1 char:1
+ .\wmi_exporter.exe --scrape.timeout-margin=-20
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (time="2024-02-2..."service.go:39":String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError
 
time="2024-02-20T23:53:07Z" level=info msg="Enabled collectors: cs, logical_disk, net, os, service, system, textfile, cpu" source="exporter.go:327"
time="2024-02-20T23:53:07Z" level=info msg="Starting WMI exporter (version=0.11.1, branch=master, revision=7890c9ce9193c258c7cfe0df3aa3ecf1830aa0bc)" 
source="exporter.go:351"
time="2024-02-20T23:53:07Z" level=info msg="Build context (go=go1.13.3, user=, date=20200419-19:54:05)" source="exporter.go:352"
time="2024-02-20T23:53:07Z" level=info msg="Starting server on :9182" source="exporter.go:355"
time="2024-02-20T23:53:46Z" level=warning msg="Collection timed out, still waiting for [cs os service]" source="exporter.go:189"
time="2024-02-20T23:53:54Z" level=warning msg="Collection timed out, still waiting for [cs os service]" source="exporter.go:189" 

I then upgraded to the latest version of windows_exporter, but the issue remains, although now the only collector that is still failing is service:

PS C:\Program Files\windows_exporter> ./windows_exporter.exe --scrape.timeout-margin=-20
./windows_exporter.exe : ts=2024-02-20T23:47:38.887Z caller=service.go:93 level=warn collector=service msg="No where-clause specified for service collector. This will generate a very large number of metrics!"
At line:1 char:1
+ ./windows_exporter.exe --scrape.timeout-margin=-20
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (ts=2024-02-20T2...er of metrics!":String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError
 
ts=2024-02-20T23:47:38.887Z caller=textfile.go:103 level=info collector=textfile msg="textfile collector directories: C:\\Program Files\\windows_exporter\\textfile_inputs"
ts=2024-02-20T23:47:38.948Z caller=exporter.go:165 level=info msg="Running as \Administrator"
ts=2024-02-20T23:47:38.948Z caller=exporter.go:172 level=info msg="Enabled collectors: cpu, logical_disk, physical_disk, os, system, textfile, cs, net, service"
ts=2024-02-20T23:47:38.948Z caller=exporter.go:225 level=info msg="Starting windows_exporter" version="(version=0.25.1, branch=heads/tags/v0.25.1, revision=f70fa009de541dc99ed210aa7e67c9550133ef02)"
ts=2024-02-20T23:47:38.948Z caller=exporter.go:226 level=info msg="Build context" build_context="(go=go1.21.5, platform=windows/amd64, user=, date=20240116-17:53:51, tags=unknown)"
ts=2024-02-20T23:47:38.949Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9182
ts=2024-02-20T23:47:38.949Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9182
ts=2024-02-20T23:48:24.650Z caller=prometheus.go:168 level=warn msg="Collection timed out, still waiting for [service]"
ts=2024-02-20T23:48:39.874Z caller=prometheus.go:168 level=warn msg="Collection timed out, still waiting for [service]"
ts=2024-02-20T23:48:54.647Z caller=prometheus.go:168 level=warn msg="Collection timed out, still waiting for [service]"
ts=2024-02-20T23:49:20.903Z caller=prometheus.go:168 level=warn msg="Collection timed out, still waiting for [service]"
ts=2024-02-20T23:49:24.645Z caller=prometheus.go:168 level=warn msg="Collection timed out, still waiting for [service]" 
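
Since the startup warning is about the missing where-clause for the service collector, I am wondering whether restricting the collector would at least avoid the timeout. Something like the following is what I have in mind, assuming --collector.service.services-where is still honoured by 0.25.1 (the service names below are placeholders, not our real selection):

# Limit the service collector to a few services instead of enumerating all of them;
# W32Time and Spooler are placeholder names for illustration only
./windows_exporter.exe --scrape.timeout-margin=-20 `
    --collector.service.services-where "Name='W32Time' OR Name='Spooler'"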

This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

@github-actions github-actions bot added the Stale label May 21, 2024
@jkroepke
Member

The service collector is really slow; we may have a solution for it: #1497
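
In the meantime, the exporter's own windows_exporter_collector_duration_seconds metric should show how long each collector actually takes, which would confirm whether service is the only slow one. For example, from PowerShell on the host (default port assumed):

# Fetch the metrics page and show the per-collector timings
(Invoke-WebRequest -UseBasicParsing http://localhost:9182/metrics).Content -split "`n" |
    Select-String 'windows_exporter_collector_duration_seconds'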

@github-actions github-actions bot removed the Stale label May 22, 2024