
http: superfluous response.WriteHeader call #257

Closed
corford opened this issue Aug 16, 2019 · 17 comments · Fixed by #310
Labels: bug, help wanted (PRs for this issue are especially welcome)

Comments

@corford

corford commented Aug 16, 2019

We are seeing a lot of the following in our statsd_exporter logs (this was also present in v0.12.1).

time="2019-08-16T16:14:35Z" level=info msg="Starting StatsD -> Prometheus Exporter (version=0.12.2, branch=HEAD, revision=d308796e97674ae29ddc85aa046725d21d8766e9)" source="main.go:156"
time="2019-08-16T16:14:35Z" level=info msg="Build context (go=go1.12.7, user=root@63ddb3bb8807, date=20190725-13:15:09)" source="main.go:157"
time="2019-08-16T16:14:35Z" level=info msg="Accepting StatsD Traffic: UDP :9125, TCP :9125, Unixgram " source="main.go:158"
time="2019-08-16T16:14:35Z" level=info msg="Accepting Prometheus Requests on :9102" source="main.go:159"
2019/08/16 16:16:07 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader (delegator.go:58)
2019/08/16 16:16:37 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader (delegator.go:58)
2019/08/16 16:18:08 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader (delegator.go:58)
2019/08/16 16:18:39 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader (delegator.go:58)
2019/08/16 16:19:09 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader (delegator.go:58)
2019/08/16 16:19:40 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader (delegator.go:58)
2019/08/16 16:21:11 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader (delegator.go:58)
2019/08/16 16:24:14 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader (delegator.go:58)
2019/08/16 16:25:15 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader (delegator.go:58)

For context, this issue appears similar in nature: caddyserver/caddy#2537

@matthiasr
Contributor

Interesting … this seems to happen on a cycle of a little more than 30s, but not always.

Do you have more than one Prometheus scraping these? Can you find out if their scrape cycles are aligned?

matthiasr added the bug label Aug 19, 2019
@matthiasr
Contributor

I'm wondering if this is a race condition with the wroteHeader check. @beorn7?

@corford
Author

corford commented Aug 19, 2019

Only one Prometheus is scraping this endpoint (at a 5-second interval). statsd_exporter and Prometheus each run in their own container. Both are on the same host machine and started together at system boot via docker-compose.

@corford
Author

corford commented Aug 19, 2019

Here's a full 24-hour log dump (there seems to be no discernible cycle or pattern to it): https://hastebin.com/raw/zeyakuqela

@beorn7
Member

beorn7 commented Aug 19, 2019

The code in client_golang/prometheus/promhttp doesn't really check the wroteHeader state. That's only used in the Write method to call WriteHeader if necessary.

I'm wondering if we need wroteHeader at all. The wrapped implementations should handle header writing on their own. We are just delegating here.

In any case, the superfluous call must originate from higher up the stack. Apparently, the standard http package doesn't protect against those calls, and neither does promhttp.

@beorn7
Member

beorn7 commented Aug 19, 2019

Now I at least understand why we have the WriteHeader check: because we want to observe the status code when WriteHeader is called, so we don't leave it to the underlying implementation to call it.
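For readers who haven't looked at the delegator, here is a minimal sketch of the pattern described in the two comments above (this is not the actual promhttp source; the type and field names are only illustrative):

```go
// Simplified sketch of the wroteHeader bookkeeping discussed above.
// Not the real promhttp code; names are illustrative.
package sketch

import "net/http"

type responseWriterDelegator struct {
	http.ResponseWriter
	status      int
	wroteHeader bool
}

func (d *responseWriterDelegator) WriteHeader(code int) {
	// Record the status code here so the instrumentation can observe it,
	// instead of leaving the call entirely to the wrapped ResponseWriter.
	d.status = code
	d.wroteHeader = true
	d.ResponseWriter.WriteHeader(code)
}

func (d *responseWriterDelegator) Write(b []byte) (int, error) {
	// Mirror net/http semantics: a Write without an explicit WriteHeader
	// implies a 200 OK. This is the only place wroteHeader is consulted.
	if !d.wroteHeader {
		d.WriteHeader(http.StatusOK)
	}
	return d.ResponseWriter.Write(b)
}
```

If two goroutines went through Write concurrently, both could observe wroteHeader == false and both would call WriteHeader, and the standard library would then log the second call as superfluous.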

@beorn7
Member

beorn7 commented Aug 19, 2019

The race @matthiasr proposed could certainly happen if multiple Write calls on the same ResponseWriter happen concurrently. I'd assume, though, that a ResponseWriter is not generally expected to be goroutine-safe.

@beorn7
Member

beorn7 commented Aug 19, 2019

I'll check implementations from the standard library. If they aren't concurrency-safe either, we know that it's just not OK to call Write concurrently.

@beorn7
Member

beorn7 commented Aug 19, 2019

I have confirmed that the standard library implementation of a ResponseWriter follows the same assumptions (that Write and friends won't be called concurrently).

Thus, my theory here is that in statsd_exporter (and also in caddy?) some concurrent calling is involved that leads to spurious calls of WriteHeader. If I am right, then we have a data race here (which should also show up with the race detector).

In other news, my research revealed that Flush needs the same treatment WRT WriteHeader. @matthiasr I have sent you prometheus/client_golang#634.
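As a sanity check of the failure mode (an assumption, not taken from the statsd_exporter code path), any handler that ends up calling WriteHeader twice reproduces the exact log line reported above:

```go
// Deliberate double WriteHeader to reproduce the log message.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		_, _ = w.Write([]byte("ok\n"))
		// The second call is ignored, and net/http logs:
		// "http: superfluous response.WriteHeader call from ..."
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":9102", nil))
}
```

If the double call in the exporter really comes from two goroutines rather than a single code path, building the exporter with the -race flag (go build -race) should surface it, as suggested above.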

@svanscho

svanscho commented Sep 12, 2019

Interesting, we are seeing the same issue here as well. Happens about every 30s with occasional skips to 1 minute intervals.

@corford
Author

corford commented Jan 28, 2020

@beorn7 were you able to confirm the data race?

@beorn7
Member

beorn7 commented Jan 28, 2020

I'm not working on this.

@corford
Author

corford commented Jan 28, 2020

I'd love to solve this but I don't currently code Go (...and it doesn't seem like a very easy "first issue" to try and pick up).

@vaibhavkhurana2018
Contributor

Do we have anything on this? I am also facing the same issue. In my case the interval is every 3 seconds.

2020/02/12 08:12:33 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader (delegator.go:58)
2020/02/12 08:12:36 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader (delegator.go:58)
2020/02/12 08:12:39 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader (delegator.go:58)
2020/02/12 08:12:42 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader (delegator.go:58)

I do have multiple Prometheus instances scraping the exporter, as I wanted HA for the metrics. But I doubt that is the cause, as it happens even with a single Prometheus scraping the metrics.

@rayjanoka

I had something running a health check against the /metrics path that caused the error log... I'm still not sure why it behaves like that, though.

@matthiasr
Contributor

As noted in prometheus/graphite_exporter#123, this is likely fixable by updating the Prometheus client library to 1.5.1 or later.
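For anyone building the exporter from source before a release picks this up, the suggested fix presumably comes down to bumping the client library dependency; with a Go modules build that would look roughly like the following (a sketch; the version eventually pinned by the fix may differ):

```go
// go.mod (illustrative): require a client_golang release containing the fix.
require github.com/prometheus/client_golang v1.5.1
```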

matthiasr added the help wanted label Apr 21, 2020
@isvicy

isvicy commented Apr 24, 2020

Could someone explain the consequences of this problem? Could it cause the value of a counter metric pulled by other services, such as Grafana, to decrease?
