
InvalidArgument: Points must be written in order. #127

Closed
vsekhar opened this issue Jan 1, 2021 · 4 comments
Assignees: dashpole
Labels: bug, monitoring, priority: p2

Comments


vsekhar commented Jan 1, 2021

After starting two local instances of my server and letting them run for about a minute, I get the following error about every minute:

2020/12/31 21:41:40 rpc error: code = InvalidArgument desc = One or more TimeSeries could not be written: Points must be written in order. One or more of the points specified had an older start time than the most recent point.: timeSeries[1-12]

Based on some earlier comments, I tried explicitly setting the host and service names, without success.

// nodeName includes a randomly generated per-instance suffix.
r.peerCount = metric.Must(meter).NewInt64ValueRecorder(
		"peers",
		metric.WithDescription("Number of peers found and joined"),
	).Bind([]label.KeyValue{
		label.String("host.id", nodeName),
		label.String("service.name", "peer_service"),
		label.String("service.instance.id", nodeName),
	}...)

I also tried setting the OC_RESOURCE_LABEL=host.id=instance_123 environment variable without success.

What is the correct way to provide an instance_id when not running on GCE or GKE? Or is there some other issue here?
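
One way to supply an instance id when not running on GCE or GKE is to attach it to the SDK resource; the sketch below is against the current otel-go resource API (not the ValueRecorder-era API above), and the nodeName value and the OTEL_RESOURCE_ATTRIBUTES example are assumptions:

package main

import (
	"context"
	"fmt"
	"log"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/sdk/resource"
)

func main() {
	ctx := context.Background()
	nodeName := "peer-1a2b3c" // hypothetical per-instance name with a random suffix

	res, err := resource.New(ctx,
		// WithFromEnv also reads OTEL_RESOURCE_ATTRIBUTES,
		// e.g. OTEL_RESOURCE_ATTRIBUTES="service.instance.id=instance_123".
		resource.WithFromEnv(),
		resource.WithAttributes(
			attribute.String("service.name", "peer_service"),
			// A distinct value per process is intended to distinguish
			// the two local server instances.
			attribute.String("service.instance.id", nodeName),
		),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Pass res to the meter provider (see the setup sketch further down).
	fmt.Println(res)
}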

dashpole added the bug label on Jan 12, 2021

punya commented Jan 28, 2021

Is nodeName the same between the two local instances of your server?


vsekhar commented Jan 28, 2021

It is different. Each instance randomly generates a name suffix.

dashpole self-assigned this on Aug 30, 2022
dashpole commented

Sorry for the slow response. Which monitored resource is your metric being written to?


dashpole commented Feb 6, 2023

I'm going to close this for now, but feel free to reopen if you are still having problems. For others who find this, I would recommend using the GCP resource detector: https://github.com/open-telemetry/opentelemetry-go-contrib/tree/main/detectors/gcp. Also, make sure the export interval of your periodic reader is at least 10 seconds.
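
A minimal sketch of that setup, assuming the current otel-go SDK, the contrib GCP detector, and the Cloud Monitoring metric exporter from GoogleCloudPlatform/opentelemetry-operations-go; the 60-second interval is just an example above the 10-second floor:

package main

import (
	"context"
	"log"
	"time"

	mexporter "github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric"
	"go.opentelemetry.io/contrib/detectors/gcp"
	"go.opentelemetry.io/otel"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	"go.opentelemetry.io/otel/sdk/resource"
)

func main() {
	ctx := context.Background()

	// The GCP detector fills in monitored-resource attributes
	// (project, zone, instance id, ...) when running on GCE, GKE, etc.
	res, err := resource.New(ctx, resource.WithDetectors(gcp.NewDetector()))
	if err != nil {
		log.Fatal(err)
	}

	exporter, err := mexporter.New()
	if err != nil {
		log.Fatal(err)
	}

	// Keep the periodic reader's export interval at or above 10 seconds,
	// per the recommendation above; 60 seconds is used here as an example.
	provider := sdkmetric.NewMeterProvider(
		sdkmetric.WithResource(res),
		sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exporter,
			sdkmetric.WithInterval(60*time.Second))),
	)
	defer func() { _ = provider.Shutdown(ctx) }()
	otel.SetMeterProvider(provider)

	// ... create instruments via otel.Meter("peer_service") and run the server.
}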

dashpole closed this as completed on Feb 6, 2023

3 participants