
Add Resiliency Features to the Splunk HEC Exporter #23821

Closed

cparkins opened this issue Jun 28, 2023 · 9 comments
Labels
enhancement, exporter/splunkhec

Comments

@cparkins
Contributor

Component(s)

exporter/splunkhec

Is your feature request related to a problem? Please describe.

When using the Splunk HEC Exporter we have run into situations where the Splunk endpoint is unavailable for extended periods of time even though we have an alternative Splunk endpoint that could receive the traffic. It would be nice to be able to send to the failover endpoint in the event of a failure. In addition, the ability to turn off an endpoint that is considered faulty, following a circuit breaker pattern, would help reduce the traffic attempting to hit the already-failing endpoints.

Describe the solution you'd like

I'd like to have an alternative Splunk server to send data to when the primary endpoint is unreachable, with the ability to bypass the primary endpoint entirely if it keeps failing for an extended period of time.

Describe alternatives you've considered

The other option is to use the retry feature, but that will still send traffic to endpoints that may already be having issues.
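For reference, a minimal sketch of what the retry feature looks like on this exporter (the token and endpoint are placeholders, and exact field support may vary by collector version):

```yaml
exporters:
  splunk_hec:
    # Placeholder token and endpoint for illustration only.
    token: "00000000-0000-0000-0000-000000000000"
    endpoint: "https://splunk-primary.example.com:8088/services/collector"
    retry_on_failure:
      enabled: true
      initial_interval: 5s    # backoff after the first failure
      max_interval: 30s       # cap on the backoff between attempts
      max_elapsed_time: 300s  # drop the batch after 5 minutes of retries
    sending_queue:
      enabled: true
      queue_size: 5000        # batches buffered while retrying
```

Every retry here still targets the same endpoint, which is exactly the limitation described above.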

Additional context

No response

@cparkins added the enhancement and needs triage labels Jun 28, 2023
@github-actions
Contributor

Pinging code owners:

  • exporter/splunkhec: @atoulme @dmitryax

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@atoulme
Contributor

atoulme commented Jun 29, 2023

> It would be nice to be able to send to the failover endpoint in the event of a failure.

That's not a splunkhecexporter-specific feature, but probably something to try to implement at the collector level.

> In addition, the ability to turn off an endpoint that is considered faulty, following a circuit breaker pattern, would help reduce the traffic attempting to hit the already-failing endpoints.

That's not specific to this exporter.

> I'd like to have an alternative Splunk server to send data to when the primary endpoint is unreachable, with the ability to bypass the primary endpoint entirely if it keeps failing for an extended period of time.

Typically this is best handled with a load balancer. Maybe an nginx load balancer or similar can help here.
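For illustration, with that approach the collector config stays simple and the failover logic lives in the balancer; `splunk-hec-lb.example.com` is a hypothetical VIP fronting both HEC endpoints:

```yaml
exporters:
  splunk_hec:
    token: "00000000-0000-0000-0000-000000000000"  # placeholder
    # Hypothetical load-balancer VIP that health-checks the primary HEC
    # endpoint and switches to the secondary outside the collector.
    endpoint: "https://splunk-hec-lb.example.com:8088/services/collector"
```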

@cparkins
Contributor Author

cparkins commented Jun 29, 2023

> It would be nice to be able to send to the failover endpoint in the event of a failure.
> That's not a splunkhecexporter-specific feature, but probably something to try to implement at the collector level.

I tried this route initially and ran into issues; I believe these issues were related to the fact that the Splunk HEC Exporter uses a buffer to send the requests.

> In addition, the ability to turn off an endpoint that is considered faulty, following a circuit breaker pattern, would help reduce the traffic attempting to hit the already-failing endpoints.
> That's not specific to this exporter.

I agree, this is not specific to this exporter, but since I want to control both sets of endpoints it makes sense to implement it this way for now.

> I'd like to have an alternative Splunk server to send data to when the primary endpoint is unreachable, with the ability to bypass the primary endpoint entirely if it keeps failing for an extended period of time.
> Typically this is best handled with a load balancer. Maybe an nginx load balancer or similar can help here.

I'll look into this and see if it can help resolve our issues or not.

@greatestusername
Contributor

It would be pretty cool to have a primary/secondary exporter situation as part of the collector as a larger concept.

E.g., my normal external monitoring tool is down; for mission-critical metrics, export them through another exporter to an OSS metrics instance inside our network (or even S3, GCS, etc.) so we at least have some visibility during the downtime.

The idea is great! I'd imagine it would require some system for exporters to notify that they are seeing "failures" and also a definition of "failure" (questions such as "500s only? Probably want it on 400s also... What about 300s? Should we be able to set a timeout?")

@cparkins
Contributor Author

> It would be pretty cool to have a primary/secondary exporter situation as part of the collector as a larger concept.
>
> E.g., my normal external monitoring tool is down; for mission-critical metrics, export them through another exporter to an OSS metrics instance inside our network (or even S3, GCS, etc.) so we at least have some visibility during the downtime.
>
> The idea is great! I'd imagine it would require some system for exporters to notify that they are seeing "failures" and also a definition of "failure" (questions such as "500s only? Probably want it on 400s also... What about 300s? Should we be able to set a timeout?")

This is not far from why we developed this solution. Essentially, we have a primary Splunk cluster in one cloud region and send most of our data to it, but that cluster has gone down for various reasons (high traffic, replication issues, queue backup). In those scenarios we want to be able to fail over to our secondary regional cluster and send data there; during these outages we essentially have a blind spot in monitoring because no data is making it to Splunk. There are probably other ways to solve this, but this approach solves the problem with one piece of infrastructure and gives us buffering and other features that we currently don't have everywhere.

@github-actions
Contributor

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

  • exporter/splunkhec: @atoulme @dmitryax
  • needs: Github issue template generation code needs this to generate the corresponding labels.

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@crobert-1
Member

It looks like this feature request may be implemented by #20766. Is there any value in keeping this issue open or can we close it?
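For reference, #20766 tracks the failover connector, which routes data to a prioritized list of pipelines and falls back down the list on failure. A minimal sketch of wiring it in front of two Splunk HEC exporters; the tokens, endpoints, and pipeline names are hypothetical, and the connector's field names (e.g. `priority_levels`, `retry_interval`) follow its README at the time and may differ in current releases:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  splunk_hec/primary:
    token: "00000000-0000-0000-0000-000000000000"  # placeholder
    endpoint: "https://splunk-primary.example.com:8088/services/collector"
  splunk_hec/secondary:
    token: "00000000-0000-0000-0000-000000000000"  # placeholder
    endpoint: "https://splunk-secondary.example.com:8088/services/collector"

connectors:
  failover:
    priority_levels:
      - [logs/primary]    # highest priority: regional primary cluster
      - [logs/secondary]  # used while the primary is unhealthy
    retry_interval: 10m   # how often to retry higher-priority levels

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [failover]
    logs/primary:
      receivers: [failover]
      exporters: [splunk_hec/primary]
    logs/secondary:
      receivers: [failover]
      exporters: [splunk_hec/secondary]
```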

@crobert-1 removed the needs triage label Oct 9, 2023
@atoulme
Contributor

atoulme commented Oct 9, 2023

I personally would prefer we close this issue in favor of #20766.

github-actions bot removed the Stale label Oct 10, 2023
@crobert-1
Member

I'm going to close, but feel free to let us know if there's any feature specific to the Splunk HEC exporter that the failover connector can't handle. We can discuss specifics if necessary.

@crobert-1 closed this as not planned Oct 10, 2023