
[exporterhelper] Batch sender produces inconsistent and suboptimal batch sizes #9952

Open
carsonip opened this issue Apr 12, 2024 · 2 comments · May be fixed by #9891
Labels
area:exporter, bug

Comments

@carsonip
Contributor

Describe the bug

From #8122 (comment)

In my experiments, the batch sender produces inconsistent batch sizes that can be smaller than desired due to goroutine scheduling, even after #9761. The scenario I usually run into: with queue sender concurrency = batch sender concurrency limit = N, and all N consumers blocked on a send, the following happens when that send eventually returns. activeRequests is first decremented to N-1, then a new consumer goroutine arrives and increments it back to N. Seeing that the concurrency limit has been reached, it sends its request off right away, so an undesirably small batch is exported without any batching.
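To make the interleaving concrete, here is a minimal, self-contained Go sketch that only models the scheduling described above. The names activeRequests and concurrencyLimit mirror the fields mentioned in this report, but everything else (the single pending counter, replaying the interleaving sequentially) is a hypothetical simplification, not the actual exporterhelper code:

package main

import (
	"fmt"
	"sync/atomic"
)

const concurrencyLimit = 100

// activeRequests mirrors the counter mentioned in the report; the queue,
// timers, and the actual export are not modeled.
var activeRequests atomic.Int64

// consume models one queue consumer handing a single item to the batch
// sender: it increments activeRequests and, if the concurrency limit has
// been reached, flushes whatever is currently pending.
func consume(pending *int) {
	*pending++
	if activeRequests.Add(1) >= concurrencyLimit {
		fmt.Printf("exported batch of size %d\n", *pending)
		*pending = 0
	}
	// In the real sender the goroutine would now block until the batch's
	// export completes; this sketch replays the interleaving sequentially.
}

func main() {
	pending := 0

	// 1) N consumers arrive while nothing is in flight. The Nth one hits
	//    the limit and flushes a full batch of size N = 100.
	for i := 0; i < concurrencyLimit; i++ {
		consume(&pending)
	}

	// 2) The export returns. One of the blocked consumers is scheduled and
	//    decrements the counter, leaving activeRequests = N-1 ...
	activeRequests.Add(-1)

	// 3) ... but before the remaining blocked consumers run, a fresh
	//    consumer is scheduled. It increments the counter back to N, sees
	//    the limit reached, and flushes a batch containing only its own item.
	consume(&pending) // prints "exported batch of size 1"
}

Running this prints a batch of size 100 followed by a batch of size 1, matching the behavior observed below.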

I tried to fix it by resetting bs.activeRequests to 0 next to close(b.done). While that fixes the issue for sendMergeBatch, it does not work for sendMergeSplitBatch, since that path may be exporting something outside of activeBatch. There might be a way around this for merge-split batching, but I'm not sure it is worth the complexity just to work around unfavorable goroutine scheduling.

Steps to reproduce

Use the batch sender with sending_queue num_consumers=100 and send events to it.

What did you expect to see?

The batch sender consistently produces batches of size 100.

What did you see instead?

The batch sender produces a first batch of size 100; after that, most batches are of size 1.

What version did you use?

v0.97.0

What config did you use?

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: localhost:4317
      http:
        endpoint: localhost:4318

exporters:
  elasticsearch:
    endpoints: [ "http://localhost:9200" ]
    logs_index: foo
    user: ****
    password: ****
    sending_queue:
      enabled: true
      storage: file_storage/elasticsearchexporter
      num_consumers: 100
      queue_size: 10000000
    retry:
      enabled: true
      max_requests: 10000

extensions:
  file_storage/elasticsearchexporter:
    directory: /tmp/otelcol/file_storage/elasticsearchexporter

service:
  extensions: [file_storage/elasticsearchexporter]
  pipelines:
    logs:
      receivers: [otlp]
      processors: []
      exporters: [elasticsearch]

Environment

Linux Mint 21.3

Additional context

@carsonip added the bug label Apr 12, 2024
@TylerHelmuth
Member

/cc @dmitryax

@carsonip
Contributor Author

@dmitryax question: what's the reason behind the concurrency check in the first place? If it is just to avoid meaningless waits when all goroutines are blocked on the same batch, then #9891 should fix it. If it is to avoid waits when goroutines are blocked on different batches, I believe the check is inherently vulnerable to the issue described in this report and may need a different solution.
