
failed to flush the buffer: 413 #990

Open
discostur opened this issue Oct 4, 2022 · 1 comment

@discostur

Problem

From time to time, fluentd fails to flush its buffer to Elasticsearch. I have to delete all buffer files and restart fluentd, and then it works again. I cannot see any problems on the Elasticsearch side ...
I also tried moving some buffer files (the oldest ones) out of the buffer directory, but that doesn't help; fluentd tries to process the newer ones and still fails:

2022-10-04 13:47:28 +0000 [warn]: #0 [elasticsearch] failed to flush the buffer. retry_times=0 next_retry_time=2022-10-04 13:48:29 +0000 chunk="5e92d0fc9643daab2b109dba76e018ad" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"192.168.3.57\", :port=>9200, :scheme=>\"http\", :user=>\"fluentd-dev\", :password=>\"obfuscated\"}, {:host=>\"192.168.3.58\", :port=>9200, :scheme=>\"http\", :user=>\"fluentd-dev\", :password=>\"obfuscated\"}, {:host=>\"192.168.3.59\", :port=>9200, :scheme=>\"http\", :user=>\"fluentd-dev\", :password=>\"obfuscated\"}): [413] "
  2022-10-04 13:47:28 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.7.0/gems/fluent-plugin-elasticsearch-5.1.5/lib/fluent/plugin/out_elasticsearch.rb:1138:in `rescue in send_bulk'
  2022-10-04 13:47:28 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.7.0/gems/fluent-plugin-elasticsearch-5.1.5/lib/fluent/plugin/out_elasticsearch.rb:1100:in `send_bulk'
  2022-10-04 13:47:28 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.7.0/gems/fluent-plugin-elasticsearch-5.1.5/lib/fluent/plugin/out_elasticsearch.rb:878:in `block in write'
  2022-10-04 13:47:28 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.7.0/gems/fluent-plugin-elasticsearch-5.1.5/lib/fluent/plugin/out_elasticsearch.rb:877:in `each'
  2022-10-04 13:47:28 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.7.0/gems/fluent-plugin-elasticsearch-5.1.5/lib/fluent/plugin/out_elasticsearch.rb:877:in `write'
  2022-10-04 13:47:28 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.7.0/gems/fluentd-1.14.6/lib/fluent/plugin/output.rb:1179:in `try_flush'
  2022-10-04 13:47:28 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.7.0/gems/fluentd-1.14.6/lib/fluent/plugin/output.rb:1500:in `flush_thread_run'
  2022-10-04 13:47:28 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.7.0/gems/fluentd-1.14.6/lib/fluent/plugin/output.rb:499:in `block (2 levels) in start'
  2022-10-04 13:47:28 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.7.0/gems/fluentd-1.14.6/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
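
For context (not stated in the original report): [413] is HTTP "Payload Too Large". Elasticsearch returns it when a single bulk request body exceeds http.max_content_length (100 MB by default), and a reverse proxy in front of the cluster can return it at a much lower limit (nginx's default client_max_body_size is 1 MB). Recent fluent-plugin-elasticsearch versions split bulk requests at bulk_message_request_threshold (documented default 20 MB), so a 413 here usually points at a lower server-side limit, a proxy, or very large individual records. A minimal sketch of pinning that threshold explicitly; the 10 MB value is an assumption, not taken from this setup:

    <match backend.**>
      @type elasticsearch
      hosts 192.168.3.57:9200,192.168.3.58:9200,192.168.3.59:9200
      # keep each bulk request well below the server-side limit;
      # the plugin's documented default for this parameter is 20 MB
      bulk_message_request_threshold 10MB
    </match>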

Steps to replicate

    <match backend.**>
      @id elasticsearch
      @type elasticsearch
      include_tag_key true
      hosts 192.168.3.57:9200,192.168.3.58:9200,192.168.3.59:9200
      user %{fluentd-dev}
      password %{XXX}
      logstash_format true
      logstash_prefix dev
      logstash_dateformat %Y.%m. # defaults to "%Y.%m.%d"
      request_timeout 60s
      <buffer>
        @type file
        path /var/log/fluentd-buffers/system.buffer
        flush_mode interval
        flush_interval 5s
        flush_at_shutdown true
        retry_wait 60s
        retry_forever
      </buffer>
    </match>
    <match **>
      @type stdout
    </match>
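
A related knob on the buffer side, shown only as a hedged sketch: the buffer above sets no chunk_limit_size, so the file buffer falls back to fluentd's default of 256 MB per chunk, well above Elasticsearch's default 100 MB request limit. Capping the chunk size is a common way to keep any single flush small; the 8 MB value is an assumption, not something recommended in this thread:

      <buffer>
        @type file
        path /var/log/fluentd-buffers/system.buffer
        # cap each chunk so one flush stays well under the
        # Elasticsearch http.max_content_length limit (100 MB default)
        chunk_limit_size 8MB
        flush_mode interval
        flush_interval 5s
        flush_at_shutdown true
        retry_wait 60s
        retry_forever
      </buffer>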

Expected Behavior or What you need to ask

Logs should be pushed correctly to Elasticsearch.

Using Fluentd and ES plugin versions

  • OS version: CentOS 7
  • Bare Metal or within Docker or Kubernetes: Docker + Kubernetes
  • ES version: 7.17.4

2022-10-04 13:46:55 +0000 [info]: gem 'fluent-plugin-concat' version '2.5.0'
2022-10-04 13:46:55 +0000 [info]: gem 'fluent-plugin-dedot_filter' version '1.0.0'
2022-10-04 13:46:55 +0000 [info]: gem 'fluent-plugin-detect-exceptions' version '0.0.14'
2022-10-04 13:46:55 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '5.1.5'
2022-10-04 13:46:55 +0000 [info]: gem 'fluent-plugin-grok-parser' version '2.6.2'
2022-10-04 13:46:55 +0000 [info]: gem 'fluent-plugin-json-in-json-2' version '1.0.2'
2022-10-04 13:46:55 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.9.5'
2022-10-04 13:46:55 +0000 [info]: gem 'fluent-plugin-multi-format-parser' version '1.0.0'
2022-10-04 13:46:55 +0000 [info]: gem 'fluent-plugin-parser-cri' version '0.1.1'
2022-10-04 13:46:55 +0000 [info]: gem 'fluent-plugin-prometheus' version '2.0.2'
2022-10-04 13:46:55 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2022-10-04 13:46:55 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.4.0'
2022-10-04 13:46:55 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.5'
2022-10-04 13:46:55 +0000 [info]: gem 'fluentd' version '1.14.6'
@discostur

  • if I delete all buffer + meta files and restart the fluentd container, logs are forwarded correctly
  • if I copy one old buffer + meta file back into the buffer directory, the same error as before is printed and logging is broken; new logs are written to new buffer files
  • I tried to spot a difference between new and old buffer files, but didn't see any major differences that would explain why some files cannot be forwarded while others can
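
Those observations are consistent with a single oversized chunk being retried forever: with retry_forever set, the failing chunk keeps being resent, keeps getting the same 413, and flushing never recovers on its own. A hedged sketch of bounding the retries so one rejected chunk cannot block the pipeline, routing it to a secondary output instead of losing it; the retry count and directory path are assumptions, not values from this thread:

    <match backend.**>
      @id elasticsearch
      @type elasticsearch
      hosts 192.168.3.57:9200,192.168.3.58:9200,192.168.3.59:9200
      <buffer>
        @type file
        path /var/log/fluentd-buffers/system.buffer
        flush_mode interval
        flush_interval 5s
        flush_at_shutdown true
        retry_wait 60s
        # bound retries instead of retry_forever, so a chunk that can
        # never be accepted is eventually handed to <secondary>
        retry_max_times 10
      </buffer>
      <secondary>
        # chunks that exhaust their retries are written to local files
        # for inspection or re-ingestion (path is hypothetical)
        @type secondary_file
        directory /var/log/fluentd-buffers/failed
      </secondary>
    </match>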
