
Allowing routes to specify the idle socket timeout #72362

Closed
wants to merge 4 commits

Conversation

kobelb
Contributor

@kobelb kobelb commented Jul 17, 2020

This will allow Fleet to configure its routes to have an idle socket timeout that is longer than the global HTTP server's idle socket timeout.
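To make the shape of this concrete, here's a minimal sketch of a route opting into a longer idle socket timeout; the `options.timeout.idleSocket` field name, the path, and the handler body are illustrative only, not the final API:

```ts
// Hypothetical sketch: a Fleet route opting into a longer idle socket timeout
// than the server-wide default (the option name is illustrative).
import { schema } from '@kbn/config-schema';
import type { IRouter } from 'src/core/server';

export function registerAgentCheckinRoute(router: IRouter) {
  router.post(
    {
      path: '/api/fleet/agents/{agentId}/checkin',
      validate: {
        params: schema.object({ agentId: schema.string() }),
      },
      options: {
        // Keep the socket open long enough for long-polling agents, overriding
        // the global server.socketTimeout for this route only.
        timeout: { idleSocket: 2 * 60 * 1000 },
      },
    },
    async (context, request, response) => {
      // Long-poll until there is an action for the agent or the timeout nears.
      return response.ok({ body: { actions: [] } });
    }
  );
}
```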

Testing this became quite the challenge...

  • Fake timers don't work with net.Socket#setTimeout; using them has no effect on the socket timeout behavior.
  • Using real timers within unit tests is a bad idea, because they're a frequent source of flakiness. This is further complicated by what I consider to be a bug in hapi, where the "socket timeout" must be larger than the "payload timeout", and the payload timeout's default is 10 seconds. That would require at least a 10-second period of idleness in our tests, which I'd prefer to avoid.
  • net.Socket#timeout does exist, and it's consumed within Node.js itself; however, it's not documented. I've opened doc: add net.Socket#timeout nodejs/node#34543 to document this property, so we can take advantage of it in our tests and not have to rely on real timers. A sketch of what such a test could look like follows this list.
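Here's a rough sketch of the kind of test this unlocks, assuming net.Socket#timeout mirrors the value passed to setTimeout as described in nodejs/node#34543; the cast is only there because the property wasn't typed at the time:

```ts
// Sketch: assert the configured idle socket timeout by reading the
// (then-undocumented) net.Socket#timeout property instead of waiting in real time.
import { Socket } from 'net';

test('net.Socket#timeout reflects the configured idle timeout', () => {
  const socket = new Socket();
  socket.setTimeout(11000);
  // `timeout` mirrors the last value passed to setTimeout(); cast because the
  // property was not part of @types/node when this PR was written.
  expect((socket as unknown as { timeout: number }).timeout).toBe(11000);
  socket.destroy();
});
```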

This PR only allows the route definition to override the "idle socket timeout"; it does not allow the route definition to override the "payload timeout" or the "response timeout", as we haven't had any requests for that behavior. This interacts awkwardly with the bug in hapi, because the idle socket timeout must be larger than the "payload timeout", which defaults to 10 seconds. Until that bug is resolved, the "idle socket timeout" must be larger than 10 seconds, or hapi will throw an error when the route is defined. I'd prefer not to add our own validation for the idle socket timeout, since it would have to be removed once the hapi bug is fixed; however, without it we leak hapi details to consumers, which is somewhat confusing.
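For reference, a standalone hapi sketch (not Kibana code) of the constraint: with the default 10-second payload timeout, a smaller socket timeout makes hapi throw when the route is defined.

```ts
// Standalone hapi sketch of the constraint described above: the default
// payload timeout is 10s, so a 5s socket timeout causes hapi to throw
// at route-definition time.
import Hapi from '@hapi/hapi';

const server = Hapi.server({ port: 0 });

server.route({
  method: 'POST',
  path: '/checkin',
  options: {
    // timeout.socket must be larger than payload.timeout (default 10000ms);
    // 5000ms violates that rule, so this route definition throws.
    timeout: { socket: 5000 },
  },
  handler: () => 'ok',
});
```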

@roncohen
Contributor

Hey @kobelb, any chance to move this forward? Let us know how we can help.

@kobelb
Contributor Author

kobelb commented Jul 23, 2020

Hey @kobelb, any chance to move this forward? Let us know how we can help.

I was waiting to catch up with @joshdover, who returns from PTO shortly, to discuss testing strategies. I'm hopeful we'll be able to move this forward soon.

@kobelb
Contributor Author

kobelb commented Jul 29, 2020

#73103 merged prior to this PR, causing conflicts... 👀

@kobelb
Contributor Author

kobelb commented Jul 29, 2020

@restrry The work done in https://github.com/elastic/kibana/pull/73103/files conflicts with the approach in this PR. I saw some comments where you hesitated to allow route definitions to specify the "payload timeout", the "response timeout", and the "idle socket timeout". However, Fleet would now like to specify the "idle socket timeout", and this will conflict with the workaround implemented here to deal with what I consider a bug in hapi.

If the user specifies either the "payload timeout" or the "response timeout", we can increment the "idle socket timeout" by a millisecond to work around this. However, if they specify the "idle socket timeout" in addition to those settings, we could end up violating Hapi's rules and leaking the fact that we're using Hapi to consumers.
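Roughly, that workaround looks like this; the helper and option names are hypothetical:

```ts
// Hypothetical helper sketching the workaround above: bump the idle socket
// timeout one millisecond past the payload timeout when the route didn't set
// one explicitly, so hapi's "socket timeout > payload timeout" rule holds.
interface RouteTimeouts {
  payload?: number;
  idleSocket?: number;
}

function resolveIdleSocketTimeout(
  { payload, idleSocket }: RouteTimeouts,
  serverIdleSocketTimeout: number
): number | undefined {
  if (idleSocket !== undefined) {
    // The route opted in explicitly; hapi may still reject it if it isn't
    // larger than the payload timeout, which is exactly the conflict here.
    return idleSocket;
  }
  if (payload !== undefined && payload >= serverIdleSocketTimeout) {
    return payload + 1;
  }
  return undefined; // fall back to the server-wide idle socket timeout
}
```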

The only other option I've thought of is to leave it up to the consumer to ensure these timeouts are configured without violating Hapi's rules, but this seems even worse.

@mshustov
Contributor

mshustov commented Jul 30, 2020

However, Fleet would now like to specify the "idle socket timeout". This will cause a conflict with the workaround implemented here to deal with what I consider a bug in hapi.

My point was that we're okay with supporting this when necessary, so it's not a problem to extend the route config definition.

However, if they specify the "idle socket timeout" in addition to these settings, we're going to potentially violate Hapi's rules and leak the fact that we're using Hapi to consumers.

We can add this validation at the platform level as a temporary measure to buy some time to contribute a fix to Hapi. Another option is to call request.socket.setTimeout manually at the platform level to avoid relying on Hapi's logic.
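A sketch of the second option, assuming a hapi lifecycle extension; storing the per-route value in route.options.app is purely illustrative:

```ts
// Sketch: set the idle socket timeout directly on the Node.js socket from a
// hapi extension, bypassing hapi's own timeout validation. Looking the value
// up from route.options.app is illustrative, not the real Kibana wiring.
import type { Server } from '@hapi/hapi';

export function registerPerRouteIdleSocketTimeout(server: Server) {
  server.ext('onPreAuth', (request, h) => {
    const idleSocket = (request.route.settings.app as { idleSocketTimeout?: number } | undefined)
      ?.idleSocketTimeout;
    if (typeof idleSocket === 'number') {
      // request.raw.req is the Node.js IncomingMessage; its socket accepts any
      // timeout, regardless of hapi's route-config constraints.
      request.raw.req.socket.setTimeout(idleSocket);
    }
    return h.continue;
  });
}
```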

Using real timers within unit tests is a bad idea, because they're a frequent source of flakiness. This is further complicated by what I consider to be a bug in hapi where the "socket timeout" must be larger than the "payload timeout", and the payload timeout's default is 10 seconds.

Introducing the payload timeout option in #73103 allows us to reduce this time. We struggle with the Hapi logic, but:

  • we can get rid of this limitation when the Hapi bug is fixed
  • it's okay for the platform tests to be coupled to Hapi. Fleet might not need to test this logic if it's covered by the platform.

This will allow Fleet to configure its routes to have an idle socket timeout that is longer than the global HTTP server's idle socket timeout.

Is there an issue describing the use case?

@kibanamachine
Contributor

💔 Build Failed

Failed CI Steps


Test Failures

Kibana Pipeline / kibana-oss-agent / Chrome UI Functional Tests.test/functional/apps/context/_date_nanos·js.context app context view for date_nanos displays predessors - anchor - successors in right order

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has failed 16 times on tracked branches: https://github.com/elastic/kibana/issues/58815

[00:00:00]       │
[00:00:12]         └-: context app
[00:00:12]           └-> "before all" hook
[00:00:12]           └-> "before all" hook
[00:00:12]             │ info [logstash_functional] Loading "mappings.json"
[00:00:12]             │ info [logstash_functional] Loading "data.json.gz"
[00:00:12]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [logstash-2015.09.22] creating index, cause [api], templates [], shards [1]/[0]
[00:00:13]             │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-2015.09.22][0]]])." previous.health="YELLOW" reason="shards started [[logstash-2015.09.22][0]]"
[00:00:13]             │ info [logstash_functional] Created index "logstash-2015.09.22"
[00:00:13]             │ debg [logstash_functional] "logstash-2015.09.22" settings {"index":{"analysis":{"analyzer":{"url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:00:13]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [logstash-2015.09.20] creating index, cause [api], templates [], shards [1]/[0]
[00:00:13]             │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-2015.09.20][0]]])." previous.health="YELLOW" reason="shards started [[logstash-2015.09.20][0]]"
[00:00:13]             │ info [logstash_functional] Created index "logstash-2015.09.20"
[00:00:13]             │ debg [logstash_functional] "logstash-2015.09.20" settings {"index":{"analysis":{"analyzer":{"url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:00:13]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [logstash-2015.09.21] creating index, cause [api], templates [], shards [1]/[0]
[00:00:13]             │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-2015.09.21][0]]])." previous.health="YELLOW" reason="shards started [[logstash-2015.09.21][0]]"
[00:00:13]             │ info [logstash_functional] Created index "logstash-2015.09.21"
[00:00:13]             │ debg [logstash_functional] "logstash-2015.09.21" settings {"index":{"analysis":{"analyzer":{"url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:00:22]             │ info progress: 7520
[00:00:28]             │ info [logstash_functional] Indexed 4633 docs into "logstash-2015.09.22"
[00:00:28]             │ info [logstash_functional] Indexed 4757 docs into "logstash-2015.09.20"
[00:00:28]             │ info [logstash_functional] Indexed 4614 docs into "logstash-2015.09.21"
[00:00:28]             │ info [visualize] Loading "mappings.json"
[00:00:28]             │ info [visualize] Loading "data.json"
[00:00:28]             │ info [o.e.c.m.MetadataDeleteIndexService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.kibana_1/CDx02xpJQuWDkMHhSniTYQ] deleting index
[00:00:28]             │ info [visualize] Deleted existing index [".kibana_1"]
[00:00:28]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.kibana] creating index, cause [api], templates [], shards [1]/[1]
[00:00:28]             │ info [visualize] Created index ".kibana"
[00:00:28]             │ debg [visualize] ".kibana" settings {"index":{"number_of_replicas":"1","number_of_shards":"1"}}
[00:00:28]             │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.kibana/I_B2Ovu0T8yeR0q7NvW6Cw] update_mapping [_doc]
[00:00:28]             │ info [visualize] Indexed 12 docs into ".kibana"
[00:00:29]             │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.kibana/I_B2Ovu0T8yeR0q7NvW6Cw] update_mapping [_doc]
[00:00:29]             │ debg Migrating saved objects
[00:00:29]             │ proc [kibana]   log   [07:33:59.285] [info][savedobjects-service] Creating index .kibana_2.
[00:00:29]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.kibana_2] creating index, cause [api], templates [], shards [1]/[1]
[00:00:29]             │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] updating number_of_replicas to [0] for indices [.kibana_2]
[00:00:29]             │ proc [kibana]   log   [07:33:59.384] [info][savedobjects-service] Reindexing .kibana to .kibana_1
[00:00:29]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.kibana_1] creating index, cause [api], templates [], shards [1]/[1]
[00:00:29]             │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] updating number_of_replicas to [0] for indices [.kibana_1]
[00:00:29]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.tasks] creating index, cause [auto(task api)], templates [], shards [1]/[1]
[00:00:29]             │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] updating number_of_replicas to [0] for indices [.tasks]
[00:00:29]             │ info [o.e.t.LoggingTaskListener] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] 851 finished with response BulkByScrollResponse[took=97.9ms,timed_out=false,sliceId=null,updated=0,created=12,deleted=0,batches=1,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=[],search_failures=[]]
[00:00:29]             │ info [o.e.c.m.MetadataDeleteIndexService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.kibana/I_B2Ovu0T8yeR0q7NvW6Cw] deleting index
[00:00:29]             │ proc [kibana]   log   [07:33:59.854] [info][savedobjects-service] Migrating .kibana_1 saved objects to .kibana_2
[00:00:29]             │ proc [kibana]   log   [07:33:59.881] [error][savedobjects-service] Error: Unable to migrate the corrupt Saved Object document index-pattern:test_index*. To prevent Kibana from performing a migration on every restart, please delete or fix this document by ensuring that the namespace and type in the document's id matches the values in the namespace and type fields.
[00:00:29]             │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.kibana_2/GTMHRa_AR--UOUx-7dwHKQ] update_mapping [_doc]
[00:00:29]             │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.kibana_2/GTMHRa_AR--UOUx-7dwHKQ] update_mapping [_doc]
[00:00:29]             │ proc [kibana]   log   [07:34:00.015] [info][savedobjects-service] Pointing alias .kibana to .kibana_2.
[00:00:29]             │ proc [kibana]   log   [07:34:00.094] [info][savedobjects-service] Finished in 813ms.
[00:00:29]             │ debg applying update to kibana config: {"accessibility:disableAnimations":true,"dateFormat:tz":"UTC"}
[00:00:29]             │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.kibana_2/GTMHRa_AR--UOUx-7dwHKQ] update_mapping [_doc]
[00:00:31]             │ debg replacing kibana config doc: {"defaultIndex":"logstash-*"}
[00:00:32]             │ debg navigating to discover url: http://localhost:6121/app/discover#/
[00:00:32]             │ debg navigate to: http://localhost:6121/app/discover#/
[00:00:32]             │ debg browser[INFO] http://localhost:6121/app/discover?_t=1596094442441#/ 341 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:00:32]             │
[00:00:32]             │ debg browser[INFO] http://localhost:6121/bundles/app/core/bootstrap.js 42:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:00:32]             │ debg ... sleep(700) start
[00:00:33]             │ debg ... sleep(700) end
[00:00:33]             │ debg returned from get, calling refresh
[00:00:33]             │ debg browser[INFO] http://localhost:6121/app/discover?_t=1596094442441#/ 341 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:00:33]             │
[00:00:33]             │ debg browser[INFO] http://localhost:6121/bundles/app/core/bootstrap.js 42:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:00:33]             │ debg currentUrl = http://localhost:6121/app/discover#/
[00:00:33]             │          appUrl = http://localhost:6121/app/discover#/
[00:00:33]             │ debg TestSubjects.find(kibanaChrome)
[00:00:33]             │ debg Find.findByCssSelector('[data-test-subj="kibanaChrome"]') with timeout=60000
[00:00:34]             │ debg browser[INFO] http://localhost:6121/34906/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js 452:106112 "INFO: 2020-07-30T07:34:04Z
[00:00:34]             │        Adding connection to http://localhost:6121/elasticsearch
[00:00:34]             │
[00:00:34]             │      "
[00:00:34]             │ debg ... sleep(501) start
[00:00:35]             │ debg ... sleep(501) end
[00:00:35]             │ debg in navigateTo url = http://localhost:6121/app/discover#/
[00:00:35]             │ debg TestSubjects.exists(statusPageContainer)
[00:00:35]             │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="statusPageContainer"]') with timeout=2500
[00:00:37]             │ debg --- retry.tryForTime error: [data-test-subj="statusPageContainer"] is not displayed
[00:02:03]           └-: context view for date_nanos
[00:02:03]             └-> "before all" hook
[00:02:03]             └-> "before all" hook
[00:02:03]               │ info [date_nanos] Loading "mappings.json"
[00:02:03]               │ info [date_nanos] Loading "data.json"
[00:02:03]               │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [date-nanos] creating index, cause [api], templates [], shards [1]/[0]
[00:02:03]               │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[date-nanos][0]]])." previous.health="YELLOW" reason="shards started [[date-nanos][0]]"
[00:02:03]               │ info [date_nanos] Created index "date-nanos"
[00:02:03]               │ debg [date_nanos] "date-nanos" settings {"index":{"number_of_replicas":"0","number_of_shards":"1"}}
[00:02:03]               │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [date-nanos/dFcqzMk4Qm-qSKCoc2ukHg] update_mapping [_doc]
[00:02:03]               │ info [date_nanos] Indexed 9 docs into "date-nanos"
[00:02:03]               │ info [date_nanos] Indexed 2 docs into ".kibana"
[00:02:03]               │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.kibana_2/GTMHRa_AR--UOUx-7dwHKQ] update_mapping [_doc]
[00:02:03]               │ debg Migrating saved objects
[00:02:03]               │ proc [kibana]   log   [07:35:33.621] [info][savedobjects-service] Creating index .kibana_3.
[00:02:03]               │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.kibana_3] creating index, cause [api], templates [], shards [1]/[1]
[00:02:03]               │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] updating number_of_replicas to [0] for indices [.kibana_3]
[00:02:03]               │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_3][0]]])." previous.health="YELLOW" reason="shards started [[.kibana_3][0]]"
[00:02:03]               │ proc [kibana]   log   [07:35:33.709] [info][savedobjects-service] Migrating .kibana_2 saved objects to .kibana_3
[00:02:03]               │ proc [kibana]   log   [07:35:33.719] [error][savedobjects-service] Error: Unable to migrate the corrupt Saved Object document index-pattern:test_index*. To prevent Kibana from performing a migration on every restart, please delete or fix this document by ensuring that the namespace and type in the document's id matches the values in the namespace and type fields.
[00:02:03]               │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.kibana_3/7ZOniGewS1u-M8G71aaOHQ] update_mapping [_doc]
[00:02:03]               │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.kibana_3/7ZOniGewS1u-M8G71aaOHQ] update_mapping [_doc]
[00:02:03]               │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.kibana_3/7ZOniGewS1u-M8G71aaOHQ] update_mapping [_doc]
[00:02:03]               │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093237568296121] [.kibana_3/7ZOniGewS1u-M8G71aaOHQ] update_mapping [_doc]
[00:02:03]               │ proc [kibana]   log   [07:35:33.897] [info][savedobjects-service] Pointing alias .kibana to .kibana_3.
[00:02:03]               │ proc [kibana]   log   [07:35:33.958] [info][savedobjects-service] Finished in 339ms.
[00:02:03]               │ debg replacing kibana config doc: {"defaultIndex":"date-nanos"}
[00:02:04]               │ debg applying update to kibana config: {"context:defaultSize":"1","context:step":"3"}
[00:02:05]             └-> displays predessors - anchor - successors in right order 
[00:02:05]               └-> "before each" hook: global before each
[00:02:05]               │ debg browser.get(http://localhost:6121/app/discover#/context/date-nanos/AU_x3-TaGFA8no6Qj999Z?_a=(columns:!('@message')))
[00:02:05]               │ debg TestSubjects.exists(globalLoadingIndicator-hidden)
[00:02:05]               │ debg Find.existsByCssSelector('[data-test-subj="globalLoadingIndicator-hidden"]') with timeout=100000
[00:02:05]               │ debg browser[INFO] http://localhost:6121/app/discover?_t=1596094535724#/context/date-nanos/AU_x3-TaGFA8no6Qj999Z?_a=(columns:!(%27@message%27)) 341 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:02:05]               │
[00:02:05]               │ debg browser[INFO] http://localhost:6121/bundles/app/core/bootstrap.js 42:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:02:07]               │ debg browser[INFO] http://localhost:6121/34906/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js 452:106112 "INFO: 2020-07-30T07:35:36Z
[00:02:07]               │        Adding connection to http://localhost:6121/elasticsearch
[00:02:07]               │
[00:02:07]               │      "
[00:02:07]               │ debg TestSubjects.find(successorsLoadMoreButton)
[00:02:07]               │ debg Find.findByCssSelector('[data-test-subj="successorsLoadMoreButton"]') with timeout=10000
[00:02:07]               │ debg TestSubjects.find(predecessorsLoadMoreButton)
[00:02:07]               │ debg Find.findByCssSelector('[data-test-subj="predecessorsLoadMoreButton"]') with timeout=10000
[00:02:07]               │ debg --- retry.try error: loading context rows
[00:02:07]               │ERROR browser[SEVERE] http://localhost:6121/internal/search/es - Failed to load resource: the server responded with a status of 400 (Bad Request)
[00:02:07]               │ERROR browser[SEVERE] http://localhost:6121/internal/search/es - Failed to load resource: the server responded with a status of 400 (Bad Request)
[00:02:08]               │ERROR browser[SEVERE] http://localhost:6121/34906/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js 413:78739 Error: Bad Request
[00:02:08]               │          at Fetch._callee3$ (http://localhost:6121/34906/bundles/core/core.entry.js:34:108535)
[00:02:08]               │          at l (http://localhost:6121/34906/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:368:155138)
[00:02:08]               │          at Generator._invoke (http://localhost:6121/34906/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:368:154891)
[00:02:08]               │          at Generator.forEach.e.<computed> [as next] (http://localhost:6121/34906/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:368:155495)
[00:02:08]               │          at fetch_asyncGeneratorStep (http://localhost:6121/34906/bundles/core/core.entry.js:34:101676)
[00:02:08]               │          at _next (http://localhost:6121/34906/bundles/core/core.entry.js:34:101992) "Possibly unhandled rejection: {\"request\":{},\"response\":{},\"body\":{\"statusCode\":400,\"error\":\"Bad Request\",\"message\":\"[illegal_argument_exception] date[+49714927-06-25T16:26:39.999Z] is after 2262-04-11T23:47:16.854775807 and cannot be stored in nanosecond resolution\",\"attributes\":{\"error\":{\"root_cause\":[{\"type\":\"illegal_argument_exception\",\"reason\":\"date[+49714927-06-25T16:26:39.999Z] is after 2262-04-11T23:47:16.854775807 and cannot be stored in nanosecond resolution\"}],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[{\"shard\":0,\"index\":\"date-nanos\",\"node\":\"G5yrxC_5S86gUB1C90AUhQ\",\"reason\":{\"type\":\"illegal_argument_exception\",\"reason\":\"date[+49714927-06-25T16:26:39.999Z] is after 2262-04-11T23:47:16.854775807 and cannot be stored in nanosecond resolution\"}}],\"caused_by\":{\"type\":\"illegal_argument_exception\",\"reason\":\"date[+49714927-06-25T16:26:39.999Z] is after 2262-04-11T23:47:16.854775807 and cannot be stored in nanosecond resolution\",\"caused_by\":{\"type\":\"illegal_argument_exception\",\"reason\":\"date[+49714927-06-25T16:26:39.999Z] is after 2262-04-11T23:47:16.854775807 and cannot be stored in nanosecond resolution\"}}}}},\"name\":\"Error\",\"req\":\"...\",\"res\":\"...\"}"
[00:02:08]               │ debg TestSubjects.find(successorsLoadMoreButton)
[00:02:08]               │ debg Find.findByCssSelector('[data-test-subj="successorsLoadMoreButton"]') with timeout=10000
[00:02:08]               │ debg TestSubjects.find(predecessorsLoadMoreButton)
[00:02:08]               │ debg Find.findByCssSelector('[data-test-subj="predecessorsLoadMoreButton"]') with timeout=10000
[00:02:08]               │ debg ... sleep(1000) start
[00:02:09]               │ debg ... sleep(1000) end
[00:02:09]               │ debg TestSubjects.find(docTable)
[00:02:09]               │ debg Find.findByCssSelector('[data-test-subj="docTable"]') with timeout=10000
[00:02:09]               │ info Taking screenshot "/dev/shm/workspace/kibana/test/functional/screenshots/failure/context app context view for date_nanos displays predessors - anchor - successors in right order .png"
[00:02:09]               │ info Current URL is: http://localhost:6121/app/discover#/context/date-nanos/AU_x3-TaGFA8no6Qj999Z?_a=(columns:!(%27@message%27))
[00:02:09]               │ info Saving page source to: /dev/shm/workspace/kibana/test/functional/failure_debug/html/context app context view for date_nanos displays predessors - anchor - successors in right order .html
[00:02:09]               └- ✖ fail: context app context view for date_nanos displays predessors - anchor - successors in right order 
[00:02:09]               │       Error: expected [ 'Sep 18, 2019 @ 06:50:12.999999999-3' ] to sort of equal [ 'Sep 18, 2019 @ 06:50:13.000000000-2',
[00:02:09]               │   'Sep 18, 2019 @ 06:50:12.999999999-3',
[00:02:09]               │   'Sep 19, 2015 @ 06:50:13.0001000011' ]
[00:02:09]               │       + expected - actual
[00:02:09]               │ 
[00:02:09]               │        [
[00:02:09]               │       +  "Sep 18, 2019 @ 06:50:13.000000000-2"
[00:02:09]               │          "Sep 18, 2019 @ 06:50:12.999999999-3"
[00:02:09]               │       +  "Sep 19, 2015 @ 06:50:13.0001000011"
[00:02:09]               │        ]
[00:02:09]               │       
[00:02:09]               │       at Assertion.assert (packages/kbn-expect/expect.js:100:11)
[00:02:09]               │       at Assertion.eql (packages/kbn-expect/expect.js:244:8)
[00:02:09]               │       at Context.<anonymous> (test/functional/apps/context/_date_nanos.js:57:33)
[00:02:09]               │       at process._tickCallback (internal/process/next_tick.js:68:7)
[00:02:09]               │ 
[00:02:09]               │ 

Stack Trace

{ Error: expected [ 'Sep 18, 2019 @ 06:50:12.999999999-3' ] to sort of equal [ 'Sep 18, 2019 @ 06:50:13.000000000-2',
  'Sep 18, 2019 @ 06:50:12.999999999-3',
  'Sep 19, 2015 @ 06:50:13.0001000011' ]
    at Assertion.assert (packages/kbn-expect/expect.js:100:11)
    at Assertion.eql (packages/kbn-expect/expect.js:244:8)
    at Context.<anonymous> (test/functional/apps/context/_date_nanos.js:57:33)
    at process._tickCallback (internal/process/next_tick.js:68:7)
  actual: '[\n  "Sep 18, 2019 @ 06:50:12.999999999-3"\n]',
  expected:
   '[\n  "Sep 18, 2019 @ 06:50:13.000000000-2"\n  "Sep 18, 2019 @ 06:50:12.999999999-3"\n  "Sep 19, 2015 @ 06:50:13.0001000011"\n]',
  showDiff: true }

Kibana Pipeline / kibana-xpack-agent / X-Pack API Integration Tests.x-pack/test/api_integration/apis/endpoint/resolver·ts.apis Endpoint plugin Resolver related alerts route endpoint events should return details for the root node

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has not failed recently on tracked branches

[00:00:00]       │
[00:00:00]         └-: apis
[00:00:00]           └-> "before all" hook
[00:13:24]           └-: Endpoint plugin
[00:13:24]             └-> "before all" hook
[00:13:24]             └-> "before all" hook
[00:13:31]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093207871198888] updated role [fleet_enroll]
[00:13:31]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093207871198888] updated user [fleet_enroll]
[00:13:33]             └-: Resolver
[00:13:33]               └-> "before all" hook
[00:13:33]               └-> "before all" hook
[00:13:33]                 │ info [endpoint/resolver/api_feature] Loading "mappings.json"
[00:13:33]                 │ info [endpoint/resolver/api_feature] Loading "data.json.gz"
[00:13:33]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093207871198888] [endgame-4.21.0-000001] creating index, cause [api], templates [], shards [5]/[0]
[00:13:34]                 │ info [endpoint/resolver/api_feature] Created index "endgame-4.21.0-000001"
[00:13:34]                 │ debg [endpoint/resolver/api_feature] "endgame-4.21.0-000001" settings {"index":{"lifecycle":{"name":"endgame_policy-4.21.0","rollover_alias":"endgame-4.21.0"},"mapping":{"ignore_malformed":"true","total_fields":{"limit":"10000"}},"number_of_replicas":"0","number_of_shards":"5","refresh_interval":"5s"}}
[00:13:34]                 │ info [endpoint/resolver/api_feature] Indexed 156 docs into "endgame-4.21.0-000001"
[00:13:34]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093207871198888] [.ds-logs-endpoint.events.process-default-000001] creating index, cause [initialize_data_stream], templates [logs-endpoint.events.process], shards [1]/[1]
[00:13:34]                 │ info [o.e.c.m.MetadataCreateDataStreamService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093207871198888] adding data stream [logs-endpoint.events.process-default]
[00:13:34]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093207871198888] [.ds-logs-endpoint.alerts-default-000001] creating index, cause [initialize_data_stream], templates [logs-endpoint.alerts], shards [1]/[1]
[00:13:34]                 │ info [o.e.c.m.MetadataCreateDataStreamService] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093207871198888] adding data stream [logs-endpoint.alerts-default]
[00:13:34]                 │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093207871198888] moving index [.ds-logs-endpoint.events.process-default-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [logs]
[00:13:34]                 │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093207871198888] moving index [.ds-logs-endpoint.alerts-default-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [logs]
[00:13:34]                 │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093207871198888] moving index [.ds-logs-endpoint.events.process-default-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] in policy [logs]
[00:13:34]               └-: related alerts route
[00:13:34]                 └-> "before all" hook
[00:13:34]                 └-: endpoint events
[00:13:34]                   └-> "before all" hook
[00:13:34]                   └-> should not find any alerts
[00:13:34]                     └-> "before each" hook: global before each
[00:13:34]                     │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xl-1596093207871198888] moving index [.ds-logs-endpoint.alerts-default-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] in policy [logs]
[00:13:34]                     └- ✓ pass  (19ms) "apis Endpoint plugin Resolver related alerts route endpoint events should not find any alerts"
[00:13:34]                   └-> should return details for the root node
[00:13:34]                     └-> "before each" hook: global before each
[00:13:34]                     └- ✖ fail: apis Endpoint plugin Resolver related alerts route endpoint events should return details for the root node
[00:13:34]                     │      Error: expected undefined to be truthy
[00:13:34]                     │       at Assertion.assert (/dev/shm/workspace/kibana/packages/kbn-expect/expect.js:100:11)
[00:13:34]                     │       at Assertion.ok (/dev/shm/workspace/kibana/packages/kbn-expect/expect.js:122:8)
[00:13:34]                     │       at Function.ok (/dev/shm/workspace/kibana/packages/kbn-expect/expect.js:531:15)
[00:13:34]                     │       at forEach (test/api_integration/apis/endpoint/resolver.ts:175:13)
[00:13:34]                     │       at Array.forEach (<anonymous>)
[00:13:34]                     │       at compareArrays (test/api_integration/apis/endpoint/resolver.ts:170:10)
[00:13:34]                     │       at Context.it (test/api_integration/apis/endpoint/resolver.ts:280:11)
[00:13:34]                     │ 
[00:13:34]                     │ 

Stack Trace

Error: expected undefined to be truthy
    at Assertion.assert (/dev/shm/workspace/kibana/packages/kbn-expect/expect.js:100:11)
    at Assertion.ok (/dev/shm/workspace/kibana/packages/kbn-expect/expect.js:122:8)
    at Function.ok (/dev/shm/workspace/kibana/packages/kbn-expect/expect.js:531:15)
    at forEach (test/api_integration/apis/endpoint/resolver.ts:175:13)
    at Array.forEach (<anonymous>)
    at compareArrays (test/api_integration/apis/endpoint/resolver.ts:170:10)
    at Context.it (test/api_integration/apis/endpoint/resolver.ts:280:11)

Kibana Pipeline / x-pack-intake-agent / X-Pack Jest Tests.x-pack/plugins/uptime/public/components/common/charts/__tests__.PingHistogram component renders the component without errors

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has failed 8 times on tracked branches: https://github.com/elastic/kibana/issues/73522


Stack Trace

Error: expect(received).toMatchSnapshot()

Snapshot name: `PingHistogram component renders the component without errors 1`

- Snapshot  - 1
+ Received  + 1

@@ -3,11 +3,11 @@
      class="euiTitle euiTitle--xsmall"
    >
      Pings over time
    </h2>,
    <div
-     aria-label="Bar Chart showing uptime status over time from a year ago to a year ago."
+     aria-label="Bar Chart showing uptime status over time from 2 years ago to 2 years ago."
      style="height:100%;opacity:1;transition:opacity 0.2s"
    >
      <div
        class="echChart"
      >
    at Object.it (/dev/shm/workspace/kibana/x-pack/plugins/uptime/public/components/common/charts/__tests__/ping_histogram.test.tsx:58:23)
    at Promise (/dev/shm/workspace/kibana/node_modules/jest-circus/build/utils.js:198:28)
    at new Promise (<anonymous>)
    at callAsyncCircusFn (/dev/shm/workspace/kibana/node_modules/jest-circus/build/utils.js:162:10)
    at _callCircusTest (/dev/shm/workspace/kibana/node_modules/jest-circus/build/run.js:205:40)
    at process._tickCallback (internal/process/next_tick.js:68:7)

and 1 more failures, only showing the first 3.

Build metrics

✅ unchanged

History

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

@roncohen
Contributor

@restrry the use case is that Elastic Agent connects to Kibana and does long-polling. Long-polling means Elastic Agent can react to user-initiated changes immediately while, at the same time, reducing the number of requests Kibana would need to respond to compared with regular polling. Increasing the socket timeout greatly increases the number of Elastic Agents each Kibana instance can handle.
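For illustration, an agent-side long-poll loop might look roughly like this; the endpoint, payload shape, and handler are hypothetical, and a fetch-capable runtime is assumed:

```ts
// Illustrative agent-side long-poll loop (endpoint and response shape are
// hypothetical): each check-in request stays open until Kibana has actions for
// the agent or the idle socket timeout is reached, then the agent re-polls.
async function checkinLoop(
  kibanaUrl: string,
  agentId: string,
  handleAction: (action: unknown) => Promise<void>
) {
  while (true) {
    const res = await fetch(`${kibanaUrl}/api/fleet/agents/${agentId}/checkin`, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ events: [] }),
    });
    const { actions = [] } = (await res.json()) as { actions?: unknown[] };
    for (const action of actions) {
      // React to user-initiated changes as soon as Kibana reports them.
      await handleAction(action);
    }
  }
}
```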

Can we help move this forward somehow?

@mshustov
Contributor

mshustov commented Aug 3, 2020

@roncohen How soon do you need it? As far as I can see, @kobelb is back tomorrow.

@roncohen
Contributor

@restrry @kobelb aiming for 7.10 for this, right?

@kobelb
Contributor Author

kobelb commented Aug 10, 2020

@restrry @kobelb aiming for 7.10 for this, right?

Yup. However, we'll likely go with #73730 instead of this PR.

@mshustov
Contributor

@kobelb can we close this PR now that #73730 has merged?

@kobelb
Contributor Author

kobelb commented Aug 19, 2020

@kobelb can we close this PR now that #73730 has merged?

Yup, absolutely!

@kobelb kobelb closed this Aug 19, 2020