chore: prevent multiple room finds during requestRoom method #32363

Draft · wants to merge 13 commits into base: develop

Conversation

@ggazzo ggazzo (Member) commented May 2, 2024

Proposed changes (including videos or screenshots)

Reordering the steps lets us avoid checking the same things multiple times.

Previously, we would check whether the system had an agent online, then find which agent was online, and then run the assignment check yet again. These repeated lookups made the flow hard to follow and wasted CPU.

We also stop calling the same collection multiple times to update different fields on the same document, reducing the number of messages sent to the database.
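As a hedged illustration of the second point (names and shapes are invented for this sketch, not the actual Rocket.Chat code), several per-field updates on the same room document can be merged into a single update payload so only one message goes to the database:

```typescript
// Hypothetical helper: merge several partial `$set` payloads so one document
// is updated in a single round trip instead of several.
type SetPayload = Record<string, unknown>;

function mergeSetUpdates(...updates: SetPayload[]): { $set: SetPayload } {
  const merged: SetPayload = {};
  for (const update of updates) {
    Object.assign(merged, update); // later fields win on key conflicts
  }
  return { $set: merged };
}

// Instead of three separate updateOne calls on the same room document...
const update = mergeSetUpdates(
  { servedBy: { _id: 'agent-1', username: 'agent' } },
  { 'v.status': 'online' },
  { waitingResponse: true },
);
// ...send one combined update: rooms.updateOne({ _id: roomId }, update)
console.log(JSON.stringify(update));
```

The same idea applies to MongoDB's real `$set` operator: one `updateOne` with a combined document costs one round trip regardless of how many fields it touches.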

Database operations (find/update) for GET http://localhost:3000/api/v1/livechat/room?token=${visitor.token}: 88 before versus 76 after.

As a result, the order in which hooks run has changed slightly, but I don't believe this is problematic.

old:

 run livechat.applyRoomRestrictions 
 run livechat.onCheckRoomApiParams 
 run livechat.checkDefaultAgentOnNewRoom 
 run livechat.beforeRoom 
 run livechat.newRoom 
 run afterSaveMessage 
 run beforeGetMentions 
 run beforeGetMentions 
 run beforeSendMessageNotifications 
 run livechat.beforeInquiry 
 run livechat.beforeDelegateAgent 
 run livechat.chatQueued 
 run livechat.afterInquiryQueued 
 run renderNotification 
 run livechat.beforeRouteChat

new:


 run livechat.applyRoomRestrictions
 run livechat.onCheckRoomApiParams
 run livechat.checkDefaultAgentOnNewRoom
 run livechat.beforeDelegateAgent
 run livechat.beforeRoom
 run livechat.newRoom
 run afterSaveMessage
 run beforeGetMentions
 run beforeGetMentions
 run beforeSendMessageNotifications
 run livechat.beforeInquiry
 run livechat.new-beforeRouteChat
 run livechat.afterInquiryQueued
 run livechat.chatQueued
 run renderNotification
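The key difference between the two traces is that livechat.beforeDelegateAgent now runs before room creation instead of after queueing. A hedged sketch of the idea (function and hook names mimic the log above but are not the actual Rocket.Chat implementation): resolve the agent once up front and thread the result through, instead of looking it up again later.

```typescript
// Illustrative sketch only; not Rocket.Chat's real code.
type Agent = { _id: string; username: string };

let agentFinds = 0; // counts the expensive "who is online?" queries

async function findOnlineAgent(): Promise<Agent | null> {
  agentFinds += 1;
  return { _id: 'agent-1', username: 'agent' };
}

// Old flow: check that someone is online, then look the agent up *again*
// when delegating the room — two finds per request.
async function requestRoomOld() {
  if ((await findOnlineAgent()) === null) throw new Error('no agents online');
  return { _id: 'room-1', servedBy: await findOnlineAgent() };
}

// New flow: hoist the lookup (cf. beforeDelegateAgent running earlier) and
// reuse its result during room creation — one find per request.
async function requestRoomNew() {
  const agent = await findOnlineAgent();
  if (agent === null) throw new Error('no agents online');
  return { _id: 'room-1', servedBy: agent };
}
```

Under this reading, the reorder is what allows the duplicate find to be dropped, which also explains why chatQueued and afterInquiryQueued swap positions in the trace.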

Issue(s)

before:

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

     execution: local
        script: create-room-omnichannel.js
        output: Prometheus remote write (http://localhost:9090/api/v1/write)

     scenarios: (100.00%) 1 scenario, 1 max VUs, 1m0s max duration (incl. graceful stop):
              * default: 1 looping VUs for 30s (gracefulStop: 30s)


     █ setup

     data_received..................: 2.1 MB 70 kB/s
     data_sent......................: 243 kB 8.1 kB/s
     http_req_blocked...............: avg=7.13µs  min=1µs    med=2µs     max=5.51ms   p(90)=3µs     p(95)=3µs
     http_req_connecting............: avg=2.68µs  min=0s     med=0s      max=3.22ms   p(90)=0s      p(95)=0s
     http_req_duration..............: avg=24.9ms  min=4.76ms med=31.63ms max=100.54ms p(90)=46.93ms p(95)=49.92ms
       { expected_response:true }...: avg=24.9ms  min=4.76ms med=31.63ms max=100.54ms p(90)=46.93ms p(95)=49.92ms
     http_req_failed................: 0.00%  ✓ 0         ✗ 1200
     http_req_receiving.............: avg=52.31µs min=27µs   med=47µs    max=736µs    p(90)=68µs    p(95)=76µs
     http_req_sending...............: avg=12.8µs  min=7µs    med=12µs    max=86µs     p(90)=16µs    p(95)=19µs
     http_req_tls_handshaking.......: avg=0s      min=0s     med=0s      max=0s       p(90)=0s      p(95)=0s
     http_req_waiting...............: avg=24.83ms min=4.72ms med=31.56ms max=100.48ms p(90)=46.83ms p(95)=49.87ms
     http_reqs......................: 1200   39.979253/s
     iteration_duration.............: avg=49.92ms min=875ns  med=48.9ms  max=107.63ms p(90)=57.58ms p(95)=63.55ms
     iterations.....................: 600    19.989627/s
     vus............................: 1      min=1       max=1
     vus_max........................: 1      min=1       max=1


running (0m30.0s), 0/1 VUs, 600 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs  30s

after:

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

     execution: local
        script: create-room-omnichannel.js
        output: Prometheus remote write (http://localhost:9090/api/v1/write)

     scenarios: (100.00%) 1 scenario, 1 max VUs, 1m0s max duration (incl. graceful stop):
              * default: 1 looping VUs for 30s (gracefulStop: 30s)


     █ setup

     data_received..................: 3.2 MB 106 kB/s
     data_sent......................: 365 kB 12 kB/s
     http_req_blocked...............: avg=5µs     min=1µs    med=2µs     max=5.56ms   p(90)=2µs     p(95)=3µs
     http_req_connecting............: avg=2.55µs  min=0s     med=0s      max=4.6ms    p(90)=0s      p(95)=0s
     http_req_duration..............: avg=16.55ms min=4.1ms  med=19.01ms max=109.15ms p(90)=29.89ms p(95)=32.56ms
       { expected_response:true }...: avg=16.55ms min=4.1ms  med=19.01ms max=109.15ms p(90)=29.89ms p(95)=32.56ms
     http_req_failed................: 0.00%  ✓ 0         ✗ 1804
     http_req_receiving.............: avg=46.38µs min=23µs   med=44µs    max=156µs    p(90)=60µs    p(95)=66.84µs
     http_req_sending...............: avg=11.78µs min=5µs    med=11µs    max=302µs    p(90)=15µs    p(95)=16.84µs
     http_req_tls_handshaking.......: avg=0s      min=0s     med=0s      max=0s       p(90)=0s      p(95)=0s
     http_req_waiting...............: avg=16.49ms min=4.05ms med=18.94ms max=109.08ms p(90)=29.84ms p(95)=32.49ms
     http_reqs......................: 1804   60.071478/s
     iteration_duration.............: avg=33.24ms min=584ns  med=31.89ms max=123.58ms p(90)=39.34ms p(95)=43.2ms
     iterations.....................: 902    30.035739/s
     vus............................: 1      min=1       max=1
     vus_max........................: 1      min=1       max=1


running (0m30.0s), 0/1 VUs, 902 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs  30s
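Comparing the two k6 runs above: iterations rise from 600 to 902 in the same 30 s window, consistent with the median request duration dropping from 31.63 ms to 19.01 ms. A quick sanity check on those numbers:

```typescript
// Numbers copied from the k6 summaries above.
const before = { iterations: 600, medMs: 31.63 };
const after = { iterations: 902, medMs: 19.01 };

const throughputGain = after.iterations / before.iterations - 1; // ≈ 0.503
const latencyDrop = 1 - after.medMs / before.medMs;              // ≈ 0.399

console.log(`throughput: +${(throughputGain * 100).toFixed(1)}%`);
console.log(`median latency: -${(latencyDrop * 100).toFixed(1)}%`);
```

So the change buys roughly a 50% throughput increase and a ~40% lower median latency on this single-VU workload.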

Steps to test or reproduce

Further comments

https://rocketchat.atlassian.net/browse/CORE-440

dionisio-bot (Contributor) commented May 2, 2024

Looks like this PR is not ready to merge, because of the following issues:

  • This PR is missing the 'stat: QA assured' label
  • This PR is missing the required milestone or project

Please fix the issues and try again

If you have any trouble, please check the PR guidelines


changeset-bot bot commented May 2, 2024

⚠️ No Changeset found

Latest commit: a62c7eb

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types



codecov bot commented May 2, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 55.55%. Comparing base (7d5bdde) to head (a62c7eb).

Additional details and impacted files


@@           Coverage Diff            @@
##           develop   #32363   +/-   ##
========================================
  Coverage    55.55%   55.55%           
========================================
  Files         2406     2402    -4     
  Lines        52882    52833   -49     
  Branches     10861    10855    -6     
========================================
- Hits         29376    29350   -26     
+ Misses       20900    20882   -18     
+ Partials      2606     2601    -5     
Flag  Coverage Δ
e2e   54.78% <ø> (-0.01%) ⬇️
unit  73.58% <ø> (+0.01%) ⬆️

Flags with carried forward coverage won't be shown.
