PERF-1885 unique index benchmark #1204

Open

wants to merge 1 commit into base: master
Conversation

@wh5a wh5a commented Apr 28, 2024

Jira Ticket: PERF-1885

What's Changed

A new workload that tests the performance of unique indexes, in particular concurrent insertion of duplicate keys. It also tests concurrent updates and deletions of documents through those indexes, scaling from 8 threads up to 128 threads.
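As a rough sketch, a workload of this shape could be expressed in Genny-style YAML like the following (the actor name, thread count, and key range are illustrative assumptions, not the actual patch):

```yaml
# Hypothetical sketch of the unique-index workload (illustrative only).
SchemaVersion: 2018-07-01
Owner: Performance

Actors:
- Name: UniqueIndexContention
  Type: CrudActor
  Threads: 8            # the real workload scales this from 8 up to 128
  Database: test
  Phases:
  - Repeat: 1000
    Collection: Collection0
    ThrowOnFailure: false   # duplicate-key errors are expected, not a failure
    Operations:
    - OperationName: insertOne
      OperationCommand:
        Document: {uniqueKey: {^RandomInt: {min: 0, max: 1000}}}
```

Drawing keys from a small range makes collisions across threads likely, which is what exercises the duplicate-key path of the unique index.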

Patch Testing Results

https://spruce.mongodb.com/version/662d950c48f968000712eaf6/tasks?sorts=STATUS%3AASC%3BBASE_STATUS%3ADESC

@wh5a wh5a requested a review from louiswilliams April 28, 2024 01:09
@wh5a wh5a requested review from a team as code owners April 28, 2024 01:09
@wh5a wh5a requested a review from ghartnett April 28, 2024 01:09
@thessem (Collaborator) left a comment
Only leaving a review for docs/using.md, as that's the file my team is marked as owning. Looks good to me.

```yaml
ThrowOnFailure: false
Operations:
- &InsertOp
  OperationName: insertMany
```
A Contributor left a comment

In thinking about diagnosability and debuggability, I worry that there are a few too many variables that might affect the latency of this phase, which would make performance changes hard to interpret:

  1. The latency of a successful insert
  2. The latency of an unsuccessful insert due to an already-present key
  3. The latency of an unsuccessful insert that fails due to a concurrent insertion of the same key

We can isolate these distinct behaviors by separating them into different phases. We can add an uncontended load phase to handle case 1, free of any duplicate key errors or concurrent writes. We can also isolate cases 2 & 3 by running an insert phase with a single thread.

Additionally, we could further isolate the behavior here by having one phase that only inserts duplicate keys.
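That phase separation might be sketched roughly as follows (the actor structure, phase parameters, and the `^Inc` generator are illustrative assumptions, not a concrete proposal for the patch):

```yaml
# Hypothetical phase separation isolating the latency cases above.
# Actor names, repeat counts, and generators are illustrative only.
Actors:
- Name: UncontendedLoad         # case 1: successful inserts only
  Type: CrudActor
  Threads: 1
  Phases:
  - Repeat: 10000
    Collection: Collection0
    Operations:
    - OperationName: insertOne
      OperationCommand:
        Document: {uniqueKey: {^Inc: {start: 0}}}   # strictly increasing, never collides

- Name: SingleThreadDuplicates  # case 2: duplicate-key failures, no concurrency
  Type: CrudActor
  Threads: 1
  Phases:
  - Repeat: 10000
    Collection: Collection0
    ThrowOnFailure: false       # duplicate-key errors are expected here
    Operations:
    - OperationName: insertOne
      OperationCommand:
        Document: {uniqueKey: 0}   # always collides with an existing key
```

A third, many-threaded actor inserting the same small set of keys would then isolate case 3, the contended path.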

3 participants