
Disable loading lookups by default in CompactionTask #16420

Merged

Conversation

@Akshat-Jain (Contributor) commented May 9, 2024

Description

This PR updates CompactionTask so that it does not load any lookups by default, unless a transformSpec is present.

If a transformSpec is present, the decision is made based on context values, defaulting to loading all lookups. This preserves backward compatibility, since a transformSpec can reference lookups.
If no transformSpec is present and no context value is passed, no lookups are loaded.

This behavior can be overridden by supplying lookupLoadingMode and lookupsToLoad in the task context, as in the sketch below.
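
A minimal sketch (not taken from this PR) of what such an override could look like when building a compaction task's context programmatically. The mode value ONLY_REQUIRED and the lookup name country_lookup are illustrative assumptions; only the two context keys come from the description above.

import java.util.List;
import java.util.Map;

class LookupContextOverrideSketch
{
  // Builds a task context that overrides the default lookup-loading behavior
  // using the two context keys named in the description above.
  static Map<String, Object> lookupOverrideContext()
  {
    return Map.of(
        "lookupLoadingMode", "ONLY_REQUIRED",      // assumed mode value, for illustration
        "lookupsToLoad", List.of("country_lookup") // hypothetical lookup name
    );
  }
}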

Other changes/refactoring:

  1. Moved the CTX_LOOKUP_LOADING_MODE and CTX_LOOKUPS_TO_LOAD constants from PlannerContext to LookupLoadingSpec, since PlannerContext isn't available in the indexing module.
  2. Moved the logic for evaluating the context and creating a LookupLoadingSpec from it into LookupLoadingSpec#getSpecFromContext.
  3. Updated Task#getLookupLoadingSpec to return LookupLoadingSpec.getSpecFromContext(getContext(), LookupLoadingSpec.ALL). The default of ALL ensures that no existing behavior is broken unless the context has been overridden. Making this change in the Task interface avoids having to repeat it in each of the tasks spawned by CompactionTask. (See the sketch after this list.)
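
A self-contained sketch of the pattern described in points 2 and 3, using stand-in types rather than the actual Druid classes (the real LookupLoadingSpec also handles the lookupsToLoad key and validation, which this sketch omits):

import java.util.Map;

class LookupLoadingSketch
{
  // Stand-in for the LookupLoadingSpec modes.
  enum Mode { ALL, NONE, ONLY_REQUIRED }

  // Stand-in for LookupLoadingSpec.getSpecFromContext(context, defaultSpec):
  // read the lookupLoadingMode context key, falling back to the given default.
  static Mode getSpecFromContext(Map<String, Object> context, Mode defaultMode)
  {
    Object mode = context.get("lookupLoadingMode");
    return mode == null ? defaultMode : Mode.valueOf(mode.toString());
  }

  // Stand-in for the Task interface default from point 3: unless the context
  // overrides it, every task keeps the old behavior of loading all lookups.
  interface Task
  {
    Map<String, Object> getContext();

    default Mode getLookupLoadingSpec()
    {
      return getSpecFromContext(getContext(), Mode.ALL);
    }
  }
}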

Test Plan

  1. Tried manual compaction with all 3 partitioning modes (dynamic, range, hash) - validated that no lookups are loaded in the compaction task or in any of the spawned tasks.
  2. Repeated (1) while overriding the task context with combinations of lookupLoadingMode and lookupsToLoad - validated that the lookups in all tasks (compaction + spawned) were loaded as per the overridden context. This covers use cases where lookups are needed during compaction.
  3. Validated that auto-compaction also doesn't load any lookups by default.
  4. Also tried ingestion in all 3 partitioning modes (dynamic, range, hash) to ensure that ingestion behavior is unaffected.
  5. Validated different types of MSQ queries and ingestion to make sure their behavior is unaffected.

This PR has:

  • been self-reviewed.
  • added documentation for new or modified features or behaviors.
  • a release note entry in the PR description.
  • added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • added or updated version, license, or notice information in licenses.yaml
  • added comments explaining the "why" and the intent of the code wherever it would not be obvious for an unfamiliar reader.
  • added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
  • added integration tests.
  • been tested in a test Druid cluster.

@github-actions bot added the Area - Batch Ingestion, Area - Querying, Area - Ingestion, and Area - MSQ (for multi stage queries - https://github.com/apache/druid/issues/12262) labels on May 9, 2024

// Return params: <context, default LookupLoadingSpec, expected LookupLoadingSpec>
Object[][] params = new Object[][]{
// Default spec is returned in the case of context not having the lookup keys.

kgyrtkirk (Member) commented on the snippet above:

I wonder if the intention was to name the set of configs somehow; you could use Named.of("default", ...) to name the parameters.

@Akshat-Jain (author) replied:

Sorry, I didn't get it. Could you please elaborate? Thanks!

Not sure if this helps, but the intention here was just to have a parameterized test, as that is cleaner than separate test methods for the different combinations. The provideParamsForTestCreateFromContext method just supplies params to the testGetLookupLoadingSpecFromContext test via @MethodSource("provideParamsForTestCreateFromContext").

kgyrtkirk (Member) replied:

That's great and fine; but instead of leaving comments about a param's intentions, it's better to name it with Named, so that a more descriptive name is shown when the test case is run (an SO example was linked).
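
For illustration, a minimal self-contained JUnit 5 sketch of the Named.of approach described above; the test class, provider contents, and parameter values are hypothetical and not taken from the actual Druid test:

import java.util.Collections;
import java.util.Map;
import java.util.stream.Stream;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Named;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;

class NamedParamsExampleTest
{
  // Each parameter set is wrapped in Named.of (JUnit 5.8+), so the test report
  // shows "empty context" / "mode only" instead of the Map's toString().
  static Stream<Arguments> provideContexts()
  {
    return Stream.of(
        Arguments.of(Named.of("empty context", Collections.<String, Object>emptyMap())),
        Arguments.of(Named.of("mode only", Map.<String, Object>of("lookupLoadingMode", "NONE")))
    );
  }

  @ParameterizedTest
  @MethodSource("provideContexts")
  void contextIsNeverNull(Map<String, Object> context)
  {
    Assertions.assertNotNull(context);
  }
}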

@Akshat-Jain (author) replied:

@kgyrtkirk I see.
The above test was refactored into non-parameterized tests, so this should be fine, but I will keep that in mind for future code changes.

@kfaraz (Contributor) left a review:

Left one test-related comment; the rest of the changes look good.

@kfaraz commented May 13, 2024

Changes look good to me. @kgyrtkirk, @cryptoe, do you have any further feedback?

@cryptoe (Contributor) left a review:

The description seems outdated. Can you please update it, @Akshat-Jain?

@cryptoe merged commit ddfd62d into apache:master on May 15, 2024
86 of 87 checks passed
@gianm commented May 17, 2024

This patch failed on the group cds-task-schema-publish-disabled because it took too long (6h) and was canceled. I don't see such a failure on previous PRs, but I do see it on a bunch of later PRs (#16457, #16252, #16366). Is it possible this patch did something to make that group take a lot longer than it used to? In the past, it has taken about 1.5h.

@Akshat-Jain (author) replied:

The cds check passed on the penultimate commit of this PR (https://github.com/apache/druid/actions/runs/9061792112/job/24906027048), and the latest commit didn't change anything other than an error message in a couple of files (e6e92a4).

Also, this PR doesn't touch the cds part at all, hence it was merged.
