
ddl: Fix unstable DROP TABLE/FLASHBACK TABLE/RECOVER TABLE (#8422) #8443

Conversation

ti-chi-bot
Member

This is an automated cherry-pick of #8422

What problem does this PR solve?

Issue Number: close #8395, close #1664, close #3777

Problem Summary:

Problem 1:
There is a chance that a raft snapshot or raft command for a table arrives after the table has been dropped. TiFlash ignores the earlier DROP TABLE because the storage instance has not been created yet. When the raft snapshot or raft command arrives later, the storage instance is created as "non tombstone", and it will never be physically dropped until TiFlash restarts.
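
Below is a minimal, self-contained C++ sketch of this ordering hazard and the intended fix. All names (Storage, StorageMap, applyDropTable, applyRaftSnapshot) are hypothetical stand-ins for illustration only, not TiFlash's actual classes or functions.

// A minimal simulation of the ordering hazard; all names are hypothetical.
#include <cstdint>
#include <iostream>
#include <map>
#include <optional>
#include <string>

struct Storage
{
    std::string name;
    std::optional<uint64_t> tombstone_ts; // empty => "non tombstone", never physically dropped
};

using StorageMap = std::map<int64_t, Storage>; // table_id -> storage instance

// DROP TABLE arrives first: there is nothing to tombstone yet, so it is silently ignored.
void applyDropTable(StorageMap & storages, int64_t table_id, uint64_t drop_ts)
{
    auto it = storages.find(table_id);
    if (it == storages.end())
        return; // the drop is forgotten here
    it->second.tombstone_ts = drop_ts;
}

// A late raft snapshot / raft command then creates the storage from scratch.
void applyRaftSnapshot(StorageMap & storages, int64_t table_id, bool table_already_dropped, uint64_t drop_ts)
{
    Storage storage{"t", std::nullopt};
    // The fix sketched by this PR: if the table info is only reachable through an
    // MVCC get (the table is already dropped), create the storage with a tombstone
    // timestamp so it can be physically dropped after the GC time.
    if (table_already_dropped)
        storage.tombstone_ts = drop_ts;
    storages.emplace(table_id, storage);
}

int main()
{
    StorageMap storages;
    applyDropTable(storages, /*table_id=*/100, /*drop_ts=*/42); // ignored: storage not created yet
    applyRaftSnapshot(storages, /*table_id=*/100, /*table_already_dropped=*/true, /*drop_ts=*/42);
    std::cout << "tombstone set: " << (storages.at(100).tombstone_ts.has_value() ? "yes" : "no") << '\n';
}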

Problem 2:
The second time syncTableSchema calls trySyncTableSchema, after the table-id-mapping is up to date, it should use an MVCC get so that it can use the table schema of the tombstoned table to create the Storage instance and decode new raft logs; otherwise the data of newly added columns will not be decoded. Previously, updateTiFlashReplica happened to update the table info with the latest columns by accident, so TiFlash passed the related tests only sometimes.

CREATE TABLE t (a int, b int);
INSERT INTO t VALUES (1, 1);
ALTER TABLE t SET TIFLASH REPLICA 1;
# wait for the tiflash replica to be ready

ALTER TABLE t ADD COLUMN c int;
INSERT INTO t VALUES (1,2,3);
DROP TABLE t;

# the row (1,2,3) arrives in tiflash after `DROP TABLE t` has been executed
RECOVER TABLE t; -- or FLASHBACK TABLE t

SELECT * FROM t;
# should return rows (1, 1, NULL) and (1, 2, 3)
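
To make the decoding issue of Problem 2 concrete, here is a toy C++ sketch; the Row/Schema types and decodeWithSchema are invented for illustration and do not reflect TiFlash's real decoding code. If the late row (1, 2, 3) is decoded with the stale two-column schema instead of the three-column schema of the tombstoned table obtained via an MVCC get, the value of column c is silently lost.

// A toy illustration of decoding against a stale vs. up-to-date schema.
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

using Row = std::vector<int>;
using Schema = std::vector<std::string>; // ordered column names

// Decode only as many values as the schema knows about; extra values are lost.
Row decodeWithSchema(const Row & raw_values, const Schema & schema)
{
    Row decoded;
    for (size_t i = 0; i < schema.size() && i < raw_values.size(); ++i)
        decoded.push_back(raw_values[i]);
    return decoded;
}

int main()
{
    Row late_row = {1, 2, 3}; // written after ADD COLUMN c, arrives after DROP TABLE

    Schema stale_schema = {"a", "b"};     // schema cached before ADD COLUMN c
    Schema mvcc_schema = {"a", "b", "c"}; // schema of the tombstoned table via MVCC get

    std::cout << "stale schema decodes " << decodeWithSchema(late_row, stale_schema).size() << " columns\n"; // 2: column c lost
    std::cout << "mvcc schema decodes " << decodeWithSchema(late_row, mvcc_schema).size() << " columns\n";   // 3: full row kept
}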

What is changed and how it works?

This PR follows #8421.
The logic changes are in these commits: https://github.com/pingcap/tiflash/pull/8422/files/d88f6b6f4ae4c8026b969cd9c5ae50924b179529..1fe991997055f755aea89cd2e2bdb8ab26a848bf

  • The first time syncTableSchema calls trySyncTableSchema, it does not use an MVCC get, so that we can detect whether the table-id-mapping needs to be updated
  • The second time syncTableSchema calls trySyncTableSchema, after the table-id-mapping is up to date, it uses an MVCC get so that it can use the table schema of the tombstoned table to create the Storage instance or decode new raft logs (see the sketch after this list)
  • SchemaGetter::getTableInfoImpl no longer checks the existence of db_key, so the table info can still be fetched after the database is dropped (preparation for FLASHBACK DATABASE ... TO ...)
  • If a storage instance is created from table info obtained by a client-c MVCC get, the storage instance is created with a tombstone timestamp, so that it can be physically dropped after the GC time
  • applySetTiFlashReplica only updates the TiFlash replica info instead of replacing the whole table info; otherwise some later DDLs (changing partitions, etc.) would not be executed
  • Simplify the recover table code using early returns
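
The control flow of the first two bullets could look roughly like the following C++ sketch. getTableInfo, getTableInfoByMvcc, and the simplified signatures are hypothetical; this only illustrates the intended two-pass logic, not the actual TiFlash implementation.

// A rough sketch of the two-pass schema sync, with hypothetical names.
#include <cstdint>
#include <iostream>
#include <optional>

struct TableInfo
{
    int64_t id;
    bool is_dropped; // true when the info is only visible via an MVCC get
};

// Pass 1: a plain get; it returns nothing for a dropped table, which tells the
// caller that the table-id-mapping must be refreshed.
std::optional<TableInfo> getTableInfo(int64_t table_id, bool dropped)
{
    if (dropped)
        return std::nullopt;
    return TableInfo{table_id, false};
}

// Pass 2: an MVCC get; it still returns the (tombstoned) table info after DROP TABLE.
std::optional<TableInfo> getTableInfoByMvcc(int64_t table_id, bool dropped)
{
    return TableInfo{table_id, dropped};
}

bool trySyncTableSchema(int64_t table_id, bool dropped, bool use_mvcc_get)
{
    auto info = use_mvcc_get ? getTableInfoByMvcc(table_id, dropped) : getTableInfo(table_id, dropped);
    if (!info)
        return false; // caller should refresh the table-id-mapping and retry
    // Create or alter the storage; a dropped table gets a tombstone timestamp
    // so that it can still be physically removed after the GC time.
    std::cout << "sync table " << info->id << (info->is_dropped ? " (tombstoned)" : "") << '\n';
    return true;
}

void syncTableSchema(int64_t table_id, bool dropped)
{
    if (trySyncTableSchema(table_id, dropped, /*use_mvcc_get=*/false))
        return;
    // ... refresh the table-id-mapping here ...
    trySyncTableSchema(table_id, dropped, /*use_mvcc_get=*/true);
}

int main()
{
    syncTableSchema(/*table_id=*/100, /*dropped=*/true);
}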

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No code

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Documentation

  • Affects user behaviors
  • Contains syntax changes
  • Contains variable changes
  • Contains experimental features
  • Changes MySQL compatibility

Release note

Fix a potential issue that some data cannot be recovered by `RECOVER TABLE` or `FLASHBACK TABLE`

Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
Contributor

ti-chi-bot bot commented Nov 30, 2023

This cherry pick PR is for a release branch and has not yet been approved by triage owners.
Adding the do-not-merge/cherry-pick-not-approved label.

To merge this cherry pick:

  1. It must be approved by the approvers firstly.
  2. AFTER it has been approved by approvers, please wait for the cherry-pick merging approval from triage owners.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Contributor

ti-chi-bot bot commented Nov 30, 2023

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign jiaqizho for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@JaySon-Huang
Contributor

Will not backport to release-6.1 because it relies on "Refactoring the DDL framework", which changes a lot of code.
