ORACLE Delete data
Overview of what a DELETE statement actually does:
- selecting the rows;
- checking internal and external constraints (make sure to index FKs targeting the table, otherwise read this; see the sketch after this list);
- maintaining indexes;
- feeding undo segments and generating redo;
- feeding the temp segment with the result set;
- leaving fragmented space afterwards (in data and indexes), causing scattered reads.
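On the foreign-key point above: a quick way to spot unindexed FKs pointing at the table is to compare constraint columns with index columns in the data dictionary. A minimal sketch, checking only the leading FK column (a rough heuristic, not a complete multi-column check):

```sql
-- Rough heuristic: FK constraints whose leading column is not the leading
-- column of any index on the child table. Unindexed FKs cause full scans
-- (and locking) on the child table when parent rows are deleted.
SELECT c.constraint_name, cc.table_name, cc.column_name
FROM   user_constraints  c
JOIN   user_cons_columns cc
       ON  cc.constraint_name = c.constraint_name
       AND cc.position        = 1
WHERE  c.constraint_type = 'R'
AND NOT EXISTS (
       SELECT 1
       FROM   user_ind_columns ic
       WHERE  ic.table_name      = cc.table_name
       AND    ic.column_name     = cc.column_name
       AND    ic.column_position = 1
);
```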
To delete most of a table's rows, use CTAS (CREATE TABLE ... AS SELECT, keeping the rows you need) and rename. One drawback: you must back up and later recreate all indexes/constraints, e.g. using dbms_metadata.get_ddl.
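A minimal sketch of the CTAS-and-rename approach (orders and the predicate are hypothetical names); indexes, constraints, grants and triggers have to be recreated on the new table, which is where dbms_metadata.get_ddl helps:

```sql
-- Keep only the surviving rows, then swap the tables.
CREATE TABLE orders_keep AS
  SELECT * FROM orders WHERE order_date >= DATE '2018-01-01';

-- Script out the index DDL before dropping the old table:
SELECT dbms_metadata.get_ddl('INDEX', index_name)
FROM   user_indexes
WHERE  table_name = 'ORDERS';

DROP TABLE orders;
RENAME orders_keep TO orders;
-- Re-run the captured DDL: indexes, constraints, grants, triggers.
```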
Alternatively, consider partitioning the table on the column used to select the rows to delete, and use ALTER TABLE ... DROP PARTITION ... UPDATE INDEXES;
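For illustration, a hypothetical range-partitioned table where a whole period is purged as a fast metadata operation; UPDATE INDEXES keeps global indexes usable at the cost of extra work during the drop:

```sql
CREATE TABLE events (
  event_id   NUMBER,
  event_date DATE
)
PARTITION BY RANGE (event_date) (
  PARTITION p_2018_11 VALUES LESS THAN (DATE '2018-12-01'),
  PARTITION p_2018_12 VALUES LESS THAN (DATE '2019-01-01')
);

-- Purge November 2018 without deleting a single row:
ALTER TABLE events DROP PARTITION p_2018_11 UPDATE INDEXES;
```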
Overview of how to speed up a large DELETE:
- create an index to speed up row selection;
- select ROWIDs, and issue the DELETE statement against those ROWIDs;
- use FETCH ... BULK COLLECT ... LIMIT 10000 with FORALL to limit context switches, and COMMIT at each loop iteration (see the PL/SQL sketch after this list);
- try disabling internal and external constraints, re-enabling them afterwards (check the ROI);
- consider making indexes unusable and rebuilding them afterwards (check the ROI); this avoids the scattered reads of index maintenance;
- consider shrinking the table afterwards.
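A minimal PL/SQL sketch of the batched ROWID delete (big_table and its predicate are hypothetical):

```sql
DECLARE
  TYPE t_rowids IS TABLE OF ROWID;
  l_rowids t_rowids;
  CURSOR c_doomed IS
    SELECT rowid FROM big_table WHERE status = 'OBSOLETE';
BEGIN
  OPEN c_doomed;
  LOOP
    -- One fetch per 10000 rows limits PL/SQL <-> SQL context switches.
    FETCH c_doomed BULK COLLECT INTO l_rowids LIMIT 10000;
    EXIT WHEN l_rowids.COUNT = 0;

    FORALL i IN 1 .. l_rowids.COUNT
      DELETE FROM big_table WHERE rowid = l_rowids(i);

    COMMIT;  -- one commit per batch, as suggested above
  END LOOP;
  CLOSE c_doomed;
END;
/
```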
When a lot of rows have been deleted, reclaim the space:
- easy way (does NOT lock the table): SHRINK SPACE COMPACT
- hard way (locks the table): MOVE TABLE
- manual way (drop the old table and recreate its constraints): INSERT INTO ... SELECT ... FROM
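Hedged sketches of the first two options (big_table is a hypothetical name):

```sql
-- Easy way: compact without locking (needs ASSM and row movement).
ALTER TABLE big_table ENABLE ROW MOVEMENT;
ALTER TABLE big_table SHRINK SPACE COMPACT;  -- compacts rows, keeps the HWM
-- ALTER TABLE big_table SHRINK SPACE;       -- also lowers the HWM

-- Hard way: rebuild the segment (locks the table, marks indexes UNUSABLE).
ALTER TABLE big_table MOVE;
-- Then rebuild each index, e.g.:
-- ALTER INDEX big_table_pk REBUILD;
```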
One of the commonest complaints about bulk inserts and bulk deletes on the OTN database forum used to be the amount of time spent on “db file sequential read” waits; execution plans (in Oracle) don’t tell us anything about the overheads of index maintenance, so it’s very easy to forget that a big delete can do a lot of slow, random I/O. (It’s interesting to note that plans in SQL Server do tell you which indexes you have to maintain as you delete.)