Datapump Export suffering from Oracle Row Migration

Recently, I was troubleshooting a Datapump export duration problem. Over a period of 18 months, the duration of a multi-table export had increased dramatically. A quick analysis showed that the export duration was dominated by one big table. Exporting this 50 GB table with no BLOB/CLOB/LONG datatypes took more than 4 hours on a modern system, whereas it should not take more than 10-15 minutes. The system was performing ONLY single-block I/O requests (db file sequential read). I found this strange and started investigating.

Beginning with 11g, you can enable sql_trace (10046) for datapump with the new syntax:

ALTER SYSTEM SET EVENTS 'sql_trace {process: pname = dw | pname = dm} level=12';
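
The tracing can be switched off again with the same filter syntax (a sketch; verify the exact event syntax against your release):

ALTER SYSTEM SET EVENTS 'sql_trace {process: pname = dw | pname = dm} off';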

The trace showed that there were dozens of consecutive single-block I/O requests against the SAME data block:

WAIT #47053877709656: nam='db file sequential read' ela= 344 file#=41 block#=2556595 blocks=1 obj#=120874 tim=1347023848770469
WAIT #47053877709656: nam='db file sequential read' ela= 6 file#=41 block#=2556595 blocks=1 obj#=120874 tim=1347023848770504
WAIT #47053877709656: nam='db file sequential read' ela= 5 file#=41 block#=2556595 blocks=1 obj#=120874 tim=1347023848770526
WAIT #47053877709656: nam='db file sequential read' ela= 5 file#=41 block#=2556595 blocks=1 obj#=120874 tim=1347023848770548
WAIT #47053877709656: nam='db file sequential read' ela= 4 file#=41 block#=2556595 blocks=1 obj#=120874 tim=1347023848770570
WAIT #47053877709656: nam='db file sequential read' ela= 5 file#=41 block#=2556595 blocks=1 obj#=120874 tim=1347023848770593
WAIT #47053877709656: nam='db file sequential read' ela= 5 file#=41 block#=2556595 blocks=1 obj#=120874 tim=1347023848770617
WAIT #47053877709656: nam='db file sequential read' ela= 4 file#=41 block#=2556595 blocks=1 obj#=120874 tim=1347023848770639

No wonder this does not perform well, right? Next, I took a block dump of this interesting block from file 41 / block 2556595.

SELECT * FROM dba_extents WHERE file_id = 41 AND 2556595 BETWEEN block_id AND block_id+blocks-1;
ALTER SYSTEM DUMP DATAFILE 41 BLOCK 2556595;
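
The block dump is written to the session trace file; on 11g, its location can be queried from the session that issued the dump:

SELECT value FROM v$diag_info WHERE name = 'Default Trace File';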

In the resulting trace file, I verified that I had dumped the block from the correct table segment (hex 0x1d82a is decimal 120874):

buffer tsn: 15 rdba: 0x0a6702b3 (41/2556595)
frmt: 0x02 chkval: 0x57f3 type: 0x06=trans data
seg/obj: 0x1d82a csc: 0x0c.1c3abc94 itc: 20 flg: E typ: 1 - data
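
As a cross-check, the seg/obj value from the dump can be mapped back to the segment via dba_objects (120874 is the decimal form of 0x1d82a):

SELECT owner, object_name, object_type
FROM dba_objects
WHERE data_object_id = 120874;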

Then I had a closer look at the flag bytes (fb) of the individual row pieces. The relevant flags are:

  • H: Head Piece
  • F: First Piece
  • L: Last Piece

Normally, “--H-FL--” is shown, indicating that the whole row is stored in a single row piece in that block. In this block dump, you can see dozens of rows with “----FL--”, which means the head row piece is located elsewhere, and there are no head-piece-only row pieces (“--H-----”). This shows that the segment has suffered from heavy row migration, probably caused by adding columns after the table was created.

grep "^tl: " SID_ora_999.trc
tl: 97 fb: ----FL-- lb: 0x0 cc: 20
tl: 88 fb: --H-FL-- lb: 0x0 cc: 20
tl: 97 fb: ----FL-- lb: 0x0 cc: 20
tl: 88 fb: --H-FL-- lb: 0x0 cc: 20
tl: 98 fb: ----FL-- lb: 0x2 cc: 20
tl: 89 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 89 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 89 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 89 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 89 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 89 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 89 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 90 fb: --H-FL-- lb: 0x2 cc: 20
tl: 99 fb: ----FL-- lb: 0x2 cc: 20
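
Independent of block dumps, migrated rows can also be counted with ANALYZE ... LIST CHAINED ROWS, which reports migrated as well as truly chained rows (a sketch; the CHAINED_ROWS table is created by @?/rdbms/admin/utlchain.sql, and the table name is a placeholder):

ANALYZE TABLE scott.big_table LIST CHAINED ROWS INTO chained_rows;
SELECT count(*) FROM chained_rows WHERE table_name = 'BIG_TABLE';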

With Datapump, there are two different access methods: EXTERNAL_TABLE and DIRECT_PATH. Usually, the Datapump utility decides on its own which method to use. It turned out that with EXTERNAL_TABLE, the table export takes only 10 minutes and does not perform these single-block I/O requests; the problem only appears with DIRECT_PATH.
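
The access method can also be forced explicitly with the ACCESS_METHOD parameter (undocumented in 11g, documented in later releases; directory, dump file, and table names below are placeholders):

expdp system DIRECTORY=dp_dir DUMPFILE=big_table.dmp LOGFILE=big_table.log TABLES=scott.big_table ACCESS_METHOD=EXTERNAL_TABLE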

So, the next step I recommended was to reorganize the table to get rid of the row migration and to evaluate whether PCTFREE should be increased. Afterwards, the export duration was back to normal.
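
A minimal sketch of such a reorganization, assuming a maintenance window (object names and the PCTFREE value are placeholders to be derived from the actual row growth):

-- rewrite all rows compactly and reserve more room for future row growth
ALTER TABLE scott.big_table MOVE PCTFREE 20;
-- MOVE changes all rowids, leaving the indexes UNUSABLE, so rebuild them
ALTER INDEX scott.big_table_pk REBUILD;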
