ASSM Problem with too low PCTFREE

During the Hotsos Symposium 2009 Training Day, Jonathan Lewis presented a problem that still shows up on current 10g/11g databases. What is especially interesting is how this issue can be diagnosed. I reproduced the problem on 11.1.0.7 and provide the steps below so you can verify it yourself. The problem is best demonstrated by comparing the response time of the same update statement against a table in an 8k-blocksize tablespace and a table in a 16k-blocksize tablespace.
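
If you want to repeat the listings below, two small preparations are assumed on my side (they are not part of the original test case): SQL*Plus timing has to be switched on to get the elapsed times, and on a database with an 8k default blocksize a 16k buffer cache has to be configured before the 16k tablespace can be created. A sketch of that setup:

    SQL> REM show the elapsed time of each statement
    SQL> SET TIMING ON
    SQL> REM reserve a buffer cache for 16k blocks (the 64M value is arbitrary)
    SQL> ALTER SYSTEM SET db_16k_cache_size = 64M;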

  • 8k Blocksize

    SQL> DROP TABLE t1_8k purge;
     
    SQL> CREATE TABLESPACE DEMO8K datafile SIZE 128M blocksize 8192;
     
    TABLESPACE created.
     
    Elapsed: 00:00:07.35
     
    SQL> CREATE TABLE mdecker.t1_8k
    (n1 NUMBER,
     n2 NUMBER)
     TABLESPACE DEMO8K;
     
    TABLE created.
     
    Elapsed: 00:00:00.02
     
    SQL> INSERT INTO t1_8k
    SELECT TRUNC(dbms_random.VALUE(10000000,100000000)) n1,
               TO_NUMBER(NULL) AS n2
        FROM dual
    CONNECT BY LEVEL <= 500000
    /
     
     
    500000 ROWS created.
     
    Elapsed: 00:00:06.89
     
    SQL> BEGIN dbms_stats.gather_table_stats(
            ownname => 'MDECKER',
            tabname => 'T1_8K',
            estimate_percent => 100);
    END;
    /
     
    PL/SQL PROCEDURE successfully completed.
     
    Elapsed: 00:00:02.13
     
    SQL> SELECT num_rows,blocks FROM dba_tables WHERE table_name = 'T1_8K';
     
      NUM_ROWS     BLOCKS
    ---------- ----------
        500000        874
     
    Elapsed: 00:00:00.15
    SQL> UPDATE t1_8k SET n2 = n1;
     
    500000 ROWS updated.
     
    Elapsed: 00:01:09.04
    SQL> COMMIT;

  • 16k Blocksize

    SQL> CREATE TABLESPACE DEMO16K datafile SIZE 128M blocksize 16384;
     
    TABLESPACE created.
     
    Elapsed: 00:00:14.75
     
    SQL> CREATE TABLE mdecker.t1_16k
    (n1 NUMBER,
     n2 NUMBER)
     TABLESPACE DEMO16K;
     
    TABLE created.
     
    Elapsed: 00:00:00.03
     
    SQL> INSERT INTO t1_16k
    SELECT TRUNC(dbms_random.VALUE(10000000,100000000)) n1,
               TO_NUMBER(NULL) AS n2
        FROM dual
    CONNECT BY LEVEL <= 500000
    /  
     
     
    500000 ROWS created.
     
    Elapsed: 00:00:05.51
    SQL> BEGIN dbms_stats.gather_table_stats(
            ownname => 'MDECKER',
            tabname => 'T1_16K',
            estimate_percent => 100);
    END;
    /  
     
    PL/SQL PROCEDURE successfully completed.
     
    Elapsed: 00:00:01.78
    SQL> SELECT num_rows,blocks FROM dba_tables WHERE table_name = 'T1_16K';
     
      NUM_ROWS     BLOCKS
    ---------- ----------
        500000        436
     
    Elapsed: 00:00:00.07
     
    SQL>  UPDATE t1_16k SET n2 = n1;
     
    500000 ROWS updated.
     
    Elapsed: 00:20:20.89

    As you can see, the update of the table in the 8k tablespace took around 69 seconds, whereas the same update against the table in the 16k tablespace took more than 20 minutes. When you repeatedly execute oradebug short_stack against the session running the 16k update, the stack trace looks almost identical every time: a lot of time is spent in the kernel functions ktspfsrch() and ktspscan_bmb(), i.e. in ASSM's search for blocks with free space.

    SQL> oradebug setospid 23668
    Oracle pid: 18, Unix process pid: 23668, image: oracle@ora-vm1.intra (TNS V1-V3)
    SQL> oradebug short_stack
    .....ktspfsrch()+559<-ktspscan_bmb()+315 .....
    SQL> oradebug short_stack
    .....ktspfsrch()+559<-ktspscan_bmb()+315 .....
    SQL> oradebug short_stack
    .....ktspfsrch()+559<-ktspscan_bmb()+315 .....

    It is important to understand that the problem is not necessarily related to the blocksize itself, but to the PCTFREE value: with the default PCTFREE of 10, the blocks do not retain enough free space for the row growth caused by the update (n2 changes from NULL to a number, so every row gets longer). More information about this topic can be found here:

    http://structureddata.org/files/jl_test_case.html
    http://structureddata.org/2008/09/08/understanding-performance/
    http://www.oraclealchemist.com/oracle/hey-guys-does-size-matter/
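
    Since the root cause is the PCTFREE setting, you can check the value in effect for the two test tables and, if you like, repeat the test with more free space reserved per block. The following is only a sketch of mine: the table name t1_16k_pf40 and the PCTFREE value of 40 are illustrative choices, not taken from the original test case.

    SQL> SELECT table_name, pct_free
           FROM dba_tables
          WHERE table_name IN ('T1_8K', 'T1_16K');

    SQL> CREATE TABLE mdecker.t1_16k_pf40
         (n1 NUMBER,
          n2 NUMBER)
         PCTFREE 40
         TABLESPACE DEMO16K;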
