Oracle introduces Patch Set Updates (PSU) for Database

On July 14th Oracle announced on MetaLink the release of a new patching strategy for the Oracle Database.

The new Patch Set Updates (PSU) are cumulative patches containing recommended bug fixes. They will be provided on the same quarterly basis as the Critical Patch Update (CPU), so the release months are January, April, July and October. The PSU level is reflected in a fifth field appended to the release version number: the first PSU for a release, then the second PSU, and so on.

As described in the release information, the PSU patch (Patch 8576156) contains all the recommended patch bundles up to July 2009 (Generic, CRS, RAC, Services, DataGuard) as well as the Critical Patch Update July 2009. In addition, five critical bug fixes are included. A current version of OPatch is required for installation of the PSU, and the PSU is rolling installable on RAC environments without downtime.

Later PSU patches can be installed either on the base release or on top of any previous PSU. For example, the third PSU can be applied on top of the base release, on top of the first PSU, or on top of the second PSU.

As already mentioned, customers have the option to install security fixes only, by applying the quarterly Critical Patch Update, or security plus non-security bug fixes, by applying the Patch Set Update (PSU). Since the PSU already contains the Critical Patch Update July 2009, the documentation recommends installing future security fixes through PSU patches rather than CPU patches.

Further information can be found in these MetaLink Notes:

854428.1 – Intro to Patch Set Updates (PSU)
850471.1 – Oracle Announces First Patch Set Update For Oracle Database Release 10.2
8576156.8 – Bug 8576156 Patch Set Update (PSU)
854473.1 – Known Issues with this Patch Set Update

Book review: HOWTO Secure and Audit Oracle 10g and 11g

I have added a new book review to my bookshelf: HOWTO Secure and Audit Oracle 10g and 11g – Ron Ben Natan.

Book Review: Expert Oracle JDBC – J.R. Menon

I have added a new book review to my bookshelf: Expert Oracle JDBC – J.R. Menon.

Reclaimable Space Report – Segment Advisor

Today, I tried to get a nice, clean report about objects with reclaimable space from the Segment Advisor. Displaying the list in Enterprise Manager Grid/DB Control is no problem, but it is not so easy in SQL*Plus.

This is what I ended up with:

SELECT segment_owner,
       segment_name,
       round(allocated_space/1024/1024)   AS alloc_mb,
       round(used_space/1024/1024)        AS used_mb,
       round(reclaimable_space/1024/1024) AS reclaim_mb,
       (1-round(used_space/allocated_space,2))*100 AS reclaim_pct
  FROM TABLE(dbms_space.asa_recommendations('TRUE', 'TRUE', 'FALSE'))
 WHERE tablespace_name IN ('TS_DATA')
   AND segment_type = 'TABLE'
   AND segment_owner LIKE '%'
   AND segment_name LIKE '%'
   AND (reclaimable_space >= 1000000
        OR (1-round(used_space/allocated_space,2))*100 > 30)
 ORDER BY reclaimable_space DESC;

Default 10gR2 RAC TNS Configuration can cause TCP Timeouts for applications

The default RAC installation normally does not set the “local_listener” init.ora parameter. If the listener is running on port 1521, the database does not need the parameter in order to find and register with the local TNS listener process. However, if you have *not* set local_listener, the database registers with the listener using the physical IP address instead of the virtual (VIP) address.

You can determine whether this happens by taking a look at the “lsnrctl serv” output on your RAC nodes:

Service "S_APP" has 1 instance(s).
  Instance "MDDB1", status READY, has 2 handler(s) for this service...
      "DEDICATED" established:0 refused:0 state:ready
         REMOTE SERVER
         (ADDRESS=(PROTOCOL=TCP)(HOST=ora-vm1.intra)(PORT=1521))

Instead of ora-vm1.intra, this should be ora-vm1-vip.intra.

Why should I care?

If you use the default configuration, you are using the “REMOTE_LISTENER” parameter and therefore server-side connect-time load balancing. This means that the listeners on all nodes receive load information from all instances on all nodes and can redirect connections to the least loaded instance, even if that instance is on another node. But the connect string they then send back to the client contains the physical IP address instead of the virtual one.

In case of a node crash, kernel panic, etc., a physical IP address does not fail over, so the client has to wait for a TCP timeout before the failure is detected. A VIP address, in contrast, fails over to a surviving node, and the client gets an immediate connection refusal instead of hanging.



The fix is to create TNS aliases pointing to the VIP addresses and set local_listener on each instance accordingly:

LISTENER_MDDB1 =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ora-vm1-vip.intra)(PORT = 1521))

LISTENER_MDDB2 =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ora-vm2-vip.intra)(PORT = 1521))

alter system set local_listener = 'LISTENER_MDDB1' sid='MDDB1';
alter system set local_listener = 'LISTENER_MDDB2' sid='MDDB2';

Your experience with RAC Dynamic Remastering (DRM) in 10gR2?

One of my customers is having severe RAC performance issues, which have appeared a dozen times so far. Each time, the performance impact lasted around 10 minutes and essentially caused a hang of the application. ASH investigation revealed that the time frame of the performance issues exactly matches a DRM operation on the biggest segment of the database. During the problematic period, there are 20-50 instead of 5-10 active sessions, and they are mostly waiting for gc related events: “gc buffer busy”, “gc cr block busy”, “gc cr block 2-way”, “gc current block 2-way”, “gc current request”, “gc current grant busy”, etc.

In addition, there is a single session waiting on “kjbdrmcvtq lmon drm quiesce: ping completion” (on instance 1) and 1-3 sessions waiting on “gc remaster” (on instance 2). The cur_obj# of the sessions waiting on “gc remaster” points to the segment being remastered.
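To confirm which object is affected and how often it has been remastered, the mastering information can be queried from v$gcspfmaster_info; a sketch (the object_id is a hypothetical example):

```sql
-- Current/previous master instance and remaster count for one object
SELECT object_id, current_master, previous_master, remaster_cnt
  FROM v$gcspfmaster_info
 WHERE object_id = 56789;
```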

Does anybody have any experience with DRM problems with 10gR2 on Linux Itanium?

I know that it is possible to deactivate DRM, but usually it should be beneficial to have it enabled. I could not find any reports of performance impact during DRM operations on MetaLink. Support is involved, but clueless so far.



RAC Deadlock Detection not working on Linux Itanium

We recently experienced a big issue when the production database hung. It turned out that the database had deadlocked, but the GES (Global Enqueue Service) did not detect the deadlock situation, so all the sessions were waiting on “enq: TX - row lock contention”.

We could provide a reproducible testcase and it turned out to be bug 7014855. The bug is platform specific to Linux Itanium port and a patch is available.
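For illustration, a minimal two-session testcase of this kind looks as follows (table and values are hypothetical; run each session on a different instance):

```sql
-- Session 1 (instance 1):
UPDATE t SET val = 10 WHERE id = 1;   -- locks row 1

-- Session 2 (instance 2):
UPDATE t SET val = 20 WHERE id = 2;   -- locks row 2
UPDATE t SET val = 20 WHERE id = 1;   -- blocks on session 1

-- Session 1 again:
UPDATE t SET val = 10 WHERE id = 2;   -- closes the cycle
-- With working GES deadlock detection, one session should receive
-- ORA-00060 after a short delay; with bug 7014855 both sessions
-- hang on "enq: TX - row lock contention".
```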

Monitoring Index Usage not reliable

I recently had to analyze a system with respect to index utilization. The application has dozens of indexes, and we have to find out which ones are really needed and which ones are candidates for removal.

There is a nice feature called Index Monitoring, which switches a flag when the index is used.
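Monitoring is switched on and off per index; a minimal sketch (schema and index names are hypothetical):

```sql
-- Start collecting usage information for one index
ALTER INDEX scott.emp_name_ix MONITORING USAGE;

-- ... let a representative workload run ...

-- Stop monitoring again
ALTER INDEX scott.emp_name_ix NOMONITORING USAGE;
```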


Internally, a recursive SQL statement is executed when a SQL statement using the index is parsed:

UPDATE object_usage SET flags=flags+1 WHERE obj#=:1 AND bitand(flags, 1)=0;

To check whether the index was used, you can query the view v$object_usage. Unfortunately, the view definition contains a WHERE clause that lists only indexes of the current schema:

   WHERE    io.owner# = userenv('SCHEMAID')

Therefore, you can create a new view, or simply omit the restriction from the WHERE clause:

           SELECT u.name AS owner, io.name AS index_name, t.name AS table_name,
                  decode(bitand(i.flags, 65536), 0, 'NO', 'YES') AS monitoring,
                  decode(bitand(ou.flags, 1), 0, 'NO', 'YES') AS used
             FROM sys.user$ u, sys.obj$ io, sys.obj$ t, sys.ind$ i, sys.object_usage ou
            WHERE i.obj# = ou.obj#
              AND io.obj# = ou.obj#
              AND t.obj# = i.bo#
              AND u.user# = io.owner#

However, what is less known is that DBMS_STATS can in some cases manipulate these flags. The issue is tracked as “Bug 6798910 - DBMS_STATS or EXPLAIN PLAN trigger index usage monitoring”. If you are on an affected version and gather stats manually with “CASCADE=>TRUE”, the monitoring flag is set to “YES” even though the application never used the index. Obviously, in this case you cannot identify candidate indexes for removal.
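Until the fix is applied, one possible workaround (a sketch with hypothetical names) is to restart monitoring right after each manual stats run, since re-enabling monitoring resets the USED flag and the monitoring start time:

```sql
-- Gathering stats with CASCADE=>TRUE falsely marks indexes as used
EXEC dbms_stats.gather_table_stats('SCOTT', 'EMP', cascade => TRUE);

-- Re-arm monitoring so the flag set by the stats run is cleared
ALTER INDEX scott.emp_name_ix MONITORING USAGE;
```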

Diagnosing “cursor: pin S wait on X”

I came across a metalink note (Note 786507.1) which describes a very short way to identify the blocker of a session waiting on mutex “cursor: pin S wait on X”. Previously, I thought it could only be analyzed with system state dumps, but the note describes that parameter P2RAW of wait event “cursor: pin S wait on X” contains the blocking session id and the refcount. For 32bit environments the field contains 4 bytes (2 bytes for session id, 2 bytes for refcount). On 64 bit environments, the field contains 8 bytes (4 vs. 4).

select p2raw from v$session where event = 'cursor: pin S wait on X';

The first 2 or 4 bytes of p2raw, converted to decimal, give the blocking session id (SID).
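On a 64-bit system, the conversion can be done directly in SQL; a sketch (for 32-bit, take the first 4 hex digits instead of 8):

```sql
-- The upper half of P2RAW holds the blocking session id
SELECT sid AS waiter,
       to_number(substr(rawtohex(p2raw), 1, 8), 'XXXXXXXX') AS blocker_sid
  FROM v$session
 WHERE event = 'cursor: pin S wait on X';
```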

Oracle RAC bug when adding/dropping redo log groups

On the oracle-l list, there was a post about a nasty bug in Oracle RAC which can cause LGWR errors, instance failures and redo log corruptions.

The bug occurs if you add or remove redo log groups on a cluster database while some of its instances are shut down.
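For illustration, the statements in question are ordinary redo log maintenance commands like these (group numbers, paths and sizes are examples); until the patch is applied, the safe approach would be to run them only while all instances are started:

```sql
-- Only run with ALL RAC instances up (see bug 6786022)
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 5
  ('+DATA/mddb/onlinelog/redo05.log') SIZE 512M;
ALTER DATABASE DROP LOGFILE GROUP 3;
```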

More information about this bug, which is tracked with Bug ID 6786022, can be found on MetaLink. A patch for Linux x86-64 is available.