XPath vs DOM Comparison for XML Parsing

This is a short comparison of XPath and DOM. I hope it helps you determine which method is better suited to your situation.

Complexity: XPath is much simpler than DOM. XPath can also be used directly in SQL, while with DOM you must write a stored procedure.
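As a minimal sketch of using XPath directly in SQL (the sample document here is made up for illustration; extractvalue is deprecated in newer Oracle releases in favor of XMLTABLE, but matches the era of this post):

```sql
-- Evaluate an XPath expression directly in a query;
-- the XML literal is an illustrative example document.
select extractvalue(
         xmltype('<order><id>42</id></order>'),
         '/order/id') as order_id
from   dual;
```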

Flexibility: DOM is much more flexible than XPath. You can read an entire XML document without knowing anything about its structure upfront; with XPath, knowledge of the document design is required.

Speed: In our test on Linux, DOM was three times as fast as XPath when parsing 500,000 XML documents.


Change Start of Week for Saudi Arabia

In the Kingdom, the week starts on Saturday (not Sunday as in the USA).

There are two ways to achieve this in Oracle:


First Method:

alter session set nls_territory = 'SAUDI ARABIA';

select sysdate,
       to_char(sysdate, 'Day') "Day Name",
       to_char(sysdate, 'D')   "Day Number",
       trunc(sysdate, 'D')     "Beginning of Week"
from   dual;


Sysdate is Feb 23, 2016, which is a Tuesday and the 4th day of the week. In the AMERICA territory, Tuesday would have been the 3rd day of the week.


The beginning of the week is Saturday, Feb 20, 2016. In the AMERICA territory, the date would have been Sunday, Feb 21, 2016.


Second Method:

select sysdate, next_day(sysdate - 7, 'SATURDAY') from dual;




This works because next_day returns the first occurrence of the named day strictly after the given date, so next_day(sysdate - 7, 'SATURDAY') is the most recent Saturday (or today, if today is Saturday). Notice there is no "alter session" here, which makes this method a bit easier.


Hope this helps

Storing XML Documents in XMLTYPE or VARCHAR Column Type

You can store XML in your table using the XMLTYPE column data type or the good old VARCHAR2, provided the XML document is no more than 4,000 characters.

I made a quick comparison between both options to help you assess which is the better choice for your situation.

Test environment: Oracle, 64-bit running on Red Hat Linux 64-bit.

Test case setup: I created two tables, each with a single column: one with a VARCHAR2(4000) column, the other with an XMLTYPE column.
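The setup can be sketched as follows (the table names are my own placeholders, not from the original tests):

```sql
-- One single-column table per storage option.
create table xml_as_varchar (doc varchar2(4000));
create table xml_as_xmltype (doc xmltype);

-- A well-formed document inserts into both; a malformed one
-- is rejected only by the XMLTYPE table (see the syntax-checking
-- test below).
insert into xml_as_varchar values ('<order><id>1</id></order>');
insert into xml_as_xmltype values (xmltype('<order><id>1</id></order>'));
```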

Test: Space utilization
Case: Insert 65,000 XML rows into both tables.
Result: VARCHAR table size: 520 MB. XMLTYPE table size: 168 MB. Note that an additional LOB segment and a LOB index segment are created for the XMLTYPE table.
Comment: Storing XML in an XMLTYPE column saved about 3X the space compared to the VARCHAR table. That is expected because XMLTYPE is stored in binary format.

Test: Syntax checking
Case: Insert data with incorrect XML format.
Result: The XMLTYPE table rejected the row, while the VARCHAR table accepted it.
Comment: This might be viewed as a positive or a negative point depending on your situation.

Test: Insert performance
Case: Insert 4,000 rows sequentially, with a single commit at the end.
Result: VARCHAR table: 0.3 sec. XMLTYPE table: 1.3 sec.
Comment: The VARCHAR table was 3X faster than XMLTYPE. Neither table had indexes.

Test: Querying based on a value within the XML document
Case: Create an index on the XMLTYPE table with the "extractvalue" option; no index on the VARCHAR table.
Result: The XMLTYPE table was always faster than the VARCHAR table in retrieving data.
Comment: There is much more flexibility and performance to be gained retrieving data with an XMLTYPE column than with a VARCHAR table.
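As a sketch of the kind of index used in the querying test (table name, column name, and XPath are my own placeholders; extractvalue-based function indexes are the pre-12c approach):

```sql
-- Function-based index on a value inside the stored XML,
-- assuming a table xml_docs with an XMLTYPE column named doc.
create index xml_docs_id_ix
  on xml_docs (extractvalue(doc, '/order/id'));

-- Queries filtering on the same expression can then use the index.
select *
from   xml_docs
where  extractvalue(doc, '/order/id') = '42';
```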

Hope this helps


ORA-00821 on RAC with ASM


The database went down and you can't bring it up because of this unwelcome error:

ORA-00821: Specified value of sga_target M is too small


Further, "startup nomount" still gives the same error, and you can't alter the spfile to adjust the value that caused the violation.


Here are the steps I follow to fix this issue. They have been tested on Oracle RAC with ASM, 64-bit, on Red Hat Linux AS v5.

Summarized steps to recover:

  1. Create pfile from spfile
  2. Edit pfile and adjust value causing this issue
  3. Start the database up with nomount option
  4. Create spfile from pfile
  5. Adjust ASM alias
  6. Bounce the database
  7. Remove old spfile from ASM


These steps should be performed on a single node only.


1)      Create pfile from spfile

create pfile='/u02/backups/dbatst/initdbatst.ora.new' from spfile='+data01/dbatst/spfiledbatst.ora';


For the pfile, you need to specify a location; otherwise it will overwrite the pfile in $ORACLE_HOME/dbs, which will further complicate your recovery.


For the spfile, you need to find where the file is on ASM. Remember, your database is not even mounted, so Oracle has no idea where the database's spfile is located.


2)      Edit pfile

Now change the value causing the violation.


3)    Start the database up with nomount option

startup nomount pfile='/u02/backups/dbatst/initdbatst.ora.new'

 This should bring up the database in nomount stage.


4)      Create spfile from pfile

create spfile='+DATA01' from pfile='/u02/backups/dbatst/initdbatst.ora.new';

Notice that in the nomount stage, Oracle now recognizes where to put the spfile, so you don't need to specify the full ASM path.

spfile Location: <+dg>/database name/parameterfile

In my situation it will be ‘+DATA01/dbatst/parameterfile’

Also note that Oracle will create a new spfile without overwriting the old one. You should delete the old spfile as indicated later.


5)      Adjust ASM alias

An "alias" in ASM is like a link created with the "ln" operating-system command.


The location of spfile alias on ASM is in the pfile under $ORACLE_HOME/dbs.

The spfile alias on ASM is pointing to the old spfile file, not the new one.

To fix this, you need to delete the old alias and create a new one pointing at the new spfile.


Here are the commands (run from asmcmd):

cd to the alias location; it should be '+dg/dbname/spfiledbname.ora'

ls -l        (just to confirm it is an alias)

Delete the old alias:

rmalias spfiledbatst.ora

Recreate the alias, pointing at the new spfile:

mkalias +DATA01/DBATST/PARAMETERFILE/spfile.535.831221333 spfiledbatst.ora


6)      Bounce the database

Use srvctl utility to bounce the database.
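A minimal sketch of the bounce, assuming the database is named dbatst as in the examples above:

```
# Stop and start the database across all RAC instances with srvctl.
srvctl stop database -d dbatst
srvctl start database -d dbatst
```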

After the database comes up successfully, make sure it is using the spfile.

 From a sqlplus session:

      show parameter spfile


7)      Delete old spfile

Just to keep things tidy, remove the old spfile from <+dg>/database name/parameterfile



Hazem Ameen
Senior Oracle DBA



How to Find an Oracle DBA Job in Saudi Arabia

I receive several emails through this blog or LinkedIn asking how to find an Oracle DBA job in Saudi Arabia while residing in another country.

Before you start your job hunt, keep in mind that most companies can get by with local talent, which has grown considerably over the last several years due to Oracle's popularity. So you must have a unique set of skills that a company needs immediately before it will relocate you.

The best and fastest way is good old-fashioned networking (talking to people who know people). This way your resume will make it directly to department heads and decision makers. You will have an edge because you come recommended by your connection, and there is a chance the company will not even advertise the opening until they have talked to you first. Even if the company doesn't have an immediate opening, they will keep you in mind for future ones.

A less effective way is responding to ads in newspapers or on recruiting websites. The company will be wading through applicants' resumes, and yours will be one among many (hopefully it stands out).

The least effective way is posting your resume on a recruiting website or LinkedIn, or applying through a company's website and waiting to be contacted. This rarely produces an immediate response. Your resume will eventually age on recruiting websites and be bypassed in searches. As for a company's own website, if the inbox is not already full, your resume will rarely make its way to the IT department.

Hope this helps

How to Select BLOB Chunk Size

Chunk size is a parameter you set on BLOB columns only. Setting it correctly can improve BLOB read performance and/or the storage used by the BLOB segment.

Here are some scenarios to help you determine which size is best for you:

If your BLOB size is less than 4K and you are using the inline option (which you should always use regardless of your BLOB size), then the chunk size parameter doesn't apply: the LOB segment and LOB index sizes will not change because the data is stored within the data block.

If your BLOB size is bigger than 4K and less than or equal to 8K, set chunk size to 8K. Best of both worlds: you should get the best performance and space utilization.

If your BLOB size is bigger than 8k and you care about good performance of BLOB (read/write) over space utilization, then set chunk size to 32K.

If your BLOB size is bigger than 8k and you care about space utilization over performance, then set chunk size to 8K.

If your BLOB size is unknown and you only care about performance (read/write), then set chunk size to 32K.

If your BLOB size is unknown  and you only care about saving space, then set chunk size to 8K.
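A sketch of setting the chunk size at table creation time (the table and column names are placeholders of my own):

```sql
-- BLOB stored inline for small values, with a 32K chunk size
-- for values stored out of line.
create table docs (
  id   number,
  body blob
)
lob (body) store as (
  enable storage in row
  chunk 32768
);
```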


Note: You might wonder why not always aim for speed over storage; after all, storage is cheap. I agree with that premise, but if you are storing TBs of BLOBs, as we are, and the data is not accessed much, you might actually start favoring storage savings over performance gains.


These findings are based on tests we ran in a 64-bit environment.


Reading BLOBs from Database Tests

The first test case reads a 35 MB BLOB from the database; notice about a 40% increase in speed with a 32K chunk size over 8K.

The second test reads a 4 MB BLOB from the database; again, notice about a 60% increase in speed with 32K over 8K.

Read Performance Tests, 32K vs 8K
(Table: chunk size vs. number of I/O reads and time in msec, for the 35 MB and 4 MB reads; the values were not preserved in this copy.)

Writing BLOBs to Database Test

The write test inserts a 10K BLOB into the database, repeated 10 times. Notice the LOB segment is about 2.5 times larger with a 32K chunk size than with 8K.

Space Utilization Test, 32K vs 8K
(Table: chunk size vs. LOB segment size for the 10K BLOB inserted 10 times; the values were not preserved in this copy.)

Hazem Ameen

Senior Oracle DBA

Alter Table Shrink Taking too Long

When you execute "alter table <table> shrink space", there is no direct way to tell how long it will take. A helpful aid in determining when the shrink will finish is the dbms_space.space_usage procedure, which shows block activity between the LHWM and the HHWM. I wrote a pipelined function to display its output, and you can find it here. You'll appreciate this little pipelined function instead of writing so many dbms_output.put_line calls.

As of 10g, Oracle introduced LHWM and HHWM.

HHWM (High High Water Mark) is the same as the HWM in prior versions: all blocks above this mark have not been formatted.

LHWM (Low High Water Mark): all blocks below this mark have been formatted.

Between the LHWM and HHWM there are 5 categories of blocks:

  1. Unformatted
  2. 0 to 25 % empty
  3. 25% to 50% empty
  4. 50% to 75% empty
  5. 75% to 100% empty

You can use dbms_space.space_usage to report on the area between the LHWM and HHWM.
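A minimal sketch of calling it for a table (the table name is a placeholder; FS1 through FS4 are the free-space buckets listed above):

```sql
declare
  v_unf_blocks  number; v_unf_bytes  number;
  v_fs1_blocks  number; v_fs1_bytes  number;  -- 0-25% free
  v_fs2_blocks  number; v_fs2_bytes  number;  -- 25-50% free
  v_fs3_blocks  number; v_fs3_bytes  number;  -- 50-75% free
  v_fs4_blocks  number; v_fs4_bytes  number;  -- 75-100% free
  v_full_blocks number; v_full_bytes number;
begin
  dbms_space.space_usage(
    segment_owner      => user,
    segment_name       => 'MY_TABLE',   -- placeholder table name
    segment_type       => 'TABLE',
    unformatted_blocks => v_unf_blocks,
    unformatted_bytes  => v_unf_bytes,
    fs1_blocks => v_fs1_blocks, fs1_bytes => v_fs1_bytes,
    fs2_blocks => v_fs2_blocks, fs2_bytes => v_fs2_bytes,
    fs3_blocks => v_fs3_blocks, fs3_bytes => v_fs3_bytes,
    fs4_blocks => v_fs4_blocks, fs4_bytes => v_fs4_bytes,
    full_blocks => v_full_blocks,
    full_bytes  => v_full_bytes);
  dbms_output.put_line('Full blocks: ' || v_full_blocks);
end;
/
```

Rerun it while the shrink is executing and watch the bucket counts move as described below.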

What to look for?

If you are shrinking a table, you will see the following behavior:

Unformatted blocks: the number of blocks will not change.
0 to 25% empty blocks: the number will decrease until it reaches zero.
25% to 50% empty blocks: the number will decrease until it reaches zero.
50% to 75% empty blocks: the number will decrease until it reaches zero.
75% to 100% empty blocks: the number will increase.
Full blocks: this is the number of blocks below the LHWM (the actual table size); it will increase as the other numbers decrease.

Shrinking an index behaves differently: full blocks decrease first while the other block categories increase, then full blocks increase again.