ORA-00821 on RAC with ASM

 

The database went down and you can’t bring it up because of this unwelcome error:

ORA-00821: Specified value of sga_target M is too small

 

Worse, “startup nomount” gives the same error, and you can’t alter the spfile to adjust the value that caused the violation.

 

Here are the steps I follow to fix this issue. They have been tested on Oracle RAC 11.2.0.3 (64-bit) with ASM on Red Hat Linux AS 5.

Summarized steps to recover:

  1. Create pfile from spfile
  2. Edit pfile and adjust value causing this issue
  3. Start the database up with nomount option
  4. Create spfile from pfile
  5. Adjust ASM alias
  6. Bounce the database
  7. Remove old spfile from ASM

 

These steps should be performed on a single node only.

 

1)      Create pfile from spfile

create pfile='/u02/backups/dbatst/initdbatst.ora.new' from spfile='+data01/dbatst/spfiledbatst.ora';

 

For the pfile, you need to specify a location; otherwise you will overwrite the pfile in $ORACLE_HOME/dbs, which will further complicate your recovery.

 

For the spfile, you need to find where the spfile lives on ASM. Remember, your database is not even mounted, so Oracle has no idea where the database’s spfile is located.
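If you don’t know that path, asmcmd’s find command can locate it. A sketch, run as the grid/ASM owner; the disk group name comes from this example and the numeric file suffix shown is hypothetical:

asmcmd
ASMCMD> find +data01 spfile*
+data01/DBATST/PARAMETERFILE/spfile.533.829990000
+data01/dbatst/spfiledbatst.ora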

 

2)      Edit pfile

Now change the value causing the violation.

 

3)    Start the database up with nomount option

startup nomount pfile='/u02/backups/dbatst/initdbatst.ora.new'

This should bring up the database in the nomount stage.

  

4)      Create spfile from pfile

create spfile='+DATA01' from pfile='/u02/backups/dbatst/initdbatst.ora.new';

Notice that in the nomount stage Oracle now recognizes where to put the spfile, so you don’t need to specify the full ASM path.

spfile location: <+dg>/<database name>/parameterfile

In my situation it will be ‘+DATA01/dbatst/parameterfile’

Also note that Oracle creates the new spfile without overwriting the old one. Delete the old spfile as shown in step 7.

 

5)      Adjust ASM alias

An “alias” in ASM is like a link created with the operating system “ln” command.

 

The location of the spfile alias on ASM is recorded in the pfile under $ORACLE_HOME/dbs.

The spfile alias on ASM still points to the old spfile, not the new one.

To fix this, you need to delete the old alias and create a new one pointing to the new spfile.

 

Here are the commands. The alias lives at ‘+dg/dbname/spfiledbname.ora’; the ls -l just confirms it is an alias.

asmcmd
ASMCMD> cd +DATA01/DBATST
ASMCMD> ls -l spfiledbatst.ora

Delete the old alias:

ASMCMD> rmalias spfiledbatst.ora

Recreate the alias against the new spfile:

ASMCMD> mkalias +DATA01/DBATST/PARAMETERFILE/spfile.535.831221333 spfiledbatst.ora

  

6)      Bounce the database

Use the srvctl utility to bounce the database.
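A sketch using this example’s database name (adjust to your environment):

srvctl stop database -d dbatst
srvctl start database -d dbatst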

After the database comes up successfully, make sure it is using the spfile.

 From a sqlplus session:

      show parameter spfile

 

7)      Delete old spfile

Just to keep things tidy, remove the old spfile from <+dg>/<database name>/parameterfile, for example:
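A sketch via asmcmd; the 535 file is the new spfile created in step 4, while the old file’s numeric suffix below is hypothetical, so run ls first and confirm which file the alias points to before removing anything:

asmcmd
ASMCMD> cd +DATA01/DBATST/PARAMETERFILE
ASMCMD> ls
spfile.533.829990000
spfile.535.831221333
ASMCMD> rm spfile.533.829990000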

 

 

Hazem Ameen
Senior Oracle DBA

 

 


How to Find an Oracle DBA Job in Saudi Arabia

I receive several emails through this blog or LinkedIn asking how to find an Oracle DBA job in Saudi Arabia while residing in another country.

Before you start your job hunt, keep in mind that most companies can get by with local talent, which has grown considerably over the last several years due to Oracle’s popularity. So you must have a unique set of skills a company needs immediately before it will relocate you.

The best and fastest way is good old-fashioned networking (talking to people who know people). This way your resume makes it directly to department heads and decision makers. You will have an edge because you come recommended by your connection, and there is a chance the company will not even advertise the job opening until they talk to you first. Even if the company doesn’t have an immediate opening, they will keep you in mind for future ones.

A less effective way is responding to ads in newspapers or on recruiting websites. The company will be sifting through applicants’ resumes and yours will be one of many (hope it stands out).

The least effective way is posting your resume on a recruiting website or LinkedIn, or applying through a company’s website and waiting to be contacted. This rarely produces an immediate response. Your resume will eventually age on recruiting websites and be bypassed in searches. As for a company’s website, if the inbox is not already full, your resume will rarely make its way to the IT department.

Hope this helps


How to Select BLOB Chunk Size

Chunk size is a parameter you set on BLOB columns only. Setting it correctly can improve BLOB read performance and/or the storage used by the BLOB segment.

Here are some scenarios to help you determine which size is best for you:

If your BLOB size is less than 4K and you are using the inline option (ENABLE STORAGE IN ROW, which you should always use regardless of your BLOB size), then the chunk size parameter doesn’t apply: the LOBSEGMENT and LOBINDEX sizes will not change because the data is saved within the data block.

If your BLOB size is bigger than 4K and less than or equal to 8K, set the chunk size to 8K. Best of both worlds: you get the best performance and space utilization.

If your BLOB size is bigger than 8K and you care about BLOB read/write performance more than space utilization, set the chunk size to 32K.

If your BLOB size is bigger than 8K and you care about space utilization more than performance, set the chunk size to 8K.

If your BLOB size is unknown and you only care about performance (read/write), set the chunk size to 32K.

If your BLOB size is unknown and you only care about saving space, set the chunk size to 8K.
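For reference, a sketch of setting the chunk size at creation time (table and column names are hypothetical; CHUNK must be a multiple of the database block size):

CREATE TABLE doc_store (
  id      NUMBER PRIMARY KEY,
  payload BLOB
)
LOB (payload) STORE AS (
  ENABLE STORAGE IN ROW  -- keep small BLOBs inline
  CHUNK 32768            -- 32K chunks favor read/write speed over space
);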

 

Note: You might wonder why not always aim for speed over storage; after all, storage is cheap. I agree with this premise, but if you are storing TBs of BLOBs like we are, and they are not accessed much, you might actually start favoring storage savings over performance gains.

 

These findings were based on tests we ran in a 10.2.0.5, 64-bit environment.

 

Reading BLOBs from Database Tests

The first test case reads a 35 MB BLOB from the database; notice the roughly 40% increase in speed with a 32K chunk size over 8K.

The second test reads a 4 MB BLOB from the database; again notice the roughly 60% increase in speed with a 32K chunk size over 8K.

Read Performance Tests: 32K vs 8K

BLOB Size   Chunk Size   Number of I/O Reads   Time (msec)
35 MB       32K          1074                  300
35 MB       8K           4295                  500
4 MB        32K          128                   0.047
4 MB        8K           509                   0.078

 

 

Writing BLOBs to Database Test

The write test inserts a 10K BLOB into the database, repeated 10 times. Notice that the LOB segment is about 2.5 times larger with a 32K chunk than with an 8K chunk.

Space Utilization Test: 32K vs 8K

BLOB Size       Chunk Size   LOB Segment Size (bytes)
10K, 10 times   32K          655360
10K, 10 times   8K           262144
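To check a LOB segment’s size yourself, a sketch (DOC_STORE is the hypothetical table from the earlier example):

SELECT l.column_name, l.segment_name, s.bytes
FROM   user_lobs l
JOIN   user_segments s ON s.segment_name = l.segment_name
WHERE  l.table_name = 'DOC_STORE';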

 

Hazem Ameen

Senior Oracle DBA


Alter Table Shrink Taking too Long

When you execute “ALTER TABLE … SHRINK SPACE”, there is no direct way to tell how long it will take. A helpful aid for determining when the shrink will finish is the dbms_space.space_usage procedure, which shows you block activity between the LHWM and HHWM. I wrote a pipelined function to view its output, and you can find it here. You’ll appreciate this little pipelined function instead of juggling so many dbms_output.put_line calls.
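For reference, a typical shrink sequence looks like this (table name hypothetical):

ALTER TABLE big_table ENABLE ROW MOVEMENT;
ALTER TABLE big_table SHRINK SPACE COMPACT;  -- compact rows only
ALTER TABLE big_table SHRINK SPACE;          -- compact and move the HWM down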

As of 10g, Oracle introduced the LHWM and HHWM.

The HHWM (High High Water Mark) is the same as the HWM in prior versions: all blocks above this mark have not been formatted.

The LHWM (Low High Water Mark) is the mark below which all blocks have been formatted.

Between the LHWM and HHWM there are 5 categories of blocks:

  1. Unformatted
  2. 0 to 25 % empty
  3. 25% to 50% empty
  4. 50% to 75% empty
  5. 75% to 100% empty

You can use dbms_space.space_usage to report on the area between the LHWM and HHWM.

What to look for?

If you are shrinking a table, you will see the following behavior:

Unformatted blocks: the number of blocks will not change.
0 to 25% empty blocks: the number will decrease until it becomes zero.
25% to 50% empty blocks: the number will decrease until it becomes zero.
50% to 75% empty blocks: the number will decrease until it becomes zero.
75% to 100% empty blocks: the number will increase.
Full blocks: the number of blocks below the LHWM (the actual table size); it will increase as the other numbers decrease.

Shrinking an index behaves differently: full blocks decrease first while the other block categories increase, then full blocks increase again.




Hazem


Oracle Pipelined Function for DBMS_SPACE.SPACE_USAGE

This is a pipelined function that shows dbms_space.space_usage output as a table instead of through dbms_output.

dbms_space.space_usage shows you the details of blocks between the LHWM (Low High Water Mark) and HHWM (High High Water Mark). Based on its output, you can decide whether or not to shrink the segment.

There are many online resources to explain pipelined functions so I will not do this here.

Here is the code:

-- create object
create type space_usage_object as object 
(
   Segment_owner varchar2(30),
   segment_name varchar2(30),
   segment_type varchar2(30),
   partition_name varchar2(30),
   blocks_unformatted number,
   bytes_unformatted number,
   
   blocks_0_25 number,
   bytes_0_25 number,   

   blocks_25_50 number,
   bytes_25_50 number,   
   
   blocks_50_75 number,
   bytes_50_75 number,   
   
   blocks_75_100 number,
   bytes_75_100 number,            
   
   blocks_full number,
   bytes_full number
);
/

-- create type
CREATE TYPE space_usage_type AS TABLE OF space_usage_object;
/

-- pipelined function
create or replace FUNCTION SPACE_USAGE (
  in_segment_owner        IN  VARCHAR2 ,
  in_segment_name  IN  VARCHAR2 ,
  in_segment_type  IN  VARCHAR2 ,
  in_partition_name       IN  VARCHAR2 := null)
  RETURN space_usage_type PIPELINED AS

 v_unformatted_blocks number;
 v_unformatted_bytes number;
 v_fs1_blocks number;
 v_fs1_bytes number;
 v_fs2_blocks number;
 v_fs2_bytes number;
 v_fs3_blocks number;
 v_fs3_bytes number;
 v_fs4_blocks number;
 v_fs4_bytes number;
 v_full_blocks number;
 v_full_bytes number;
 
 
begin


  dbms_space.space_usage(segment_owner => in_segment_owner,
                        segment_name => in_segment_name,
                        segment_type => in_segment_type,  
                        unformatted_blocks => v_unformatted_blocks, 
                        unformatted_bytes => v_unformatted_bytes,
                        fs1_blocks => v_fs1_blocks, 
                        fs1_bytes => v_fs1_bytes,
                        fs2_blocks => v_fs2_blocks,
                        fs2_bytes => v_fs2_bytes,
                        fs3_blocks => v_fs3_blocks,
                        fs3_bytes => v_fs3_bytes,
                        fs4_blocks => v_fs4_blocks, 
                        fs4_bytes => v_fs4_bytes,
                        full_blocks => v_full_blocks, 
                        full_bytes => v_full_bytes,
                        partition_name =>  in_partition_name);
                        
  pipe row (space_usage_object (in_segment_owner, in_segment_name, in_segment_type, in_partition_name,
                              v_unformatted_blocks , v_unformatted_bytes, 
 v_fs1_blocks, v_fs1_bytes, v_fs2_blocks,  v_fs2_bytes,  v_fs3_blocks, 
 v_fs3_bytes, v_fs4_blocks , v_fs4_bytes , v_full_blocks , v_full_bytes ));

  RETURN;
END;
/


-- selecting 
select * from table(space_usage ('OWNER', 'SEGMENT_NAME', 'SEGMENT_TYPE'));


Hazem Ameen
Senior Oracle DBA


Bulk Deletion Performance with Foreign Keys Constraints (on delete cascade)

The purpose of this post is to discourage DBAs from creating enabled foreign keys on tables that are expected to see bulk operations. I still suggest creating foreign keys but disabling them, so designers, DBAs and developers will still know which tables are related.

Our test cases were run in our development environment, which is 10.2.0.5 RAC on HP Itanium.

The target was to delete a few million rows from 2 tables related one-to-many with an enabled “on delete cascade” foreign key.

The initial plan was simple:

1)      Delete from the child table first to eliminate lookups from parent to child.

2)      Delete from the parent.

3)      Use a nice parallel hint for both steps.

We tested our plan with about 35,000 rows.

The first step went fine, but in the second step the query ran for a long time. When we checked, we found the query was waiting on a TX lock, even though we had executed it in the same session.

Why wait on a TX lock in the same session?

This is because the first query used a parallel hint, which spawned parallel sessions. Those parallel sessions locked the rows in the child table, and we couldn’t move forward until we issued a commit.

You can also reproduce the same behavior in a single-instance environment, without any parallelism, with these simple steps:

1)      Delete a child row.

2)      From a new session, delete the parent row.

The deletion of the parent row will hang until you commit the child delete.
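A sketch (table and column names hypothetical; the foreign key on child is “on delete cascade”):

-- Session 1
DELETE FROM child WHERE parent_id = 42;
-- no commit yet

-- Session 2: hangs on a TX lock until session 1 commits
DELETE FROM parent WHERE id = 42;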
 

Foreign Keys Performance

The following test cases were executed against about 35,000 rows. After every delete, we copied the data back.

Deleting the child data first, committing, then deleting from the parent table took 75 seconds.

Deleting directly from the parent table also took 75 seconds (apparently there is no gain from deleting the child rows first, bummer!).

With the foreign keys disabled, deleting from the child and then from the parent took only 12 seconds: 6 times as fast. A sketch of that sequence follows.
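Constraint and table names here are hypothetical:

ALTER TABLE child DISABLE CONSTRAINT child_parent_fk;

DELETE FROM child  WHERE created < DATE '2010-01-01';
DELETE FROM parent WHERE created < DATE '2010-01-01';
COMMIT;

-- we leave the constraint disabled, as argued below; if you must re-enable:
-- ALTER TABLE child ENABLE NOVALIDATE CONSTRAINT child_parent_fk;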

 

Conclusion

Foreign keys are excellent for ERD diagrams and remind everybody which tables are related, but physically disable them and you should live happily ever after.
 
Hazem Ameen
Senior Oracle DBA


Oracle Date, Timestamp & Time Zone Explained

Back in 9i Oracle introduced a new data type called Timestamp.

The Timestamp data type adds two new parts to the regular Date data type:

  1. Fraction of a second, down to microseconds (6 digits)
  2. Time zone (shown only by selected functions like systimestamp and current_timestamp, and by data types such as “Timestamp with time zone”)

Breakdown of a timestamp: 2/9/2011 10:39:36.802604 AM +03:00
“2/9/2011 10:39:36” is exactly like the older Date data type.
“802604” is the fraction of a second, which shows only in timestamp data types.
“+03:00” is the Riyadh time zone offset.

Why do I need a fraction of a second in my date; aren’t seconds enough?

For most applications, precision to the second is enough, but some applications do require microseconds.

Applications like trading, bidding sites such as eBay, and some physics applications where operations start and finish within microseconds can benefit from timestamp.

What is Time Zone?

You can find the formal definition here: http://en.wikipedia.org/wiki/Time_zone

A time zone, like EST (Eastern Standard Time) or CST (Central Standard Time), is determined by how far you are from the Greenwich meridian.

Here in Riyadh we are UTC+03:00.

Do I need Time Zone?

Applications that get deployed across multiple time zones benefit greatly from this feature. Worldwide internet applications with one central database, or databases replicated worldwide, are some of the applications that utilize time zones. Before this feature, we had to create one Date column holding the database server’s date/time in the server’s time zone and an additional Date column holding the date/time in the client’s time zone. Lots of manual conversion (not to forget Daylight Saving Time!). A time zone mess, in short.

Here are tables comparing the Timestamp functions and data types to their older Date equivalents.

Functions

Date           Equivalent Timestamp   Explanation
sysdate        systimestamp           System clock on the database server.
current_date   current_timestamp      Shows date/time in the client’s time zone. If the client’s time zone changes, the output of these functions changes accordingly. current_timestamp shows the time zone part.
(none)         localtimestamp         Shows the timestamp in the client’s time zone. If the client’s time zone changes, the output changes accordingly. Does NOT show the time zone as part of the output.

Data Types

Date            Equivalent Timestamp             Explanation
Date            Timestamp                        Like Date, plus fractional seconds; no time zone stored.
No equivalent   Timestamp with Time Zone         Saves the timestamp in the client’s time zone. If the client’s time zone later changes, the saved value is NOT converted to the new time zone. Shows the time zone as part of the output.
No equivalent   Timestamp with Local Time Zone   Saves the timestamp normalized to the database time zone. When selected back, the value is converted to the client’s current time zone, so if the client’s time zone changes, the value shown changes accordingly. Does NOT show the time zone as part of the output.
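To see the difference between these types, a small sketch (table name hypothetical):

CREATE TABLE ts_demo (
  ts_plain TIMESTAMP,
  ts_tz    TIMESTAMP WITH TIME ZONE,
  ts_ltz   TIMESTAMP WITH LOCAL TIME ZONE
);

INSERT INTO ts_demo VALUES (systimestamp, systimestamp, systimestamp);
COMMIT;

ALTER SESSION SET TIME_ZONE = '+00:00';

-- ts_ltz is now displayed shifted to the new session time zone;
-- ts_tz keeps the offset it was saved with; ts_plain never had one.
SELECT * FROM ts_demo;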

 
How can I know my database time zone?
SELECT dbtimezone FROM DUAL;

 
How to know client’s time zone?
SELECT sessiontimezone FROM DUAL;

 
How to change database time zone?
ALTER DATABASE SET TIME_ZONE = '+03:00';

Then restart the database.
 
How to change the client (session) time zone?
ALTER SESSION SET TIME_ZONE = '+03:00';

OR by using a named region:
ALTER SESSION SET TIME_ZONE = 'Asia/Riyadh';
 

This latter ALTER will change the current_timestamp output to:
13-FEB-11 11.54.19.985565000 AM ASIA/RIYADH

OR by using environment variables

export ORA_SDTZ='+05:00'

 
How to initially set up your database time zone?
CREATE DATABASE testdb
. . .
. . .
SET TIME_ZONE = '+03:00';

 
Where can I find a list of all time zones?
Look up this view: V$TIMEZONE_NAMES
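For example, to list the distinct region names:

SELECT DISTINCT tzname FROM v$timezone_names ORDER BY tzname;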

Hope this helps.
 
 
Hazem Ameen
Senior Oracle DBA

