Data Pump & Arabic Character-set Conversion

Those of you who used exp/imp know that you had to set NLS_LANG in the session to obtain a correct export; otherwise exp would write the dump in the default character set, US7ASCII. If you were then importing into a database with a different character set, you had to set NLS_LANG again to match the new character set.
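For example, a correct export of AR8ISO8859P6 data with the old exp tool meant setting the environment first, something like this (a hypothetical session; the user, table, and file names are made up):

$ export NLS_LANG=AMERICAN_AMERICA.AR8ISO8859P6
$ exp scott/tiger tables=arabic_tab file=arabic_tab.dmp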

Now with Data Pump, setting NLS_LANG prior to exporting/importing is no longer required. Data Pump does the character-set conversion for you without any settings on your side.

Here is a quick test we ran:

DB1 character-set: AR8ISO8859P6

DB2 character-set: AL32UTF8

Session NLS_LANG: American_America.WE8ISO8859P1

We were able to export/import tables with Arabic data between DB1 and DB2 with no issues.
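Nothing beyond plain expdp/impdp was needed; a minimal sketch of the round trip (the directory object, dump file, and table names are hypothetical, and NLS_LANG was deliberately left at the session default):

On DB1 (AR8ISO8859P6):
expdp system/password directory=dp_dir dumpfile=arabic_tab.dmp tables=scott.arabic_tab

On DB2 (AL32UTF8):
impdp system/password directory=dp_dir dumpfile=arabic_tab.dmp tables=scott.arabic_tab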

 

Hazem Ameen
Senior Oracle DBA


SQL*Net More Data to Client & SDU

In one of our projects, we had an operation that moves large amounts of data through a db link. This operation was taking much longer than expected, and a SQL trace indicated an excessive number of SQL*Net more data to client waits. This led us to perform an extensive investigation of all layers between the 2 databases: OS parameters, network, firewall/DMZ, all of which came back OK.

In our development environment, we decided to increase the SDU parameter to 32K and see the outcome of such a modification. Prior to this test, our impression of the SDU parameter was that very little performance improvement can be achieved by increasing its value, and that it was not worth fiddling with.

Test Scenario Setup:

  • Create a db link pointing back to the same database so we eliminate any network traffic overhead.
  • Run 2 tests: one with default values and a second with SDU set to 32767 in both the TNS entry and the listener configuration (see the sketch after this list).
  • Create an additional listener with a different name, listening on a port other than 1521. This, with the local_listener parameter left at its default, disables dynamic registration with the new listener; dynamic registration overwrites the SDU parameter according to Metalink article 124802.1 SDU Value not Working with Dynamic Registration. When creating the additional listener, use netca, not netmgr; we couldn't log on through listeners created with netmgr and got an error ORA-01034 Oracle not available.
  • Create an additional TNS entry with the new SDU value.
  • Enable SQL*Net tracing to verify the SDU parameter is set correctly.
  • Trace the remote sessions with dbms_monitor.session_trace_enable.
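Here is roughly what the static configuration looked like (the listener name, host, port, SID, and ORACLE_HOME are hypothetical placeholders; the essential parts are the SDU entries):

listener.ora on the remote site:

LISTENER_SDU =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1522))
  )

SID_LIST_LISTENER_SDU =
  (SID_LIST =
    (SID_DESC =
      (SDU = 32767)    # session data unit for statically registered connections
      (SID_NAME = ORCL)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
    )
  )

tnsnames.ora on the local site:

DB1_SDU =
  (DESCRIPTION =
    (SDU = 32767)      # the negotiated SDU is the lower of the client and server values
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1522))
    (CONNECT_DATA = (SID = ORCL))
  )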

Test 1 using default values

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      353     30.87      98.88      36206      53047          0    11503946
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      355     30.88      98.88      36206      53047          0    11503946

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 59

Rows     Row Source Operation
-------  ---------------------------------------------------
11503946  TABLE ACCESS FULL TABLE1 (cr=53047 pr=36206 pw=0 time=11504289 us)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                     353        0.00          0.00
  SQL*Net message from client                   353        0.32         38.37
  SQL*Net more data to client                 38967        0.16         68.63
  db file scattered read                       2274        0.04          4.42

Test 2 using SDU at 32K

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.00          0          0          0           0
Fetch      353     27.33      59.64      36872      53047          0    11503946
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      355     27.34      59.65      36872      53047          0    11503946

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61

Rows     Row Source Operation
-------  ---------------------------------------------------
11503946  TABLE ACCESS FULL TABLE1 (cr=53047 pr=36872 pw=0 time=11504274 us)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                     353        0.00          0.00
  SQL*Net message from client                   353        0.30         69.02
  SQL*Net more data to client                  2223        0.24         29.76
  db file scattered read                       2316        0.12          6.87

In the 2nd test with SDU set at 32K, the total wait time for SQL*Net more data to client dropped by more than half; in other tests (not published here), we saw a 10% improvement in elapsed time. A very welcome improvement.

How to Put in Production

We don't fully understand the exact ramifications of increasing this value on a production system, so to play it safe, we decided to create a new listener at the remote site and a new TNS entry at the local site, as indicated earlier in the test setup. The db link then uses the new TNS entry, which points to the new listener at the remote site.
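The db link itself is nothing special; a minimal sketch, assuming the hypothetical DB1_SDU alias from the configuration above (the link, user, and password names are made up):

create database link db1_sdu_link
connect to app_user identified by app_password
using 'DB1_SDU';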

Conclusion

Projects that move large amounts of data across a TCP link, such as Oracle Streams or materialized views over a db link, can see some benefit from increasing the SDU value.

Hazem Ameen
Senior Oracle DBA

Oracle Streams: Recommendations for Heartbeat table

Here are some simple recommendations we thought would be useful when implementing a Streams heartbeat table in a bi-directional environment:

  • Oracle suggests adding a row to the heartbeat table every 1 minute. We recommend scheduling the job every 10-15 seconds, which improves your probability of catching intermittent problems like a buggy firewall or a faulty network card.
  • Partition the table for easier maintenance. Dropping a partition is easier than deleting thousands of rows. Remember to create a job that adds new partitions (a sketch appears at the end of this post).
  • Implement only 1 heartbeat table in your Streams environment; checking 1 table is easier than checking multiple tables.
  • Generating the primary key from the date is easier than creating a sequence. Add the site name or an abbreviation to make the primary key globally unique.

Here is an example:

create table streams_heartbeat
(
id varchar2(20),
hb_time date,
constraint streams_heartbeat_pk primary key (id) using index tablespace users
) pctfree 0
partition by range(hb_time)
(
partition p_20080628 values less than (to_date('2008-06-29 00:00:00', 'YYYY-MM-DD HH24:MI:SS')),
partition p_20080629 values less than (to_date('2008-06-30 00:00:00', 'YYYY-MM-DD HH24:MI:SS')),
partition p_20080630 values less than (to_date('2008-07-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')),
partition p_20080701 values less than (to_date('2008-07-02 00:00:00', 'YYYY-MM-DD HH24:MI:SS'))
);

Site A:

begin
dbms_scheduler.create_job(
job_name => 'streams_heartbeat_job',
job_type => 'PLSQL_BLOCK',
job_action => 'begin insert into streams_heartbeat (id, hb_time) values (''A-'' || to_char(sysdate, ''YYYYMMDDHH24MISS''), sysdate); end;',
start_date => SYSTIMESTAMP,
repeat_interval => 'freq=secondly; interval=15',
enabled => TRUE,
comments => 'Streams heartbeat job from site A');
end;
/

Site B:

begin
dbms_scheduler.create_job(
job_name => 'streams_heartbeat_job',
job_type => 'PLSQL_BLOCK',
job_action => 'begin insert into streams_heartbeat (id, hb_time) values (''B-'' || to_char(sysdate, ''YYYYMMDDHH24MISS''), sysdate); end;',
start_date => SYSTIMESTAMP,
repeat_interval => 'freq=secondly; interval=15',
enabled => TRUE,
comments => 'Streams heartbeat job from site B');
end;
/
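As for the partition-maintenance job recommended above, a minimal sketch could look like this (the job name and schedule are hypothetical; once a day it adds the partition for the following day, matching the naming convention of the table DDL):

begin
dbms_scheduler.create_job(
job_name => 'streams_heartbeat_part_job',
job_type => 'PLSQL_BLOCK',
job_action => q'[
begin
  -- build and run: alter table streams_heartbeat add partition p_YYYYMMDD
  --                values less than (to_date('<next day> 00:00:00', ...))
  execute immediate 'alter table streams_heartbeat add partition p_'
    || to_char(sysdate + 1, 'YYYYMMDD')
    || ' values less than (to_date('''
    || to_char(sysdate + 2, 'YYYY-MM-DD')
    || ' 00:00:00'', ''YYYY-MM-DD HH24:MI:SS''))';
end;]',
start_date => SYSTIMESTAMP,
repeat_interval => 'freq=daily',
enabled => TRUE,
comments => 'Adds the next day''s heartbeat partition');
end;
/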

Hazem Ameen
Senior Oracle DBA