Arguments Against Big Datafiles

I was running a health check on a 10gR2 database and came across a tablespace with a single datafile sized at 50 GB. We normally pick a standard datafile size in the 2-6 GB range, depending on how large the total database is expected to be.

Here are some reasons why:

  1. The bigger the file, the harder it gets to tune I/O contention issues, especially when your disks are not striped. In that case you will have to move high-I/O tables to a different tablespace, which can mean downtime because the moved tables' indexes become unusable until they are rebuilt.
  2. ASM sometimes does I/O rebalancing among disks. Imagine copying 50 GB from one disk to another and the impact that has on your system.
  3. Recovering a big datafile is another concern because of the prolonged downtime: if the file is damaged, the whole 50 GB has to be restored and recovered, whereas a smaller datafile limits how much you have to bring back.
  4. Running dbv (DBVERIFY) against such a file will take much longer unless you split the work and run it in parallel (a sketch follows this list).
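
To make point 4 concrete, here is a rough sketch of verifying a 50 GB datafile in parallel by giving each dbv session its own block range. With an 8 KB block size the file holds roughly 6,553,600 blocks; the file path, block ranges, and log names below are only placeholders:

    dbv file=/u01/oradata/PROD/big_data01.dbf blocksize=8192 start=1 end=2184533 logfile=dbv_1.log &
    dbv file=/u01/oradata/PROD/big_data01.dbf blocksize=8192 start=2184534 end=4369066 logfile=dbv_2.log &
    dbv file=/u01/oradata/PROD/big_data01.dbf blocksize=8192 start=4369067 logfile=dbv_3.log &
    wait

Each session scans only its own slice of the file, so the three runs together finish in roughly a third of the time of a single full pass.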

Hopefully this will make you think twice before running “create bigfile tablespace”.
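
For comparison, the multi-datafile approach we normally follow might look something like the statement below; the tablespace name, ASM disk group, and sizes are only illustrative:

    -- smallfile tablespace: several modest datafiles instead of one huge file
    CREATE TABLESPACE app_data
      DATAFILE '+DATA' SIZE 4G AUTOEXTEND ON NEXT 512M MAXSIZE 6G,
               '+DATA' SIZE 4G AUTOEXTEND ON NEXT 512M MAXSIZE 6G;

More datafiles can be added later as the tablespace grows, instead of letting a single file balloon to 50 GB.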

Hazem Ameen
Senior Oracle DBA


One thought on “Arguments Against Big Datafiles”

  1. Hi,

    I wanted to add one more point to Hazem’s note on BFT (bigfile tablespaces).

    Imagine a scenario where a BFT created on ASM disks has grown to the brink of the available disk space; it will not be feasible to add more datafiles. But with a smallfile tablespace, we can add a datafile that points to any available disk group. Of course, I am not against the idea of using BFT, but a DBA deciding to use such features in production has to consider practical issues like these: as the size increases, any maintenance or breakdown will require more time.
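
    For example, with a smallfile tablespace you can simply point a new datafile at whichever disk group still has space; the tablespace and disk group names here are just for illustration:

        ALTER TABLESPACE app_data
          ADD DATAFILE '+DATA2' SIZE 4G AUTOEXTEND ON NEXT 512M MAXSIZE 6G;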

    Rgds..PM
