I was running a health check on a 10gR2 database and came across a tablespace with a single datafile sized at 50 GB. We normally pick a standard datafile size in the 2-6 GB range, depending on how large the total database is expected to grow.
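For illustration, here is a minimal sketch of that approach: several fixed-size datafiles instead of one huge file, grown later by adding another file of the same size. The tablespace name, file paths, and sizes below are made up.

    -- Hypothetical layout: three 4 GB datafiles instead of a single 50 GB file
    CREATE TABLESPACE app_data
      DATAFILE '/u01/oradata/ORCL/app_data01.dbf' SIZE 4G,
               '/u02/oradata/ORCL/app_data02.dbf' SIZE 4G,
               '/u03/oradata/ORCL/app_data03.dbf' SIZE 4G
      EXTENT MANAGEMENT LOCAL
      SEGMENT SPACE MANAGEMENT AUTO;

    -- When more space is needed, add another standard-size file
    ALTER TABLESPACE app_data
      ADD DATAFILE '/u04/oradata/ORCL/app_data04.dbf' SIZE 4G;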
Here are some reasons why we prefer smaller datafiles:
- The bigger the file, the harder it is to tune out I/O contention, especially when your disks are not striped. In that case you end up moving high-I/O tables to a different tablespace, which can mean downtime while the affected indexes are unusable (see the sketch after this list).
- ASM sometimes rebalances data across disks. Imagine a 50 GB copy from one disk to another and the impact that has on your system.
- Recovering a big datafile is another concern, because downtime is prolonged while the whole file is restored.
- Running dbv (DBVERIFY) against the file takes much longer, unless you split the file into block ranges and run several dbv sessions in parallel.
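As a rough sketch of the table move mentioned in the first point (all object and tablespace names here are hypothetical), moving a segment leaves its indexes unusable until they are rebuilt:

    -- Hypothetical names: relocate a high-I/O table to its own tablespace
    ALTER TABLE app.orders MOVE TABLESPACE hot_data;

    -- The move marks the table's indexes UNUSABLE; rebuild them afterwards
    ALTER INDEX app.orders_pk REBUILD;
    ALTER INDEX app.orders_cust_ix REBUILD;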
Hopefully this will make you think twice before reaching for “create bigfile tablespace”.
Senior Oracle DBA