Question: How to check whether the datafiles are set to AUTOEXTEND and UNLIMITED?
We can query the DBA_DATA_FILES view to check whether the data file is in AUTOEXTEND mode.
SQL> select tablespace_name, file_name, autoextensible from dba_data_files;
To confirm whether MAXSIZE is set to UNLIMITED we have to check for the maximum possible size, as there is no literal 'UNLIMITED' value stored for a file. Oracle limits a (smallfile) datafile to approximately 4 million blocks (BIGFILE tablespaces are a different topic), so the actual limit depends on DB_BLOCK_SIZE. With a 2k block size the maximum file size is approximately 8GB; with a 4k block it is 16GB, with an 8k block 32GB, and so on. This limit on the number of blocks in a file applies to all platforms. In short, we have to query the MAXBYTES column and see whether it holds the maximum possible value or has been manually limited to a smaller value.
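For example, a sketch of such a check (column names are from DBA_DATA_FILES; the UNLIMITED cap for a smallfile tablespace is roughly 4194302 blocks):
SQL> select tablespace_name, file_name, autoextensible, maxbytes, maxblocks from dba_data_files where autoextensible = 'YES';
With an 8k block size, UNLIMITED typically appears as MAXBYTES = 34359721984 (about 32GB); a noticeably smaller MAXBYTES means MAXSIZE was limited manually.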
Performance impact when Tablespace Autoextend is enabled
Whether there is a performance impact depends on what else is happening at that moment (the number of concurrent transactions, I/O activity in the storage system, etc.). The simple answer is 'yes, auto-extending a datafile could impact concurrent transactional activity'; the real answer is more nuanced.
The extent to which extending a datafile might cause concurrency issues depends upon:
- How long it takes to extend the datafile
- What transactions are running while the extension takes place
When a datafile is extended, the session performing the operation acquires a number of locks. Generally these locks will not affect concurrent transactions, but some may. For example, the controlfile has to be updated with the new datafile information, which involves acquiring the CF (controlfile) enqueue in exclusive mode. If for some reason this enqueue is held for too long, it could cause checkpoints to hang (by blocking the CKPT process). This in turn may block sessions doing DML, because redo may not be able to be written to the online log files. This is not a usual scenario, but it can happen.
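If CF enqueue contention is suspected, a quick check is to look for sessions waiting on it (a sketch; 'enq: CF - contention' is the standard wait event name, adjust the filter as needed):
SQL> select sid, event, blocking_session, seconds_in_wait from v$session where event = 'enq: CF - contention';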
It is not possible to list every potential concurrency blocking scenario. If such a hang did occur, we would need the customer to collect HANGANALYZE dumps so the reason can be investigated.
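As an illustration (the exact level, and whether a systemstate dump is also required, would normally be decided by Support), a HANGANALYZE dump can be generated with ORADEBUG from a SYSDBA session:
SQL> oradebug setmypid
SQL> oradebug hanganalyze 3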
The best way to minimize the risk of concurrency issues would be to:
- Size datafiles correctly to minimise auto-extension activity
- Monitor datafiles’ free space and manually resize files before auto-extension is required (see the example queries after this list)
- Do manual resizes during off-peak periods
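For example, a sketch of a free-space check and a manual resize (the datafile path and target size below are purely illustrative):
SQL> select tablespace_name, round(sum(bytes)/1024/1024) free_mb from dba_free_space group by tablespace_name;
SQL> alter database datafile '/u01/oradata/ORCL/users01.dbf' resize 2g;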
With Exadata there is fast file creation. This allows the data block initialisation performed during file creation/extension to be offloaded to the storage cells. It is further optimized if the write-back flash cache is enabled, because the blocks are written to flash.