The core architecture of the Scheduler remains largely the same in 12c, with some enhancements and a few differences in a multitenant environment.
The following columns have been added to the *_SCHEDULER_JOB_RUN_DETAILS views. Of these, ERRORS and OUTPUT are very helpful: the ERRORS column shows the error messages from the job run, and the OUTPUT column shows the log of activity during the job run. Earlier, we had to use DBMS_SCHEDULER.GET_FILE to retrieve the stdout or stderr of external jobs; now we can get that output directly from the OUTPUT column.

- BINARY_ERRORS
- BINARY_OUTPUT
- ERRORS
- OUTPUT
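As a quick sketch of reading these columns, the query below pulls the errors and output for a job's recent runs (the job name MY_BACKUP_JOB is a hypothetical example):

```sql
-- Fetch recent run output/errors for a hypothetical job
SELECT job_name, status, errors, output
FROM   dba_scheduler_job_run_details
WHERE  job_name = 'MY_BACKUP_JOB'
ORDER  BY log_date DESC;
```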
We can now use script jobs to run our custom scripts. These script jobs use different interpreters to run the job action. The following types of script jobs are available:
- SQL script jobs – These use the SQL*Plus interpreter to run jobs, so all SQL*Plus features, including formatting, are available.
- External script jobs – These use the shell interpreter on UNIX and the command prompt on Windows.
- Backup script jobs – These use the RMAN interpreter to run the jobs.
For example, earlier, to run a SQL*Plus script we had to go to the UNIX shell (cmd on Windows) and invoke SQL*Plus ourselves. With SQL script jobs, we can simply specify job_type as SQL_SCRIPT; SQL*Plus is invoked automatically, and we can place the SQL*Plus commands directly inline in JOB_ACTION. The same applies to external and backup script jobs.
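A minimal sketch of such a job follows. The job name INLINE_SQL_JOB and the credential name DB_CRED are hypothetical, and this assumes a credential with database logon details has already been created:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'INLINE_SQL_JOB',   -- hypothetical job name
    job_type        => 'SQL_SCRIPT',       -- runs the action through SQL*Plus
    job_action      => q'[
      SET PAGESIZE 100
      SELECT owner, COUNT(*) FROM dba_tables GROUP BY owner;
    ]',
    credential_name => 'DB_CRED',          -- assumed to exist already
    enabled         => TRUE);
END;
/
```

Note that script jobs run outside the database session, which is why they need a credential to connect back or to the OS.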
We now have to use the DBMS_CREDENTIAL package, instead of the credential sub-programs in DBMS_SCHEDULER, to manage credentials. The core functionality of credentials remains the same.
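Creating a credential with the new package looks like this; the credential name OS_CRED and the OS account details are placeholders for illustration:

```sql
BEGIN
  DBMS_CREDENTIAL.CREATE_CREDENTIAL(
    credential_name => 'OS_CRED',   -- hypothetical name
    username        => 'oracle',    -- placeholder OS user
    password        => 'secret');   -- placeholder password
END;
/
```

The credential can then be attached to a job, for example via DBMS_SCHEDULER.SET_ATTRIBUTE with the credential_name attribute.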
Specific to Multitenant
Job Co-ordinator and Slaves:
1. The job_queue_processes parameter can only be set at the CDB level, and the job coordinator works only at the CDB level.
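In practice this means the parameter is changed from the CDB root, along these lines (the value 100 is just an example):

```sql
-- Run from the CDB root; the setting applies to the whole CDB
ALTER SYSTEM SET job_queue_processes = 100 SCOPE = BOTH;
```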
The job slave processes are invoked by the coordinator process, similar to non-CDBs (versions prior to 12c). The coordinator process monitors the CDB and the PDBs and runs the jobs based on their priorities. If a job from a PDB has to be executed, the coordinator spawns a job slave to execute it, and the slave switches into that PDB (the slave is then under the control of that PDB).
2. Unlike a non-CDB, where only one window can be open at a time, in this new architecture more than one window can be open at any time: one at the PDB level, similar to the traditional window (which allocates resources to consumer groups), and one at the CDB level, which allocates resources among the different PDBs. So one window at the CDB level and one window per PDB can be open at any time.
3. Global Scheduler attributes, when set, apply only to that PDB or CDB. For example, if the LOGGING attribute is set at the CDB level, it applies only to jobs running at the CDB level, and if it is set at the PDB level, it applies only to jobs running in that PDB. A job will not fail over to other PDBs or to the CDB if the PDB in which it is running crashes or is closed. However, in a RAC environment, it may fail over to a surviving node that has the PDB open.
Modifications to Dictionary views:
All the V$ and GV$ views have a new column called CON_ID, and there are CDB_* views corresponding to all the Scheduler DBA_* views. When we query these CDB_* views from within a PDB, they show data similar to the DBA_* views; from the CDB root, however, we can view the data for all the PDBs.
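For example, a query along these lines from the CDB root shows job run details across all containers, with CON_ID identifying the container each row belongs to:

```sql
-- From the CDB root: job run details for every container
SELECT con_id, job_name, status
FROM   cdb_scheduler_job_run_details
ORDER  BY con_id, job_name;
```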