CHUNK_ID TASK_NAME STATUS START_ROWID END_ROWID
---------- -------------------- -------------------- ------------------ ------------------
1 Task 1: By Rowid PROCESSED AAATLKAAHAAAACAAAA AAATLKAAHAAAACxCcP
2 Task 1: By Rowid PROCESSED AAATLKAAHAAAACyAAA AAATLKAAHAAAADjCcP
3 Task 1: By Rowid PROCESSED AAATLKAAHAAAADkAAA AAATLKAAHAAAAD/CcP
4 Task 1: By Rowid PROCESSED AAATLKAAHAAAAEAAAA AAATLKAAHAAAAExCcP
5 Task 1: By Rowid PROCESSED AAATLKAAHAAAAEyAAA AAATLKAAHAAAAFjCcP
6 Task 1: By Rowid PROCESSED AAATLKAAHAAAAFkAAA AAATLKAAHAAAAF/CcP
7 Task 1: By Rowid PROCESSED AAATLKAAHAAAAGAAAA AAATLKAAHAAAAGxCcP
8 Task 1: By Rowid PROCESSED AAATLKAAHAAAAGyAAA AAATLKAAHAAAAHjCcP
9 Task 1: By Rowid PROCESSED AAATLKAAHAAAAHkAAA AAATLKAAHAAAAH/CcP
9 rows selected
In user_parallel_execute_chunks, one of the most important columns is STATUS, which records the processing state of each chunk. We can use this column to track how far the task has progressed.
SQL> select status, count(*) from user_parallel_execute_chunks group by status;
STATUS COUNT(*)
-------------------- ----------
ASSIGNED 2
UNASSIGNED 5507
PROCESSED 938
(A short while later...)
SQL> select status, count(*) from user_parallel_execute_chunks group by status;
STATUS COUNT(*)
-------------------- ----------
ASSIGNED 2
UNASSIGNED 5441
PROCESSED 1004
The STATUS column reveals how the parallel job works. When the chunks are first created, every chunk record is set to UNASSIGNED, together with its start and end boundaries (ROWIDs or a column range). A chunk currently being worked on is ASSIGNED; because the test program sets parallel_level to 2, there are always two chunks in ASSIGNED status at any moment. Once a chunk has been handled, it is marked PROCESSED.
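To make this lifecycle concrete, the sketch below shows the usual create/chunk/run sequence that produces exactly this behavior. The task name, table name, and UPDATE statement are placeholders rather than the ones used in this test; only the DBMS_PARALLEL_EXECUTE calls themselves follow the documented API.

declare
  vc_task varchar2(30) := 'DEMO_TASK';      -- placeholder task name
  vc_sql  varchar2(4000);
begin
  -- 1. Register the task
  dbms_parallel_execute.create_task(task_name => vc_task);

  -- 2. Split the target table into rowid chunks of roughly 100 blocks each;
  --    every chunk starts out as UNASSIGNED in user_parallel_execute_chunks
  dbms_parallel_execute.create_chunks_by_rowid(task_name   => vc_task,
                                               table_owner => user,
                                               table_name  => 'T_BIG',   -- placeholder table
                                               by_row      => false,
                                               chunk_size  => 100);

  -- 3. The statement run against each chunk; the framework binds the chunk
  --    boundaries into :start_id and :end_id
  vc_sql := 'update t_big set object_name = lower(object_name) ' ||
            'where rowid between :start_id and :end_id';

  -- 4. parallel_level => 2 starts two scheduler jobs, so at any moment
  --    at most two chunks are in ASSIGNED status
  dbms_parallel_execute.run_task(task_name      => vc_task,
                                 sql_stmt       => vc_sql,
                                 language_flag  => dbms_sql.native,
                                 parallel_level => 2);
end;
/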
The biggest concern with bulk data updates is the amount of undo they generate, so let us check the size of the active undo extents during the run.
SQL> select sum(bytes)/1024/1024 from dba_undo_extents where status='ACTIVE';
SUM(BYTES)/1024/1024
--------------------
SQL> select sum(bytes)/1024/1024 from dba_undo_extents where status='ACTIVE';
SUM(BYTES)/1024/1024
--------------------
16
SQL> select sum(bytes)/1024/1024 from dba_undo_extents where status='ACTIVE';
SUM(BYTES)/1024/1024
--------------------
10
The active undo stays small throughout, which shows that each operation touches only one small chunk at a time. It also confirms that the parallel execution commits step by step, chunk by chunk. In the scheduler job views we can also clearly see the jobs created for the task.
SQL> select owner, job_name, JOB_ACTION, SCHEDULE_TYPE, state, last_start_date from dba_scheduler_jobs where job_name like 'TASK$_4%';
OWNER JOB_NAME JOB_ACTION SCHEDULE_TYPE STATE LAST_START_DATE
------- ----------- ------------------------------------------ ------------- --------------- -----------------------------------
SYS TASK$_4_2 DBMS_PARALLEL_EXECUTE.RUN_INTERNAL_WORKER IMMEDIATE RUNNING 10-FEB-14 01.48.34.947417 PM PRC
SYS TASK$_4_1 DBMS_PARALLEL_EXECUTE.RUN_INTERNAL_WORKER IMMEDIATE RUNNING 10-FEB-14 01.48.34.730487 PM PRC
Note: v$px_process, the view for Oracle's traditional parallel query slaves, shows no rows, which indicates that the DBMS_PARALLEL_EXECUTE package is not built on Oracle's classic database parallel execution framework.
SQL> select * from v$px_process;
SERVER_NAME STATUS PID SPID SID SERIAL#
----------- --------- ---------- ------------------------ ---------- ----------
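Because the workers are scheduler jobs rather than PX slaves, they can be watched through the scheduler views instead. The query below is only an illustration, assuming the default TASK$_% job naming seen above; the join to v$session simply shows that each worker runs as an ordinary database session.

-- Running worker jobs and the sessions they occupy (illustrative query)
select j.job_name, j.session_id, s.serial#, s.program, s.action
  from dba_scheduler_running_jobs j
  join v$session s on s.sid = j.session_id
 where j.job_name like 'TASK$_%';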
Information at the end of execution:
--Controller
n_try := 0;
n_status := dbms_parallel_execute.task_status(task_name => vc_task);
while (n_try < 2 and n_status != dbms_parallel_execute.FINISHED) loop
   n_try := n_try + 1;   -- count the retry attempt so the loop cannot spin forever
   dbms_parallel_execute.resume_task(task_name => vc_task);
   n_status := dbms_parallel_execute.task_status(task_name => vc_task);
end loop;

--Deal with Result
dbms_parallel_execute.drop_task(task_name => vc_task);
end;
/
PL/SQL procedure successfully completed
Executed in 944.453 seconds
Updating 2 GB of data took 945 seconds in all, roughly 16 minutes.
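Had task_status returned something other than FINISHED, the error columns of the chunk view together with resume_task would normally be enough to diagnose and retry only the failed chunks. A minimal sketch, with the task name again a placeholder:

-- Chunks that hit an error keep their ORA- code and message
select chunk_id, status, error_code, error_message
  from user_parallel_execute_chunks
 where task_name = 'DEMO_TASK'                -- placeholder task name
   and status = 'PROCESSED_WITH_ERROR';

-- resume_task reprocesses only the chunks that did not reach PROCESSED
begin
  dbms_parallel_execute.resume_task(task_name => 'DEMO_TASK');
end;
/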
From the dictionary views and the call flow above, we can draw the following conclusions: