Optimization 1:
Based on the SQL's logic, add the filter condition object_id!=1; the semantics of the original statement are unchanged.
Session 1:
# Session 1 runs the Optimization 1 update statement
update tb_test_log set object_id=1 where owner='SYS' and object_id!=1;
commit;
Session 2:
# After this execution of Session 1's update, 827112 bytes of redo were generated and 22835 session logical reads were consumed
/*
NAME                     BEGIN_VALUE    END_VALUE      DIFF
redo size                655153236      655980348      827112
session logical reads    4030918        4053753        22835
*/
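The BEGIN_VALUE/END_VALUE/DIFF figures above are consistent with snapshotting Session 1's statistics in v$sesstat before and after each update. A minimal sketch of such a check, assuming Session 1's SID is supplied as :sid (the exact snapshot mechanism used in this test is not shown):
-- Run in Session 2 before and after Session 1's update, then subtract the two readings.
-- :sid is assumed to be Session 1's SID, e.g. looked up in v$session.
select sn.name, ss.value
  from v$sesstat ss, v$statname sn
 where ss.statistic# = sn.statistic#
   and ss.sid = :sid
   and sn.name in ('redo size', 'session logical reads');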
Session 1:
# Session 1 runs the Optimization 1 update statement again
update tb_test_log set object_id=1 where owner='SYS' and object_id!=1;
commit;
Session 2:
# After this execution of Session 1's update, 340 bytes of redo were generated and 12413 session logical reads were consumed
/*
NAME                     BEGIN_VALUE    END_VALUE      DIFF
redo size                655980348      655980688      340
session logical reads    4053753        4066166        12413
*/
Session 1:
# Session 1 runs the Optimization 1 update statement a third time
update tb_test_log set object_id=1 where owner='SYS' and object_id!=1;
commit;
Session 2:
# After this execution of Session 1's update, 340 bytes of redo were generated and 12413 session logical reads were consumed
/*
NAME                     BEGIN_VALUE    END_VALUE      DIFF
redo size                655980688      655981028      340
session logical reads    4066166        4078579        12413
*/
Summary: with Optimization 1, once the rows have already been set to 1, repeated executions update essentially no rows; redo generation per run drops to roughly 340 bytes, but logical reads remain at roughly 12,000 per run.
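The logical reads stay high because, assuming no index covers this predicate at this point, the rows with owner='SYS' still have to be visited to evaluate object_id!=1. A sketch to check the plan for this statement; the full-scan expectation is an assumption about this test setup, not captured output:
-- Expectation: TABLE ACCESS FULL on TB_TEST_LOG with filter OWNER='SYS' AND OBJECT_ID<>1.
explain plan for
  update tb_test_log set object_id=1 where owner='SYS' and object_id!=1;
select * from table(dbms_xplan.display);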
Optimization 2:
Based on the SQL's logic, add the filter condition decode(object_id,1,null,'1')='1' and create the index tb_test_log(owner,decode(object_id,1,null,'1')); the semantics of the original statement are unchanged.
Session 3:
# Create the new index
create index idx_tb_test_log_01 on tb_test_log(owner,decode(object_id,1,null,'1')) tablespace users;
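Before re-running the statement, it can be worth confirming that the indexed expression matches the predicate text exactly (a requirement for the function-based index to be usable) and that statistics exist for it. A minimal sketch; these verification steps are additions here and were not necessarily run in the original test:
-- Check the indexed expression; it should match decode(object_id,1,null,'1') in the predicate.
select index_name, column_expression
  from user_ind_expressions
 where index_name = 'IDX_TB_TEST_LOG_01';
-- Optionally gather statistics so the optimizer can cost the new index.
exec dbms_stats.gather_table_stats(user, 'TB_TEST_LOG', cascade => true);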
Session 1:
# Session 1 runs the Optimization 2 update statement
update tb_test_log set object_id=1 where owner='SYS' and decode(object_id,1,null,'1')='1';
commit;
Session 2:
# After this execution of Session 1's update, 384 bytes of redo were generated and 11214 session logical reads were consumed
/*
NAME                     BEGIN_VALUE    END_VALUE      DIFF
redo size                655981028      655981412      384
session logical reads    4078579        4089793        11214
*/
Session 1:
# Session 1 runs the Optimization 2 update statement again
update tb_test_log set object_id=1 where owner='SYS' and decode(object_id,1,null,'1')='1';
commit;
Session 2:
# After this execution of Session 1's update, 384 bytes of redo were generated and 6 session logical reads were consumed
/*
NAME                     BEGIN_VALUE    END_VALUE      DIFF
redo size                655981412      655981796      384
session logical reads    4089793        4089799        6
*/
Session 1:
# Session 1 runs the Optimization 2 update statement a third time
update tb_test_log set object_id=1 where owner='SYS' and decode(object_id,1,null,'1')='1';
commit;
Session 2:
# After this execution of Session 1's update, 384 bytes of redo were generated and 5 session logical reads were consumed
/*
NAME                     BEGIN_VALUE    END_VALUE      DIFF
redo size                655981796      655982180      384
session logical reads    4089799        4089804        5
*/
Summary: with Optimization 2, repeated executions update essentially no rows; redo generation per run is roughly 380 bytes, and logical reads drop to about 5 per run.
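The drop to around 5 logical reads is consistent with the update probing the new function-based index for owner='SYS' entries whose decode expression evaluates to '1' and finding none, so the table itself is never touched. A sketch to verify the plan, assuming the optimizer picks idx_tb_test_log_01 in this setup:
-- Expectation: an INDEX RANGE SCAN on IDX_TB_TEST_LOG_01 instead of a full table scan.
explain plan for
  update tb_test_log set object_id=1
   where owner='SYS' and decode(object_id,1,null,'1')='1';
select * from table(dbms_xplan.display);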
Conclusions:
1. Based on the SQL's logic, adding the filter condition object_id!=1 leaves the statement's semantics unchanged and greatly reduces redo generation.
2. Based on the SQL's logic, adding the filter condition decode(object_id,1,null,'1')='1' together with the index tb_test_log(owner,decode(object_id,1,null,'1')) leaves the statement's semantics unchanged and greatly reduces both redo generation and logical reads.
3. DELETE statements with the same kind of problem can also benefit from this approach, as illustrated below.
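As a hypothetical illustration of point 3 (the statement below is an example added here, not part of the test): a DELETE that is re-run periodically and usually finds nothing to remove can reuse the same decode() predicate and the idx_tb_test_log_01 index, so that runs which delete nothing cost only a few logical reads.
-- Hypothetical example: remove SYS rows whose object_id is still not 1,
-- using the same filter shape and function-based index as Optimization 2.
delete from tb_test_log
 where owner='SYS' and decode(object_id,1,null,'1')='1';
commit;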