
Case Study: Checkpoint Hang Waiting on log file switch (checkpoint incomplete), 11.2.0.4 on Solaris OS 5.11

I ran into this case a while ago. A friend first told me only that the database connection count was very high and asked me to help analyze it. The environment is an 11.2.0.4 two-node RAC on SunOS 5.11. By the time I got involved, several hours had passed since the problem was first noticed; the customer said the connection count had already stabilized, and that most sessions seen earlier were waiting on enq: SQ - contention. With only that much information, you might think this is simply a high-concurrency case of an undersized sequence cache. Of course it is not that simple. I also learned that only node 1 was affected; node 2 was normal.

We are often misled by the symptoms. When the patient is already terminally ill, the root cause is rarely just the current symptom; most of what we see now is the effect, not the cause. Let's first look at the situations in which enq: SQ - contention occurs.

1. What is enq: SQ - contention

ENQ is a queueing (enqueue lock) mechanism in the database, and SQ is the enqueue related to SeQuences. When a process requests the nextval of a sequence and finds that the cached values in memory are exhausted, it has to read another batch from the dictionary table seq$ on disk into the sequence cache. Filling or initializing the sequence cache requires acquiring the SQ enqueue and also updates the SEQ$ base table, which is why the SQL seen at the time was the internal recursive update on seq$.
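As a quick illustration, here is a minimal sketch of checking how large a sequence's cache is and enlarging it (APP_OWNER.ORDER_SEQ is a hypothetical sequence, not one from this case); every cache refill is exactly the operation that takes the SQ enqueue and updates SEQ$:

SQL> -- check the current cache size and ORDER attribute of a sequence
SQL> select sequence_owner, sequence_name, cache_size, order_flag, last_number
  2    from dba_sequences
  3   where sequence_owner = 'APP_OWNER'
  4     and sequence_name  = 'ORDER_SEQ';

SQL> -- enlarge the cache so refills (and SQ enqueue requests) become less frequent
SQL> alter sequence APP_OWNER.ORDER_SEQ cache 1000;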

Common causes of enq: SQ - contention include:
the sequence cache is too small while the application uses the sequence heavily, so the cache must be refilled frequently;
ORDER sequences on RAC;
an undersized shared pool causing cached sequence values to be aged out of the shared pool;
the SQ enqueue being held for too long so that other requests stay blocked, and so on.
To identify which sequence is involved, look at the P2 value of this wait event in v$session: P2 is the object_id of the sequence, as in the sketch below.
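A minimal sketch of pulling the P2 value straight from v$session for all sessions currently in this wait (P2 is already decimal here; P2RAW is the hex form that appears in the wait-event dump below):

SQL> select p2 object_id, p2raw, count(*)
  2    from v$session
  3   where event = 'enq: SQ - contention'
  4   group by p2, p2raw;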

wait event: 'enq: SQ - contention'
p1: 'name|mode'=0x53510006
p2: 'object #'=0x16a
p3: '0'=0x0

SQL> @hex 16a
                                DEC                  HEX
----------------------------------- --------------------
                         362.000000                  16A
SQL> select owner,object_name from dba_objects where object_id=362;
OWNER OBJECT_NAME
----- -----------
SYS   AUDSES$

Note:
As you can see, the application was blocked while requesting the sys.audses$ sequence in exclusive mode. This sequence is used at session logon whenever auditing is not disabled, and auditing is enabled by default in 11.2. Naturally you might wonder whether this sequence's cache is too small and whether we hit a logon storm. That was not the case, which can be confirmed from the listener log. The quickest way forward is to find the blocker chain.
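For completeness, a minimal sketch of checking the cache size of SYS.AUDSES$; if a logon storm really had been the cause, a common mitigation is to enlarge its cache (left commented out here, since it was not needed in this case -- check the relevant MOS notes for your version before changing a SYS-owned sequence):

SQL> select cache_size, order_flag
  2    from dba_sequences
  3   where sequence_owner = 'SYS'
  4     and sequence_name  = 'AUDSES$';

SQL> -- alter sequence sys.audses$ cache 10000;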

2. Finding the blocking chain

SQL> select event,BLOCKING_SESSION,BLOCKING_INSTANCE,count(*)
 from v$session 
 where status<>'INVALID' 
 group by event,BLOCKING_SESSION,BLOCKING_INSTANCE;

EVENT                          BLOCKING_SESSION BLOCKING_INSTANCE        COUNT(*)
------------------------------ ---------------- ----------------- ---------------
VKTM Logical Idle Wait                                                          1
gc buffer busy release                      608                 1               1
Streams AQ: waiting for time m                                                  1
anagement or cleanup tasks
Space Manager: slave idle wait                                                  1
write complete waits                        871                 1               1
rdbms ipc message                                                              54
SQL*Net message to client                                                       1
enq: SQ - contention                        608                 1             605
PING                                                                            1
buffer busy waits                          4685                 1               1
SQL*Net message from client                                                     4
enq: SQ - contention                       7382                 1               7
DIAG idle wait                                                                  2
log file switch (checkpoint in             1771                 1               5
complete)
...
GCR sleep                                                                       1
VKRM Idle                                                                       1
enq: SQ - contention                                                            1
buffer busy waits                          7298                 1               2

26 rows selected.

SQL> select BLOCKING_SESSION,BLOCKING_INSTANCE from v$session where sid=608;
BLOCKING_SESSION BLOCKING_INSTANCE
---------------- -----------------
            1771                 1

SQL> select BLOCKING_SESSION,BLOCKING_INSTANCE from v$session where sid=1771;
BLOCKING_SESSION BLOCKING_INSTANCE
---------------- -----------------

Note:
Blocking chain: enq: SQ - contention (N sessions) >>> enq: SQ - contention (sid=608) >>> log file switch (checkpoint incomplete) (sid=1771, LGWR).
Most of the enq: SQ - contention sessions (600+) are blocked by sid 608, sid 608 is in turn blocked by sid 1771, and sid 1771 is the LGWR (log writer) background process, whose wait event is log file switch (checkpoint incomplete).
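Instead of chasing BLOCKING_SESSION by hand, the chain can also be walked in one query; a minimal sketch starting from the blocked session 608 and following the chain up to the final blocker (on RAC, use gv$session and include inst_id if the blocker sits on another instance):

SQL> select level, sid, event, blocking_session, blocking_instance
  2    from v$session
  3   start with sid = 608
  4  connect by prior blocking_session = sid;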

3. What is log file switch (checkpoint incomplete)

I briefly summarized the log file switch (checkpoint incomplete) wait event a few years ago. When Oracle sessions change buffer blocks in the buffer cache, their processes copy the corresponding redo information into the redo log buffer. For several reasons (the 3-second timeout, commits, …) LGWR writes that redo from the log buffer to the online redo log files. The online redo log files are written one after another in a closed loop: when the current log file fills up, LGWR must switch to the next online redo log group before it can continue writing. If the checkpoint covering that next group has not yet completed, the switch has to wait, and sessions see the wait event "log file switch (checkpoint incomplete)".

Oracle must wait until the checkpoint position has advanced past the online log file it wants to reuse. While the database waits for the checkpoint, redo generation stops until the log switch completes, and at that point effectively every session in the instance is waiting. This is also why the "stable" connection count was an illusion: non-SYSDBA users simply could not log in any more. When a checkpoint occurs, DBWn writes the dirty blocks to disk, and the headers of the data files and control files are updated to record the checkpoint details; updating the headers is done by the CKPT process, which never writes data blocks itself -- DBWn always performs that task. For more information see Master Note: Overview of Database Checkpoints (Doc ID 1490838.1).
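A minimal sketch of where the checkpoint position can be read: a log group can only return to INACTIVE (and be reused) once the thread checkpoint SCN has advanced past all of the redo in that group, so comparing it with FIRST_CHANGE# in v$log (queried below) shows how far behind the checkpoint is:

SQL> select thread#, checkpoint_change#, checkpoint_time
  2    from v$thread;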

Since log switches can no longer complete, the next things to check are: is the database in archivelog mode? The archive destination and its space usage, the current status of the log groups, the log file size, the log switch frequency, I/O performance, and so on.
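A minimal sketch of the archiving-side checks just listed (archive mode, destination status, FRA space); the log status and switch frequency are queried right after:

SQL> archive log list

SQL> select dest_name, status, error
  2    from v$archive_dest
  3   where status <> 'INACTIVE';

SQL> select name,
  2         space_limit/1024/1024 space_limit_mb,
  3         space_used/1024/1024  space_used_mb
  4    from v$recovery_file_dest;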

SQL> select group#,thread#,SEQUENCE#  ,STATUS,FIRST_CHANGE# from v$log;

         GROUP#         THREAD#       SEQUENCE# STATUS             FIRST_CHANGE#
--------------- --------------- --------------- ---------------- ---------------
              1               1           10821 ACTIVE               29794542995
              2               1           10822 ACTIVE               29795882841
              3               1           10823 ACTIVE               29797766437
              4               1           10824 CURRENT              29798824118
              5               1           10820 ACTIVE               29792473826
             11               2           13634 INACTIVE             29794540448
             12               2           13635 INACTIVE             29796006289
             13               2           13636 INACTIVE             29798244262
             14               2           13637 INACTIVE             29799746620
             15               2           13638 CURRENT              29801862030

set lines 120; 
set pages 999; 
SELECT 
to_char(first_time,'YYYY-MON-DD') day,
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') "23"
from
v$log_history
WHERE first_time>SYSDATE-7
GROUP by 
to_char(first_time,'YYYY-MON-DD');
       DAY      00  01  02  03  04  05  06  07  08  09  10  11  12  13  14  15  16  17  18  19  20  21  22  23
       -----    --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- -- --- --- --- --- ---  
2018-4月 -15	  0	  0	  0	  0	  0	  0	  0	  0	  0	  0	  0	  0	  0	  1	  1	  2	  2	  1	  2	  1	  1	  1	  1	  1
2018-4月 -16	  1	  4	  1	  2	  0	  2	  2	  1	  2	  3	  4	  4	  3	  3	  2	  4	  3	  2	  3	  1	  1	  1	  2	  0
2018-4月 -17	  2	  3	  2	  0	  2	  1	  2	  1	  2	  3	  4	  4	  2	  3	  2	  3	  4	  2	  2	  2	  1	  1	  1	  1
2018-4月 -18	  1	  4	  1	  2	  0	  2	  2	  4	  3	  3	  4	  3	  2	  1	  4	  3	  3	  2	  2	  1	  2	  0	  2	  1
2018-4月 -19	  1	  5	  1	  2	  0	  2	  1	  3	  3	  4	  3	  4	  2	  2	  3	  3	  3	  3	  2	  1	  2	  0	  2	  0
2018-4月 -20	  2	  5	  1	  1	  1	  2	  2	  2	  3	  3	  4	  3	  4	  2	  2	  4	  3	  3	  2	  0	  2	  2	  0	  2
2018-4月 -21	  0	  2	  2	  2	  1	  1	  3	  0	  0	  1	  0	  0	  0	  0	  1	  0	  0	  0	  0	  3	  2	 28	  2	  2
2018-4月 -22	  2	  2	  2	  1	  1	  3	  3	  2	  3	  3	  2	  1	  0	  0	  0	  0	  0	  0	  0	  0	  0	  0	  0	  0

SQL> SELECT EVENT,BLOCKING_SESSION,STATUS FROM V$SESSION WHERE PADDR IN(select ADDR from v$process WHERE PNAME LIKE 'CKP%')
  2  ;

EVENT                                                            BLOCKING_SESSION STATUS
---------------------------------------------------------------- ---------------- --------
rdbms ipc message                                                                 ACTIVE
SQL> SELECT EVENT,BLOCKING_SESSION,STATUS FROM V$SESSION WHERE PADDR IN(select ADDR from v$process WHERE PNAME LIKE 'DBW%');
EVENT                                                            BLOCKING_SESSION STATUS
---------------------------------------------------------------- ---------------- --------
rdbms ipc message                                                                 ACTIVE
rdbms ipc message                                                                 ACTIVE
rdbms ipc message                                                                 ACTIVE
rdbms ipc message                                                                 ACTIVE
rdbms ipc message                                                                 ACTIVE
rdbms ipc message                                                                 ACTIVE
rdbms ipc message                                                                 ACTIVE
...
rdbms ipc message                                                                 ACTIVE
rdbms ipc message                                                                 ACTIVE
rdbms ipc message                                                                 ACTIVE

32 rows selected.

Note:

All redo log groups of thread 1 (instance 1) are ACTIVE, so none of them can be reused. Judging from the log switch frequency, the log size and the number of online redo log groups are not the bottleneck. A manually issued checkpoint hung for half an hour without making progress. The quickest way to recover was to add a few more log groups first, so that redo could be written again and the instance could resume serving the application, and then analyze the root cause.
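For reference, a minimal sketch of the manual checkpoint attempt together with a way to tell whether DBWn is making any progress while it "hangs" (sample the statistics twice; if the values do not move, the checkpoint is genuinely stuck):

SQL> alter system checkpoint;   -- hung for about half an hour in this case

SQL> select name, value
  2    from v$sysstat
  3   where name in ('DBWR checkpoint buffers written', 'physical writes');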

4. Temporary workaround

SQL> alter database add logfile thread 1 '+DATA' size 2g;

Database altered.

After temporarily adding a few log groups to thread 1 (instance 1), the instance accepted connections again and the application recovered.
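Once the root cause is removed and the temporary groups go INACTIVE, they can be dropped again; a minimal sketch (group 6 is a hypothetical group number):

SQL> select group#, thread#, status from v$log where thread# = 1;

SQL> alter database drop logfile group 6;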

5. Log collection, hanganalyze, systemstate dump and short_stack

The OS reported no errors, and CPU and memory were idle. (Multipathing was in use, and we also wondered whether an individual storage path had hung I/O, but this could not be confirmed from the host.)
There were no errors in the alert log apart from an earlier ORA-20 (maximum number of processes exceeded); AWR had not collected any data for several hours; the LGWR, DBWR and CKPT trace files were also examined.
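A minimal sketch of locating the alert log directory and the background-process trace files that were examined:

SQL> select name, value
  2    from v$diag_info
  3   where name in ('Diag Trace', 'Default Trace File');

SQL> select pname, tracefile
  2    from v$process
  3   where pname in ('LGWR', 'CKPT')
  4      or pname like 'DBW%';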

SQL> oradebug setmypid
Statement processed.
SQL> oradebug unlimit
Statement processed.
SQL> oradebug dump hanganalyze 3;
Statement processed.
SQL> oradebug dump hanganalyze 3;
Statement processed.
SQL> oradebug dump hanganalyze 3;
Statement processed.
SQL> oradebug tracefile_name;

-- Hanganalyze trace 
Chains most likely to have caused the hang:
 [a] Chain 1 Signature: 'rdbms ipc message'<='log file switch (checkpoint incomplete)'
     Chain 1 Signature Hash: 0xfb14fb68
 [b] Chain 2 Signature: 'rdbms ipc message'<='write complete waits'<='buffer busy waits'
     Chain 2 Signature Hash: 0x768e7402
 [c] Chain 3 Signature: 'rdbms ipc message'<='log file switch (checkpoint incomplete)'<='buffer busy waits'
     Chain 3 Signature Hash: 0x4110dae0

At the same time you can quickly take short stacks of a few key processes, such as CKPT:

SQL> oradebug setorapid 60
Oracle pid: 60, Unix process pid: 8931, image: oracle@fwptdb01 (CKPT)
SQL> oradebug short_stack;
ksedsts()+380<-ksdxfstk()+52<-ksdxcb()+3592<-sspuser()+140<-__sighndlr()+12<-call_user_handler()+868<-sigacthandler()+92<-_syscall6()+32<-sskgpwwait()+236<-ksliwat()+1952<-kslwait()+240<-ksarcv()+212<-ksbabs()+764<-ksbrdp()+1720<-opirip()+1680<-opidrv()+748<-sou2o()+88<-opimai_real()+512<-ssthrdmain()+324<-main()+316<-_start()+380
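The section title also mentions a systemstate dump; a minimal sketch of collecting one is shown below (level 266 includes process short stacks; on an instance with this many sessions it can be expensive, so use it with care or confirm with Oracle Support which level they want):

SQL> oradebug setmypid
SQL> oradebug unlimit
SQL> oradebug dump systemstate 266
SQL> oradebug tracefile_name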

This was the first time the problem had appeared in this environment. Further checks confirmed that AMM was not enabled and ASMM was in use, with SGA_TARGET set to 500G, roughly 400G of buffer cache and a 96G shared pool. An SGA this large is still fairly uncommon, and big memory brings its own kind of trouble.
It was also confirmed that Solaris OS DISM had been disabled.
Repeated manual checkpoints would not complete, and the newly added log groups only bought time; the existing groups still could not be reused. So the database instance was restarted that night, and everything returned to normal after the restart.
A bug was suspected, but nothing matching was found among the known bugs. Later, however, we learned that this database used the FLASH BUFFER (Database Smart Flash Cache). The flash buffer is rarely used and has quite a few bugs; one with the same symptoms as this case is
Bug 22083366 CKPT hang blocked by flash buffer in RAC. It could not be confirmed, so I am only recording it here, but disabling the flash buffer is still recommended.
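A minimal sketch of checking whether the Database Smart Flash Cache is configured and of disabling it (depending on version and platform, changing the size back from 0 may require an instance restart; test before applying in production):

SQL> show parameter db_flash_cache

SQL> alter system set db_flash_cache_size = 0;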

Summary

Because the instance could not complete a checkpoint, the redo logs stayed in ACTIVE status; once every log group was ACTIVE, all transactions hung and new sessions could not connect. The usual countermeasures are to make sure the log groups are large enough, add log groups, tune checkpointing, ensure stable I/O performance or move to faster I/O devices, and check for related bugs. For the symptoms at hand, temporarily adding log groups bought time, and restarting the instance released all of the database's process resources and restored service.
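As one concrete example of "tuning checkpointing": a non-zero fast_start_mttr_target enables incremental checkpointing, which keeps DBWn writing dirty buffers continuously so the checkpoint position stays ahead of log switches. A minimal sketch (the value of 300 seconds is only illustrative and should be sized against v$instance_recovery):

SQL> show parameter fast_start_mttr_target

SQL> select recovery_estimated_ios, estimated_mttr, target_mttr
  2    from v$instance_recovery;

SQL> alter system set fast_start_mttr_target = 300;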
