
Oracle crash on VMware , OS watchdog show “io_schedule” “__blockdev_direct_IO_newtrunc”

A colleague recently ran into a case in which an Oracle database kept restarting, with the database alert log full of background processes hanging on IO. The environment was VMware, RHEL 6, dm-multipath, and Huawei storage. During the problem window, iostat showed 0 IO on the shared-storage devices while %util sat at 100%. There are several similar cases; based on the call stack traces in the OS watchdog logs, a few suggested directions are briefly recorded here.
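The iostat signature described above (devices completing no IO while %util sits at 100) comes from `iostat -x`. A minimal sketch of filtering for that pattern, using simplified columns and illustrative sample data rather than the original output:

```shell
# Flag devices with ~0 r/s + w/s but %util >= 100: the hallmark of IO that
# is queued to the device and never completing. Real input would come from
# `iostat -x`; here the columns are simplified to: device  r/s  w/s  %util
sample='sda 3.40 2.10 12.00
dm-3 0.00 0.00 100.00'

printf '%s\n' "$sample" | awk '$2 + $3 == 0 && $4 >= 100 {print $1}'
# -> dm-3
```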

The OS log shows:

Nov 11 20:46:24 anbob1 kernel: oracle D 0000000000000001 0 5265 1 0x00000084
Nov 11 20:46:24 anbob1 kernel: ffff882ed3b27b18 0000000000000086 0000000000000000 ffff8817a8a56800
Nov 11 20:46:24 anbob1 kernel: ffffea009f764e38 000000007b53e2c5 0000000000000000 ffff881291cac080
Nov 11 20:46:24 anbob1 kernel: ffff882f17ef85f8 ffff882ed3b27fd8 000000000000fb88 ffff882f17ef85f8
Nov 11 20:46:24 anbob1 kernel: Call Trace:
Nov 11 20:46:24 anbob1 kernel: [<ffffffff814fdfc3>] io_schedule+0x73/0xc0
Nov 11 20:46:24 anbob1 kernel: [<ffffffff811b663e>] __blockdev_direct_IO_newtrunc+0x6fe/0xb90
Nov 11 20:46:24 anbob1 kernel: [<ffffffff8118fec0>] ? pollwake+0x0/0x60
Nov 11 20:46:24 anbob1 kernel: [<ffffffff811b6b2e>] __blockdev_direct_IO+0x5e/0xd0
Nov 11 20:46:24 anbob1 kernel: [<ffffffff811b33e0>] ? blkdev_get_blocks+0x0/0xc0
Nov 11 20:46:24 anbob1 kernel: [<ffffffff811b4247>] blkdev_direct_IO+0x57/0x60
Nov 11 20:46:24 anbob1 kernel: [<ffffffff811b33e0>] ? blkdev_get_blocks+0x0/0xc0
Nov 11 20:46:24 anbob1 kernel: [<ffffffff81115fcb>] generic_file_aio_read+0x6bb/0x700
Nov 11 20:46:24 anbob1 kernel: [<ffffffff814278b3>] ? move_addr_to_user+0x93/0xb0
Nov 11 20:46:24 anbob1 kernel: [<ffffffff8117aeaa>] do_sync_read+0xfa/0x140
Nov 11 20:46:24 anbob1 kernel: [<ffffffff810920d0>] ? autoremove_wake_function+0x0/0x40
Nov 11 20:46:24 anbob1 kernel: [<ffffffff8121fd8b>] ? selinux_file_permission+0xfb/0x150
Nov 11 20:46:24 anbob1 kernel: [<ffffffff81213136>] ? security_file_permission+0x16/0x20
Nov 11 20:46:24 anbob1 kernel: [<ffffffff8117b8b5>] vfs_read+0xb5/0x1a0
Nov 11 20:46:24 anbob1 kernel: [<ffffffff810d69e2>] ? audit_syscall_entry+0x272/0x2a0
Nov 11 20:46:24 anbob1 kernel: [<ffffffff8117bbe2>] sys_pread64+0x82/0xa0
Nov 11 20:46:24 anbob1 kernel: [<ffffffff8100b0f2>] system_call_fastpath+0x16/0x1b

Suggestion:
Contact VMware support to verify whether there is a hypervisor-level problem preventing IO requests to the VMware virtual disks from completing. Are there any error messages about the virtual disks in the hypervisor logs?
The Oracle process is stuck in the UN (uninterruptible sleep) state, waiting for IO on a VMware virtual disk (or shared storage) to complete, but it cannot complete the IO because of a connectivity problem with that disk. Since the kernel.hung_task_panic flag is enabled, the khungtaskd kernel thread crashes the system as soon as it finds an Oracle process stuck in the UN state for more than 120 seconds.
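The watchdog behaviour described above is governed by two standard kernel sysctls. A sketch of an /etc/sysctl.conf fragment that keeps the hung-task warning but stops the panic (a mitigation only, since the process is still stuck on IO; the values shown are the usual RHEL 6 defaults):

```
# /etc/sysctl.conf -- hung-task watchdog knobs (apply with: sysctl -p)
kernel.hung_task_timeout_secs = 120   # how long a task may stay in UN state
kernel.hung_task_panic = 0            # 0 = log only, 1 = panic the system
```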

— Original article at www.anbob.com; please credit the source when reposting —

If, instead, the log looks like the following, the cause may be multipathing:

Jul 9 16:56:42,node1,[daemon.notice],multipathd:, 36006016061602d008296e08eb281e211: remaining active paths: 0
[...]
Jul 9 17:00:12,node1,[kern.err],kernel:,INFO: task qdiskd:24750 blocked for more than 120 seconds.
Jul 9 17:00:12,node1,[kern.err],kernel:,"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 9 17:00:12,node1,[kern.info],kernel:,qdiskd D 0000000000000004 0 24750 1 0x00000000
Jul 9 17:00:12,node1,[kern.warning],kernel:, ffff8804706dfb18 0000000000000082 0000000000000000 ffffffffa00041fc
Jul 9 17:00:12,node1,[kern.warning],kernel:, ffff8804706dfae8 00000000e4bd82fc ffff8804706dfb08 ffff880875e5ad80
Jul 9 17:00:12,node1,[kern.warning],kernel:, ffff8804726ad0f8 ffff8804706dffd8 000000000000f4e8 ffff8804726ad0f8
Jul 9 17:00:12,node1,[kern.warning],kernel:,Call Trace:
Jul 9 17:00:12,node1,[kern.warning],kernel:, [] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Jul 9 17:00:12,node1,[kern.warning],kernel:, [] io_schedule+0x73/0xc0
Jul 9 17:00:12,node1,[kern.warning],kernel:, [] __blockdev_direct_IO_newtrunc+0x6fe/0xb90
Jul 9 17:00:12,node1,[kern.warning],kernel:, [] __blockdev_direct_IO+0x5e/0xd0
Jul 9 17:00:12,node1,[kern.warning],kernel:, [] ? blkdev_get_blocks+0x0/0xc0
Jul 9 17:00:12,node1,[kern.warning],kernel:, [] blkdev_direct_IO+0x57/0x60
Jul 9 17:00:12,node1,[kern.warning],kernel:, [] ? blkdev_get_blocks+0x0/0xc0
Jul 9 17:00:12,node1,[kern.warning],kernel:, [] generic_file_aio_read+0x6bb/0x700
Jul 9 17:00:12,node1,[kern.warning],kernel:, [] ? futex_requeue+0x310/0x890
Jul 9 17:00:12,node1,[kern.warning],kernel:, [] do_sync_read+0xfa/0x140
Jul 9 17:00:12,node1,[kern.warning],kernel:, [] ? autoremove_wake_function+0x0/0x40
Jul 9 17:00:12,node1,[kern.warning],kernel:, [] ? unmap_region+0x110/0x130
Jul 9 17:00:12,node1,[kern.warning],kernel:, [] ? security_file_permission+0x16/0x20
Jul 9 17:00:12,node1,[kern.warning],kernel:, [] vfs_read+0xb5/0x1a0
Jul 9 17:00:12,node1,[kern.warning],kernel:, [] sys_read+0x51/0x90
Jul 9 17:00:12,node1,[kern.warning],kernel:, [] system_call_fastpath+0x16/0x1b

Suggestion:
It appears the devices were somehow removed entirely, leading to SCSI_DH_NOSYS errors from scsi_dh_alua and to qdiskd getting stuck in dm_table_unplug_all(). The main hypothesis is that the device removal itself is the problem; a likely cause is that in RHEL 6 a device is deleted when its paths are lost. Changing dev_loss_tmo to "infinity" and giving fast_io_fail_tmo a value of "30" appears to resolve the issue.
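The fix above can be sketched as a multipath.conf fragment (parameter names as in RHEL 6 device-mapper-multipath; whether to set them in the defaults section or a per-array devices section depends on the environment):

```
# /etc/multipath.conf -- keep dm devices when all paths are lost
defaults {
    dev_loss_tmo     infinity   # never remove the underlying SCSI device
    fast_io_fail_tmo 30         # fail IO up to dm-multipath after 30 seconds
}
```

After editing, reload multipathd (e.g. `service multipathd reload`) and verify the settings with `multipath -ll`.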

And if the log is like the following:

Apr 26 02:25:01 kernel: INFO: task lvcreate:15067 blocked for more than 120 seconds.
Apr 26 02:25:01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 26 02:25:01 kernel: lvcreate      D 0000000000000000     0 15067  15066 0x00000080
Apr 26 02:25:01 kernel: ffff8801b385bac8 0000000000000082 ffff8801b385ba88 ffffffffa00043fc
Apr 26 02:25:01 kernel: ffff8801b385ba98 00000000628b5d4f ffff8801b385bab8 ffff880194e7d180
Apr 26 02:25:01 kernel: ffff88061fd51ab8 ffff8801b385bfd8 000000000000fb88 ffff88061fd51ab8
Apr 26 02:25:01 kernel: Call Trace:
Apr 26 02:25:01 kernel: [<ffffffffa00043fc>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr 26 02:25:01 kernel: [<ffffffff810a2431>] ? ktime_get_ts+0xb1/0xf0
Apr 26 02:25:01 kernel: [<ffffffff8150e8c3>] io_schedule+0x73/0xc0
Apr 26 02:25:01 kernel: [<ffffffff811bed0e>] __blockdev_direct_IO_newtrunc+0x6de/0xb30
Apr 26 02:25:01 kernel: [<ffffffff811bf1be>] __blockdev_direct_IO+0x5e/0xd0
Apr 26 02:25:01 kernel: [<ffffffff811bb590>] ? blkdev_get_blocks+0x0/0xc0
Apr 26 02:25:01 kernel: [<ffffffff811bc657>] blkdev_direct_IO+0x57/0x60
Apr 26 02:25:01 kernel: [<ffffffff811bb590>] ? blkdev_get_blocks+0x0/0xc0
Apr 26 02:25:01 kernel: [<ffffffff8111bcab>] generic_file_aio_read+0x6bb/0x700
Apr 26 02:25:01 kernel: [<ffffffff812235d1>] ? avc_has_perm+0x71/0x90
Apr 26 02:25:01 kernel: [<ffffffff811bd029>] ? __blkdev_get+0x1a9/0x3b0
Apr 26 02:25:01 kernel: [<ffffffff811bbba3>] blkdev_aio_read+0x53/0xc0
Apr 26 02:25:01 kernel: [<ffffffff811811aa>] do_sync_read+0xfa/0x140
Apr 26 02:25:01 kernel: [<ffffffff81096da0>] ? autoremove_wake_function+0x0/0x40
Apr 26 02:25:01 kernel: [<ffffffff810d9e7d>] ? audit_filter_rules+0x2d/0xdd0
Apr 26 02:25:01 kernel: [<ffffffff81228ffb>] ? selinux_file_permission+0xfb/0x150
Apr 26 02:25:01 kernel: [<ffffffff8121bed6>] ? security_file_permission+0x16/0x20
Apr 26 02:25:01 kernel: [<ffffffff81181a95>] vfs_read+0xb5/0x1a0
Apr 26 02:25:01 kernel: [<ffffffff81181bd1>] sys_read+0x51/0x90
Apr 26 02:25:01 kernel: [<ffffffff810dc685>] ? __audit_syscall_exit+0x265/0x290
Apr 26 02:25:01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b

Suggestion:
Resume the suspended device:
# dmsetup resume <device>
It is also advisable to disable the vmtoolsd service.
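Before resuming, the suspended devices can be identified from the `State:` lines of `dmsetup info`. A minimal sketch of the filter; the captured sample below is illustrative so the pipeline is self-contained:

```shell
# Print the names of suspended device-mapper devices.
# In production the input would come from:  dmsetup info
sample='Name:              mpatha
State:             ACTIVE

Name:              vg_data-lv01
State:             SUSPENDED'

printf '%s\n' "$sample" |
    awk '/^Name:/ {n=$2} /^State:/ && $2 == "SUSPENDED" {print n}'
# -> vg_data-lv01
# Each name printed can then be resumed with:  dmsetup resume <name>
```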
