
A Thorny Road: RAC Installation on NFS

Over the past couple of days I set up a RAC on NFS for fun. With no better hardware available, I used three PCs with 2 GB of RAM each, and each PC had only a single physical NIC, but the installation eventually succeeded. Below are some of the small problems I hit along the way.

CentOS 5.8 x64 + Oracle Grid Infrastructure / Database 11.2.0.3

I created a virtual alias on the single physical NIC to serve as the private network, used one machine as the NFS server, and the other two PCs as the two cluster nodes. The two nodes use a shared ORACLE_HOME (Oracle recommends a local home). Shared mode means one ORACLE_HOME shared by all nodes, which uses little space; I saw someone report installing a 6-node cluster in only half an hour using this mode.
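For reference, a sketch of what the client-side NFS mounts for a setup like this might look like. The server name and export paths are hypothetical; the mount options (hard, TCP, NFSv3, attribute caching disabled) are the set commonly cited for Oracle homes and database files on Linux NFS:

```
# /etc/fstab on each RAC node -- "nfsserver" and the export paths are placeholders.
# Hard NFSv3 mounts over TCP with actimeo=0 are commonly recommended for
# shared Oracle software and datafiles.
nfsserver:/shared_grid  /u01/app/11.2.0/grid  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nfsserver:/shared_data  /u01/oradata          nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
```

Check Oracle's installation guide for your exact version before copying these options.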

For installation details, see other sites (there is a very good article on oracle-base.com).

After a successful install, the configuration looks like this.

[root@rac1 ~]# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:24:1D:31:CC:75  
          inet addr:192.168.168.33  Bcast:192.168.168.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:24124544 errors:0 dropped:0 overruns:0 frame:0
          TX packets:28754493 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:7668946830 (7.1 GiB)  TX bytes:19450598151 (18.1 GiB)
          Interrupt:233 Base address:0xe000 

eth0:0    Link encap:Ethernet  HWaddr 00:24:1D:31:CC:75  
          inet addr:192.168.8.33  Bcast:192.168.8.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:233 Base address:0xe000 

eth0:1    Link encap:Ethernet  HWaddr 00:24:1D:31:CC:75  
          inet addr:169.254.82.195  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:233 Base address:0xe000 

eth0:2    Link encap:Ethernet  HWaddr 00:24:1D:31:CC:75  
          inet addr:192.168.168.233  Bcast:192.168.168.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:233 Base address:0xe000 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:77044 errors:0 dropped:0 overruns:0 frame:0
          TX packets:77044 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:31594833 (30.1 MiB)  TX bytes:31594833 (30.1 MiB)

[root@rac1 ~]# 
[root@rac1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1              localhost.localdomain localhost

# 80  NFS SERVER
192.168.168.80          www.test.com test.com tel.test.com

192.168.168.33          rac1.localdomain rac1
192.168.168.34          rac2.localdomain rac2
192.168.8.33            rac1-priv.localdomain rac1-priv
192.168.8.34            rac2-priv.localdomain rac2-priv
192.168.168.233         rac1-vip.localdomain rac1-vip
192.168.168.234         rac2-vip.localdomain rac2-vip
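For completeness, the server-side exports on the NFS server (192.168.168.80 above) might look roughly like this; the export paths are hypothetical:

```
# /etc/exports on the NFS server -- export paths are placeholders.
# no_root_squash lets root on the RAC nodes own Oracle-created files;
# insecure_locks relaxes lock authentication for older clients.
/shared_grid  192.168.168.0/24(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_data  192.168.168.0/24(rw,sync,no_wdelay,insecure_locks,no_root_squash)
```

Run `exportfs -ra` after editing to re-export.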

Problem 1: during the GRID installation, root.sh failed with PKI-02003 and PKI-02002.
Resolution ($ORACLE_HOME=/u01/app/11.2.0/grid):

1. /u01/app/11.2.0/grid/root.sh -- on node2 (not sure whether this step is essential)
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

2. /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose -- on node1

3. /u01/app/11.2.0/grid/root.sh -- try again on node1

At the time I ran root.sh on node2 first only to check whether node2 hit the same problem, and running root.sh on the nodes in the wrong order is what later caused Problem 3.

Problem 2: DBCA failed with
ORA-10997: another startup/shutdown operation of this instance in progress
ORA-09968: unable to lock file
Linux Error: 37: No locks available

This is an NFS lock-consistency issue that only occurs on NFS. The fix is to start the NFS lock service on both the NFS server and the clients.
An explanation of NFS locking:

The Network File System (NFS) is a client/server protocol that allows a directory hierarchy located on an NFS server to be mounted on one or more NFS clients. Once this is done, the NFS client can transparently access the NFS server files. The NFS server-side daemons arbitrate simultaneous access by multiple clients. Entire files, or individual regions of files, can be locked by a client to avoid race conditions caused by concurrent modification or by viewing partial updates.

[root@rac1 ~]# /etc/init.d/nfslock  start
Starting NFS statd:                                        [  OK  ]
[root@rac1 ~]# chkconfig --list nfslock
nfslock         0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@rac1 ~]# chkconfig --level 35 nfslock on
[root@rac1 ~]# chkconfig --list nfslock       
nfslock         0:off   1:off   2:off   3:on    4:off   5:on    6:off
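To confirm the lock manager is actually registered after starting nfslock, you can query the portmapper on both the server and the clients. This is just a diagnostic sketch; the exact port numbers in the output will vary by host:

```shell
# Check that the NFS lock manager (nlockmgr) and status daemon (status)
# are registered with the portmapper; no output means locking is not available.
rpcinfo -p | egrep 'nlockmgr|status'
```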

After DBCA created the database successfully, check the RAC status.

[oracle@rac1 database]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[oracle@rac1 database]$  srvctl status nodeapps
VIP rac1-vip is enabled
VIP rac1-vip is running on node: rac1
VIP rac2-vip is enabled
VIP rac2-vip is running on node: rac2
Network is enabled
Network is running on node: rac2
Network is running on node: rac1
GSD is disabled
GSD is not running on node: rac2
GSD is not running on node: rac1
ONS is enabled
ONS daemon is running on node: rac2
ONS daemon is running on node: rac1


[oracle@rac1 database]$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.asm
               OFFLINE OFFLINE      rac1                                         
               OFFLINE OFFLINE      rac2                     Instance Shutdown   
ora.gsd
               OFFLINE OFFLINE      rac1                                         
               OFFLINE OFFLINE      rac2                                         
ora.net1.network
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ons
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                                         
ora.cvu
      1        ONLINE  ONLINE       rac2                                         
ora.oc4j
      1        ONLINE  ONLINE       rac2                                         
ora.rac.db
      1        ONLINE  ONLINE       rac1                     Open                
      2        ONLINE  ONLINE       rac2                     Open                
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.scan1.vip
      1        ONLINE  ONLINE       rac2  


[oracle@rac1 database]$ srvctl status database -d rac
Instance rac2 is running on node rac1
Instance rac1 is running on node rac2

Problem 3: the instance-to-node mapping is crossed. Instance 2 is running on node 1, and instance 1 on node 2.

Resolution:

[oracle@rac1 ~]$ srvctl stop database -d rac
[oracle@rac1 ~]$ srvctl modify instance -d rac -i rac1 -n rac1
PRCD-1069 : Failed to modify instance rac1 on node rac1 of database rac
PRCD-1051 : Failed to add instance to database rac
PRCD-1023 : The instance has already been added on node rac1 of database rac
[oracle@rac1 ~]$ srvctl remove instance -d rac -i rac1
Remove the instance from the database rac? (y/[n]) yes
PRKO-2007 : Invalid instance name: rac1
[oracle@rac1 ~]$ srvctl modify instance -d rac -i rac1 -n rac1
PRKO-3032 : Invalid instance name: rac1
[oracle@rac1 ~]$ srvctl modify instance -d rac -i rac2 -n rac2 
[oracle@rac1 ~]$ srvctl add instance -d rac -i rac1 -n rac1
[oracle@rac1 ~]$ srvctl status database -d rac
Instance rac1 is not running on node rac1
Instance rac2 is not running on node rac2
[oracle@rac1 ~]$ srvctl start database -d rac

[oracle@rac1 ~]$ srvctl status database -d rac
Instance rac1 is running on node rac1
Instance rac2 is running on node rac2
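Reconstructed from the trial and error above, the sequence that actually worked boils down to the following. This is a sketch specific to this environment (database name rac, run as the oracle user) and can only run where Oracle Clusterware is installed:

```shell
# Fix a crossed instance-to-node mapping (database name "rac").
srvctl stop database -d rac                     # stop both instances first
srvctl remove instance -d rac -i rac1           # drop the stale rac1 registration
srvctl modify instance -d rac -i rac2 -n rac2   # repoint instance rac2 to node rac2
srvctl add instance -d rac -i rac1 -n rac1      # re-register rac1 on node rac1
srvctl start database -d rac
srvctl status database -d rac                   # verify the mapping is correct
```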

SQL> select inst_id, instance_name, status, database_status from gv$instance;

   INST_ID INSTANCE_NAME    STATUS       DATABASE_STATUS
---------- ---------------- ------------ -----------------
         1 rac1             OPEN         ACTIVE
         2 rac2             OPEN         ACTIVE

Summary:
Unless you are really bored, don't waste your precious time on this.

