
Oracle 11g R2 RAC addnode (Adding a RAC Node): Practice and Notes

There are two ways to add a node to an 11g R2 RAC: 1, with EM; 2, manually.

I ran into quite a few problems along the way; the manual steps are below.

1, Configure the OS environment
hosts file and DNS server address
create the grid and oracle installation directories and set permissions
configure the kernel parameters
set up SSH user equivalence for the grid and oracle users
2, Install GRID (Clusterware) on the third node
3, Install the Oracle RDBMS software on the third node
4, Add an Oracle instance on the third node

let’s go

--1, Modify /etc/hosts on all nodes; add the public, VIP, and private IP addresses of the third node to /etc/hosts
[root@znode3 ~]# vi /etc/hosts

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1        localhost.localdomain localhost
::1              localhost6.localdomain6 localhost6

#for ORACLE
#public network
192.168.168.191 znode1.anbob.com  znode1
192.168.168.193 znode2.anbob.com  znode2
192.168.168.195 znode3.anbob.com  znode3

#vip network
192.168.168.192 znode1-vip.anbob.com znode1-vip
192.168.168.194 znode2-vip.anbob.com znode2-vip
192.168.168.196 znode3-vip.anbob.com znode3-vip

#private network
192.168.2.191 znode1-priv.anbob.com znode1-priv
192.168.2.193 znode2-priv.anbob.com znode2-priv
192.168.2.195 znode3-priv.anbob.com znode3-priv

#scan use DNS
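Since the SCAN is resolved through DNS here, it is worth confirming from the new node that the SCAN name returns all of its addresses. A quick check (the SCAN name below is only a placeholder; use the name registered in your DNS):

[root@znode3 ~]# nslookup rac-scan.anbob.com
Running it a few times should return the SCAN IPs in rotating (round-robin) order.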

2, Disable the NTP service on the third node and let the GI CTSS handle time synchronization (applies if the cluster was configured without NTP)
[root@znode3 ~]# /sbin/service ntpd stop
[root@znode3 ~]# chkconfig ntpd off
[root@znode3 ~]# mv /etc/ntp.conf  /etc/ntp.conf.org
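Once Grid Infrastructure is up on this node (after step 9 below), you can confirm that CTSS has taken over time synchronization; a quick check:

[grid@znode3 ~]$ crsctl check ctss
With no NTP configured it should report Active mode; if NTP is still detected it reports Observer mode.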

3, Configure the OS kernel parameters; copy the configuration files from znode2 to znode3
[root@znode2 bin]# scp /etc/sysctl.conf 192.168.168.195:/etc/sysctl.conf
[root@znode2 bin]# scp  /etc/security/limits.conf znode3:/etc/security/limits.conf
[root@znode2 bin]# scp /etc/pam.d/login znode3:/etc/pam.d/login
--on third node
sysctl -p
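For reference, the kernel settings copied over typically contain at least the following (these are the documented 11gR2 minimums and are only illustrative; the file copied from znode2 is authoritative):

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576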

4, Create directories for the grid and oracle software on the third node

[root@znode3 u01]# mkdir -p /u01/app/11.2.0/grid
[root@znode3 u01]# mkdir -p /u01/app/grid
[root@znode3 u01]# chown -R grid:oinstall /u01
[root@znode3 u01]# mkdir /u01/app/oracle
[root@znode3 u01]# chown oracle:oinstall /u01/app/oracle
[root@znode3 u01]# chmod -R 775 /u01/

5, Configure the grid and oracle users' environment variables; append to ~/.bash_profile
su - grid
export ORACLE_HOME=/u01/app/11.2.0/grid
PATH=$ORACLE_HOME/bin:$PATH ; export PATH

su - oracle
export ORACLE_SID=rac3
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/11.2.0/db1
export ORACLE_TERM=vt100
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib

6, Map the shared devices on the third node with udev (same as on znode1 and znode2)
[root@znode2 bin]# scp  /etc/udev/rules.d/60-raw.rules  znode3:/etc/udev/rules.d/60-raw.rules
--node znode3
[grid@znode3 ~]$ cat /etc/udev/rules.d/60-raw.rules 

ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdb3", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw6 %N"

ACTION=="add", KERNEL=="raw[1-6]", OWNER="grid", GROUP="asmadmin",
 MODE="0660"
start_udev
[grid@znode3 ~]$ ls -l  /dev/raw/raw*
crw-rw---- 1 grid asmadmin 162, 1 Jul 26 12:01 /dev/raw/raw1
crw-rw---- 1 grid asmadmin 162, 2 Jul 26 12:01 /dev/raw/raw2
crw-rw---- 1 grid asmadmin 162, 3 Jul 26 12:01 /dev/raw/raw3
crw-rw---- 1 grid asmadmin 162, 4 Jul 26 12:01 /dev/raw/raw4
crw-rw---- 1 grid asmadmin 162, 5 Jul 26 12:01 /dev/raw/raw5
crw-rw---- 1 grid asmadmin 162, 6 Jul 26 12:01 /dev/raw/raw6
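Before going further, make sure the bindings point at the same disks as on the existing nodes; a quick comparison (the scsi_id syntax below is the EL5-style invocation and is an assumption for this environment):

[root@znode3 ~]# raw -qa
[root@znode3 ~]# /sbin/scsi_id -g -u -s /block/sdb
Run the same commands on znode1/znode2 and compare the bindings and disk IDs.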

7, Establish user equivalence with SSH; there are two ways
[grid@znode1 .ssh]$ cd /u01/app/11.2.0/grid/deinstall/
[grid@znode1 deinstall]$ ./sshUserSetup.sh -user grid -hosts "znode1 znode2 znode3"
[grid@znode1 ~]$ cd  ~/.ssh
[grid@znode1 .ssh]$ rm *.bak
[grid@znode1 .ssh]$ rm *.backup
[grid@znode1 .ssh]$ scp * znode2:~/.ssh
[grid@znode1 .ssh]$ scp * znode3:~/.ssh

---------------- The following method is simpler -------
[oracle@znode1 bin]$ pwd
/u01/app/11.2.0/grid/oui/bin
[oracle@znode1 bin]$ ./runSSHSetup.sh -user oracle -hosts 'znode1 znode2 znode3' -advanced -exverify
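Whichever script you use, a quick loop confirms password-less SSH before moving on (run it as grid and again as oracle, from every node):

[grid@znode1 ~]$ for h in znode1 znode2 znode3; do ssh $h date; done
Every host should print its date without prompting for a password.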

8,Verify the integrity of the cluster and the new node
 [grid@znode1 deinstall]$ which cluvfy
/u01/app/11.2.0/grid/bin/cluvfy
[grid@znode1 deinstall]$ cluvfy stage -pre nodeadd -n znode3 -verbose
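If the pre-check reports OS-level failures, cluvfy in 11.2 can usually generate fixup scripts for them (assuming your cluvfy build supports the -fixup option):

[grid@znode1 deinstall]$ cluvfy stage -pre nodeadd -n znode3 -fixup -verbose
Then run the generated runfixup.sh as root on znode3 and repeat the check.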

9,Install the Oracle Database 11g Release 2 Grid Infrastructure software on the third node

--- form for a cluster that uses GNS (VIP hostnames are derived automatically); it fails here:
[grid@znode1 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={znode3}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3927 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.

SEVERE:Cannot perform add node procedure as the value of CLUSTER_NEW_VIRTUAL_HOSTNAMES or CLUSTER_NEW_NODES or both could not be obtained from the command line or response file(s). Silent install cannot continue.

-- form for a cluster whose names are in DNS: pass -silent and supply the new VIP hostname

[grid@znode1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={znode3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={znode3-vip}"

Performing pre-checks for node addition 

Checking node reachability...
Node reachability check passed from node "znode1"

Checking user equivalence...
User equivalence check passed for user "grid"

WARNING:
Node "znode3" already appears to be part of cluster

Pre-check for node addition was successful.
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3537 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.

Performing tests to see whether nodes znode2,znode3 are available
............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/11.2.0/grid
   New Nodes
Space Requirements
   New Nodes
      znode3
         /: Required 4.16GB : Available 9.52GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.3.0
      Sun JDK 1.5.0.30.03
      Installer SDK Component 11.2.0.3.0
      Oracle One-Off Patch Installer 11.2.0.1.7
      Oracle Universal Installer 11.2.0.3.0
      Oracle USM Deconfiguration 11.2.0.3.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Enterprise Manager Common Core Files 10.2.0.4.4
      Oracle DBCA Deconfiguration 11.2.0.3.0
      Oracle RAC Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Server) 11.2.0.3.0
      Installation Plugin Files 11.2.0.3.0
      Universal Storage Manager Files 11.2.0.3.0
      Oracle Text Required Support Files 11.2.0.3.0
      Automatic Storage Management Assistant 11.2.0.3.0
      Oracle Database 11g Multimedia Files 11.2.0.3.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
      Oracle Core Required Support Files 11.2.0.3.0
      Bali Share 1.1.18.0.0
      Oracle Database Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Client) 11.2.0.3.0
      Expat libraries 2.0.1.0.1
      Oracle Containers for Java 11.2.0.3.0
      Perl Modules 5.10.0.0.1
      Secure Socket Layer 11.2.0.3.0
      Oracle JDBC/OCI Instant Client 11.2.0.3.0
      Assistant Common Files 11.2.0.3.0
      PL/SQL 11.2.0.3.0
      HAS Files for DB 11.2.0.3.0
...
      Oracle Netca Client 11.2.0.3.0
      Oracle Net 11.2.0.3.0
      Oracle JVM 11.2.0.3.0
      Oracle Internet Directory Client 11.2.0.3.0
      Oracle Net Listener 11.2.0.3.0
      Cluster Ready Services Files 11.2.0.3.0
      Oracle Database 11g 11.2.0.3.0
-----------------------------------------------------------------------------

Instantiating scripts for add node (Tuesday, July 24, 2012 2:08:38 PM CST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Tuesday, July 24, 2012 2:08:41 PM CST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Tuesday, July 24, 2012 2:12:44 PM CST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/11.2.0/grid/root.sh #On nodes znode3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
Please check '/soft/temp/silentInstall.log' for more details.


[root@znode3 grid]# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@znode3 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LANG = "en_US.ZHS16GBK"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node znode1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@znode3 ~]# /u01/app/11.2.0/grid/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

TIP:
If root.sh fails, you can deconfigure with the following command and then re-run root.sh:
/u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force

10, Verify the Grid (Clusterware) installation
[grid@znode3 ~]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.CRSDG.dg   ora....up.type ONLINE    ONLINE    znode1      
ora.DBDG.dg    ora....up.type ONLINE    ONLINE    znode1      
ora.FLRV.dg    ora....up.type ONLINE    ONLINE    znode1      
ora....ER.lsnr ora....er.type ONLINE    ONLINE    znode1      
ora....N1.lsnr ora....er.type ONLINE    ONLINE    znode1      
ora....N2.lsnr ora....er.type ONLINE    ONLINE    znode3      
ora....N3.lsnr ora....er.type ONLINE    ONLINE    znode2      
ora.asm        ora.asm.type   ONLINE    ONLINE    znode1      
ora.cvu        ora.cvu.type   ONLINE    ONLINE    znode2      
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE               
ora....network ora....rk.type ONLINE    ONLINE    znode1      
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    znode2      
ora.ons        ora.ons.type   ONLINE    ONLINE    znode1      
ora.rac.db     ora....se.type ONLINE    ONLINE    znode2      
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    znode1      
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    znode3      
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    znode2      
ora....SM1.asm application    ONLINE    ONLINE    znode1      
ora....E1.lsnr application    ONLINE    ONLINE    znode1      
ora.znode1.gsd application    OFFLINE   OFFLINE               
ora.znode1.ons application    ONLINE    ONLINE    znode1      
ora.znode1.vip ora....t1.type ONLINE    ONLINE    znode1      
ora....SM2.asm application    ONLINE    ONLINE    znode2      
ora....E2.lsnr application    ONLINE    ONLINE    znode2      
ora.znode2.gsd application    OFFLINE   OFFLINE               
ora.znode2.ons application    ONLINE    ONLINE    znode2      
ora.znode2.vip ora....t1.type ONLINE    ONLINE    znode2      
ora....SM3.asm application    ONLINE    ONLINE    znode3      
ora....E3.lsnr application    ONLINE    ONLINE    znode3      
ora.znode3.gsd application    OFFLINE   OFFLINE               
ora.znode3.ons application    ONLINE    ONLINE    znode3      
ora.znode3.vip ora....t1.type ONLINE    ONLINE    znode3    


[grid@znode3 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDG.dg
               ONLINE  ONLINE       znode1                                       
               ONLINE  ONLINE       znode2                                       
               ONLINE  ONLINE       znode3                                       
ora.DBDG.dg
               ONLINE  ONLINE       znode1                                       
               ONLINE  ONLINE       znode2                                       
               ONLINE  ONLINE       znode3                                       
ora.FLRV.dg
               ONLINE  ONLINE       znode1                                       
               ONLINE  ONLINE       znode2                                       
               ONLINE  ONLINE       znode3                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       znode1                                       
               ONLINE  ONLINE       znode2                                       
               ONLINE  ONLINE       znode3                                       
ora.asm
               ONLINE  ONLINE       znode1                   Started             
               ONLINE  ONLINE       znode2                   Started             
               ONLINE  ONLINE       znode3                   Started             
ora.gsd
               OFFLINE OFFLINE      znode1                                       
               OFFLINE OFFLINE      znode2                                       
               OFFLINE OFFLINE      znode3                                       
ora.net1.network
               ONLINE  ONLINE       znode1                                       
               ONLINE  ONLINE       znode2                                       
               ONLINE  ONLINE       znode3                                       
ora.ons
               ONLINE  ONLINE       znode1                                       
               ONLINE  ONLINE       znode2                                       
               ONLINE  ONLINE       znode3                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       znode1                                       
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       znode3                                       
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       znode2                                       
ora.cvu
      1        ONLINE  ONLINE       znode2                                       
ora.oc4j
      1        ONLINE  ONLINE       znode2                                       
ora.rac.db
      1        ONLINE  ONLINE       znode1                   Open                
      2        ONLINE  ONLINE       znode2                   Open                
ora.scan1.vip
      1        ONLINE  ONLINE       znode1                                       
ora.scan2.vip
      1        ONLINE  ONLINE       znode3                                       
ora.scan3.vip
      1        ONLINE  ONLINE       znode2                                       
ora.znode1.vip
      1        ONLINE  ONLINE       znode1                                       
ora.znode2.vip
      1        ONLINE  ONLINE       znode2                                       
ora.znode3.vip
      1        ONLINE  ONLINE       znode3               
      
[grid@znode3 ~]$ srvctl status nodeapps
VIP znode1-vip is enabled
VIP znode1-vip is running on node: znode1
VIP znode2-vip is enabled
VIP znode2-vip is running on node: znode2
VIP znode3-vip is enabled
VIP znode3-vip is running on node: znode3
Network is enabled
Network is running on node: znode1
Network is running on node: znode2
Network is running on node: znode3
GSD is disabled
GSD is not running on node: znode1
GSD is not running on node: znode2
GSD is not running on node: znode3
ONS is enabled
ONS daemon is running on node: znode1
ONS daemon is running on node: znode2
ONS daemon is running on node: znode3

tip:
GSD and ora.gsd have been deprecated since 10g (they only serve pre-10g clients), so it is normal for them to show OFFLINE.

[grid@znode3 ~]$ srvctl config nodeapps
Network exists: 1/192.168.168.0/255.255.255.0/eth0, type static
VIP exists: /znode1-vip/192.168.168.192/192.168.168.0/255.255.255.0/eth0, hosting node znode1
VIP exists: /znode2-vip/192.168.168.194/192.168.168.0/255.255.255.0/eth0, hosting node znode2
VIP exists: /znode3-vip/192.168.168.196/192.168.168.0/255.255.255.0/eth0, hosting node znode3
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

[grid@znode3 ~]$ crsctl check cluster -all
**************************************************************
znode1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
znode2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
znode3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[grid@znode3 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node znode1
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node znode3
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node znode2

[grid@znode3 ~]$ srvctl status database -d rac
Instance rac1 is running on node znode1
Instance rac2 is running on node znode2

11,Install the Oracle Database 11g Release 2 software on the third node
[root@znode3 ~]# su - oracle
[oracle@znode3 ~]$ echo $SHELL
/bin/bash

[oracle@znode3 ~]$ vi .bash_profile
--append--
export ORACLE_SID=rac3
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/11.2.0/db1
export ORACLE_TERM=vt100
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib


[root@znode1 ~]# su - oracle
[oracle@znode1 ~]$ ssh znode3
Last login: Wed Jul 25 09:19:01 2012 from znode1.anbob.com
[oracle@znode3 ~]$ exit
logout

[oracle@znode1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@znode1 bin]$ export IGNORE_PREADDNODE_CHECKS=Y    # skip OUI's pre-addnode cluster verification checks
[oracle@znode1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={znode3}"   

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4094 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.


Performing tests to see whether nodes znode2,znode3 are available
............................................................... 100% Done.

..
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/oracle/11.2.0/db1
   New Nodes
Space Requirements
   New Nodes
      znode3
         /: Required 3.80GB : Available 6.62GB
Installed Products
   Product Names
      Oracle Database 11g 11.2.0.3.0 
      Sun JDK 1.5.0.30.03 
      Installer SDK Component 11.2.0.3.0 
      Oracle One-Off Patch Installer 11.2.0.1.7 
      Oracle Universal Installer 11.2.0.3.0 
      Oracle USM Deconfiguration 11.2.0.3.0 
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0 
      Oracle DBCA Deconfiguration 11.2.0.3.0 
      Oracle RAC Deconfiguration 11.2.0.3.0 
      Oracle Database Deconfiguration 11.2.0.3.0 
      Oracle Configuration Manager Client 10.3.2.1.0 
      Oracle Configuration Manager 10.3.5.0.1 
      Oracle ODBC Driverfor Instant Client 11.2.0.3.0 
      LDAP Required Support Files 11.2.0.3.0 
      SSL Required Support Files for InstantClient 11.2.0.3.0 
      Bali Share 1.1.18.0.0 
  ...
      Oracle Net 11.2.0.3.0 
      Oracle XML Development Kit 11.2.0.3.0 
      Database Configuration and Upgrade Assistants 11.2.0.3.0 
      Oracle JVM 11.2.0.3.0 
      Oracle Advanced Security 11.2.0.3.0 
      Oracle Internet Directory Client 11.2.0.3.0 
      Oracle Enterprise Manager Console DB 11.2.0.3.0 
      HAS Files for DB 11.2.0.3.0 
      Oracle Net Listener 11.2.0.3.0 
      Oracle Text 11.2.0.3.0 
      Oracle Net Services 11.2.0.3.0 
      Oracle Database 11g 11.2.0.3.0 
      Oracle OLAP 11.2.0.3.0 
      Oracle Spatial 11.2.0.3.0 
      Oracle Partitioning 11.2.0.3.0 
      Enterprise Edition Options 11.2.0.3.0 
-----------------------------------------------------------------------------


Instantiating scripts for add node (Wednesday, July 25, 2012 1:36:15 PM CST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Wednesday, July 25, 2012 1:36:25 PM CST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Wednesday, July 25, 2012 1:52:43 PM CST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oracle/11.2.0/db1/root.sh #On nodes znode3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
    
The Cluster Node Addition of /u01/app/oracle/11.2.0/db1 was successful.
Please check '/temp/silentInstall.log' for more details.
[oracle@znode1 bin]$ 

 
[root@znode3 ~]# /u01/app/oracle/11.2.0/db1/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/11.2.0/db1

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

[root@znode3 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl   
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LANG = "en_US.ZHS16GBK"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

12, Verify the Oracle Database software installation
[oracle@znode3 ~]$ /u01/app/11.2.0/grid/bin/cluvfy stage -post nodeadd -n znode3 -verbose
tip: some checks may fail; I ignored them here -- review the output and decide whether each failure matters in your environment.
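A couple of extra sanity checks that are quick to run at this point (a sketch):

[grid@znode3 ~]$ olsnodes -n -s -t
[grid@znode3 ~]$ srvctl config vip -n znode3
olsnodes should list znode3 as Active, and the VIP configuration should match the /etc/hosts entries above.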

13, Add a third database instance using DBCA on the first node (a manual alternative is sketched after this list)
Start an X session on znode1 and run:
$ dbca
1, Select Oracle Real Application Clusters (RAC) database
2, Select Instance Management
3, Select Add an instance
4, Enter the SYS username and password
5, Verify the existing database instances
6, Verify the instance name and the new node (rac3 on znode3 here)
7, Verify the storage configuration
8, Verify the settings and click OK
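If a GUI is not convenient, the same result can be reached manually. A minimal sketch run from an existing instance; the redo group numbers/sizes and the undo tablespace name are assumptions (it also assumes OMF, i.e. db_create_file_dest points at ASM), and znode3 still needs an initrac3.ora pointing at the shared spfile plus a password file under $ORACLE_HOME/dbs:

SQL> alter database add logfile thread 3 group 5 size 50M, group 6 size 50M;
SQL> create undo tablespace UNDOTBS3 datafile size 500M autoextend on;
SQL> alter system set instance_number=3 scope=spfile sid='rac3';
SQL> alter system set thread=3 scope=spfile sid='rac3';
SQL> alter system set undo_tablespace='UNDOTBS3' scope=spfile sid='rac3';
SQL> alter database enable public thread 3;

[oracle@znode1 ~]$ srvctl add instance -d rac -i rac3 -n znode3
[oracle@znode1 ~]$ srvctl start instance -d rac -i rac3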

14, Verify the status of the new instance
[grid@znode3 ~]$ srvctl status database -d rac
Instance rac1 is running on node znode1
Instance rac2 is running on node znode2
Instance rac3 is running on node znode3
[grid@znode3 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDG.dg
               ONLINE  ONLINE       znode1                                       
               ONLINE  ONLINE       znode2                                       
               ONLINE  ONLINE       znode3                                       
ora.DBDG.dg
               ONLINE  ONLINE       znode1                                       
               ONLINE  ONLINE       znode2                                       
               ONLINE  ONLINE       znode3                                       
ora.FLRV.dg
               ONLINE  ONLINE       znode1                                       
               ONLINE  ONLINE       znode2                                       
               ONLINE  ONLINE       znode3                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       znode1                                       
               ONLINE  ONLINE       znode2                                       
               ONLINE  ONLINE       znode3                                       
ora.asm
               ONLINE  ONLINE       znode1                   Started             
               ONLINE  ONLINE       znode2                   Started             
               ONLINE  ONLINE       znode3                   Started             
ora.gsd
               OFFLINE OFFLINE      znode1                                       
               OFFLINE OFFLINE      znode2                                       
               OFFLINE OFFLINE      znode3                                       
ora.net1.network
               ONLINE  ONLINE       znode1                                       
               ONLINE  ONLINE       znode2                                       
               ONLINE  ONLINE       znode3                                       
ora.ons
               ONLINE  ONLINE       znode1                                       
               ONLINE  ONLINE       znode2                                       
               ONLINE  ONLINE       znode3                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       znode2                                       
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       znode3                                       
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       znode1                                       
ora.cvu
      1        ONLINE  ONLINE       znode1                                       
ora.oc4j
      1        ONLINE  ONLINE       znode1                                       
ora.rac.db
      1        ONLINE  ONLINE       znode1                   Open                
      2        ONLINE  ONLINE       znode2                   Open                
      3        ONLINE  ONLINE       znode3                   Open                
ora.scan1.vip
      1        ONLINE  ONLINE       znode2                                       
ora.scan2.vip
      1        ONLINE  ONLINE       znode3                                       
ora.scan3.vip
      1        ONLINE  ONLINE       znode1                                       
ora.znode1.vip
      1        ONLINE  ONLINE       znode1                                       
ora.znode2.vip
      1        ONLINE  ONLINE       znode2                                       
ora.znode3.vip
      1        ONLINE  ONLINE       znode3        

SQL> select thread#,instance,enabled from v$thread;

   THREAD# INSTANCE   ENABLED
---------- ---------- --------
         1 rac1       PUBLIC
         2 rac2       PUBLIC
         3 rac3       PUBLIC

SQL> select instance_name,status from gv$instance;

INSTANCE_NAME    STATUS
---------------- ------------
rac1             OPEN
rac3             OPEN
rac2             OPEN

SQL> select thread#,group#,sequence#,members,status from v$log;

   THREAD#     GROUP#  SEQUENCE#    MEMBERS STATUS
---------- ---------- ---------- ---------- ----------------
         1          1         43          2 CURRENT
         1          2         42          2 INACTIVE
         2          3         37          2 INACTIVE
         2          4         38          2 CURRENT
         3          5          1          2 INACTIVE
         3          6          2          2 CURRENT

6 rows selected.

OK, finished!

-- OEM configuration on znode3 --

[oracle@znode3 bin]$ emca -reconfig dbcontrol -cluster

STARTED EMCA at Jul 30, 2012 5:26:43 PM
EM Configuration Assistant, Version 11.2.0.3.0 Production
Copyright (c) 2003, 2011, Oracle. All rights reserved.

Enter the following information:
Database unique name: rac
Service name: rac.anbob.com
Database Control node name (optional): znode1
Agent Node list [comma separated] (optional): znode3
Do you wish to continue? [yes(Y)/no(N)]: y
Jul 30, 2012 5:27:21 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /u01/app/oracle/cfgtoollogs/emca/rac/emca_2012_07_30_17_26_42.log.
Jul 30, 2012 5:27:27 PM oracle.sysman.emcp.util.DBControlUtil stopOMS
INFO: Stopping Database Control (this may take a while) ...
Jul 30, 2012 5:27:30 PM oracle.sysman.emcp.EMAgentConfig performDbcReconfiguration
INFO: Propagating /u01/app/oracle/11.2.0/db1/znode3_rac/sysman/config/emd.properties to remote nodes ...
Jul 30, 2012 5:27:31 PM oracle.sysman.emcp.util.DBControlUtil startOMS
INFO: Starting Database Control (this may take a while) ...
Jul 30, 2012 5:27:55 PM oracle.sysman.emcp.EMDBPostConfig performDbcReconfiguration
INFO: Database Control started successfully
Jul 30, 2012 5:27:56 PM oracle.sysman.emcp.EMDBPostConfig showClusterDBCAgentMessage
INFO:
**************** Current Configuration ****************
INSTANCE NODE DBCONTROL_UPLOAD_HOST
---------- ---------- ---------------------

rac znode1 znode1.anbob.com
rac znode2 znode1.anbob.com
rac znode3 znode1.anbob.com

Enterprise Manager configuration completed successfully
FINISHED EMCA at Jul 30, 2012 5:27:56 PM

 

-- 2019-04-29 update --

The following issues came up when adding a node on AIX:

1, addNode.sh fails with Out Of Memory

$ ./addNode.sh -silent "CLUSTER_NEW_NODES={sr53db3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={sr53db3-vip}"

Copying to remote nodes (Saturday, April 20, 2019 10:58:34 AM GMT+08:00)
..JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
JVMDUMP032I JVM requested Snap dump using '/oracle/app/11.2.0/grid/oui/bin/Snap.20190420.105839.23003220.0001.trc' in response to an event
JVMDUMP010I Snap dump written to /oracle/app/11.2.0/grid/oui/bin/Snap.20190420.105839.23003220.0001.trc
JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
JVMDUMP032I JVM requested Snap dump using '/oracle/app/11.2.0/grid/oui/bin/Snap.20190420.105839.23003220.0003.trc' in response to an event
JVMDUMP010I Snap dump written to /oracle/app/11.2.0/grid/oui/bin/Snap.20190420.105839.23003220.0003.trc
JVMDUMP032I JVM requested Heap dump using '/oracle/app/11.2.0/grid/oui/bin/heapdump.20190420.105839.23003220.0002.phd' in response to an event
JVMDUMP010I Heap dump written to /oracle/app/11.2.0/grid/oui/bin/heapdump.20190420.105839.23003220.0002.phd
JVMDUMP032I JVM requested Java dump using '/oracle/app/11.2.0/grid/oui/bin/javacore.20190420.105839.23003220.0005.txt' in response to an event
JVMDUMP010I Java dump written to /oracle/app/11.2.0/grid/oui/bin/javacore.20190420.105839.23003220.0005.txt
JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
JVMDUMP032I JVM requested Heap dump using '/oracle/app/11.2.0/grid/oui/bin/heapdump.20190420.105839.23003220.0004.phd' in response to an event
JVMDUMP010I Heap dump written to /oracle/app/11.2.0/grid/oui/bin/heapdump.20190420.105839.23003220.0004.phd
JVMDUMP032I JVM requested Java dump using '/oracle/app/11.2.0/grid/oui/bin/javacore.20190420.105839.23003220.0006.txt' in response to an event
JVMDUMP010I Java dump written to /oracle/app/11.2.0/grid/oui/bin/javacore.20190420.105839.23003220.0006.txt
JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
.SEVERE:Abnormal program termination. An internal error has occured. Please provide the following files to Oracle Support :

"/oracle/app/oraInventory/logs/addNodeActions2019-04-20_10-57-45AM.log"
"/oracle/app/oraInventory/logs/oraInstall2019-04-20_10-57-45AM.err"
"/oracle/app/oraInventory/logs/oraInstall2019-04-20_10-57-45AM.out"

Solution:
Please edit the file: $CRS_HOME/oui/oraparam.ini
and search for this line:

JRE_MEMORY_OPTIONS=" -mx150m"
Change this value:
JRE_MEMORY_OPTIONS=" -mx300m"
You may increase the value up to -mx512m if it fails again.

DEFAULT_HOME_LOCATION=
DEFAULT_HOME_NAME=
NLS_ENABLED=TRUE
JRE_MEMORY_OPTIONS=" -mx96m"      # I raised this to -mx512m
NO_BROWSE=/net
BOOTSTRAP=TRUE
CLUSTERWARE={"oracle.crs","10.1.0.2.0"}

2, Remote AttachHome fails ($ORACLE_BASE owner privileges)

Instantiating scripts for add node (Saturday, April 20, 2019 11:12:01 AM GMT+08:00)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Saturday, April 20, 2019 11:12:05 AM GMT+08:00)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Saturday, April 20, 2019 12:46:45 PM GMT+08:00)
SEVERE:Remote 'AttachHome' failed on nodes: 'sr53db3'. Refer to '/oracle/app/oraInventory/logs/addNodeActions2019-04-20_11-11-34AM.log' for details.
You can manually re-run the following command on the failed nodes after the installation: 
 /oracle/app/11.2.0/grid/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/oracle/app/11.2.0/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome1 CLUSTER_NODES=sr13db1,sr23db2,sr53db3 CRS=true  "INVENTORY_LOCATION=/oracle/app/oraInventory" -invPtrLoc "/oracle/app/11.2.0/grid/oraInst.loc" LOCAL_NODE=. 
Please refer 'AttachHome' logs under central inventory of remote nodes where failure occurred for more details.
SEVERE:Remote 'UpdateNodeList' failed on nodes: 'sr53db3'. Refer to '/oracle/app/oraInventory/logs/addNodeActions2019-04-20_11-11-34AM.log' for details.
You can manually re-run the following command on the failed nodes after the installation: 
 /oracle/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=/oracle/app/11.2.0/grid CLUSTER_NODES=sr13db1,sr23db2,sr53db3 CRS=true  "INVENTORY_LOCATION=/oracle/app/oraInventory" -invPtrLoc "/oracle/app/11.2.0/grid/oraInst.loc" LOCAL_NODE=. 
Please refer 'UpdateNodeList' logs under central inventory of remote nodes where failure occurred for more details.
.                                                               100% Done.
Save inventory complete
WARNING:OUI-10234:Failed to copy the root script, /oracle/app/oraInventory/orainstRoot.sh to the cluster nodes sr53db3.[Error in copying file '/oracle/app/oraInventory/orainstRoot.sh' present inside directory '/' on nodes 'sr53db3'. [PRKC-1080 : Failed to transfer file "/oracle/app/oraInventory/orainstRoot.sh" to any of the given nodes "sr53db3 ". 
Error on node sr53db3:null]]
 Please copy them manually to these nodes and execute the script.
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system. 
To register the new inventory please run the script at '/oracle/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'sr53db3'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each cluster node.
/oracle/app/oraInventory/orainstRoot.sh #On nodes sr53db3
/oracle/app/11.2.0/grid/root.sh #On nodes sr53db3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
    
The Cluster Node Addition of /oracle/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.


Solution:
Copy /oracle/app/oraInventory/orainstRoot.sh to the new node manually, run it there as root, then re-run the attachHome and updateNodeList commands:

/oracle/app/11.2.0/grid/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/oracle/app/11.2.0/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome1 CLUSTER_NODES=sr13db1,sr23db2,sr53db3 CRS=true "INVENTORY_LOCATION=/oracle/app/oraInventory" -invPtrLoc "/oracle/app/11.2.0/grid/oraInst.loc" LOCAL_NODE=sr53db3

/oracle/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=/oracle/app/11.2.0/grid CLUSTER_NODES=sr13db1,sr23db2,sr53db3 CRS=true "INVENTORY_LOCATION=/oracle/app/oraInventory" -invPtrLoc "/oracle/app/11.2.0/grid/oraInst.loc" LOCAL_NODE=sr53db3


3, root.sh fails: missing user capabilities (AIX)

sr53db3:/oracle/app>/oracle/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Creating /usr/local/bin directory…
Copying dbhome to /usr/local/bin …
Copying oraenv to /usr/local/bin …
Copying coraenv to /usr/local/bin …

Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2019-04-20 13:40:53: Parsing the host name
2019-04-20 13:40:53: Checking for super user privileges
2019-04-20 13:40:53: User has super user privileges
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User grid is missing the following capabilities required to run CSSD in realtime:
CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
To add the required capabilities, please run:
/usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
CSS cannot be run in realtime mode at /oracle/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 8119.

Solution:
sr53db3:/oracle/app>id
uid=0(root) gid=0(system) groups=2(bin),3(sys),7(security),8(cron),10(audit),11(lp)

sr53db3:/oracle/app>chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
sr53db3:/oracle/app>chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
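Verify the capabilities actually took effect before re-running root.sh:

sr53db3:/oracle/app>lsuser -a capabilities grid
sr53db3:/oracle/app>lsuser -a capabilities oracle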

4, Re-running root.sh fails

sr53db3:/oracle/app>/oracle/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /oracle/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2019-04-20 13:43:35: Parsing the host name
2019-04-20 13:43:35: Checking for super user privileges
2019-04-20 13:43:35: User has super user privileges
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
Improper Oracle Clusterware configuration found on this host
Deconfigure the existing cluster configuration before starting
to configure a new Clusterware 
run '/oracle/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig' 
to configure existing failed configuration and then rerun root.sh

sr53db3:/oracle/app>/oracle/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig 
2019-04-20 13:49:32: Parsing the host name
2019-04-20 13:49:32: Checking for super user privileges
2019-04-20 13:49:32: User has super user privileges
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
Oracle Clusterware stack is not active on this node
Restart the clusterware stack (use /oracle/app/11.2.0/grid/bin/crsctl start crs) and retry
Failed to verify resources

Solution:
sr53db3:/oracle/app>/oracle/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
2019-04-20 14:01:13: Parsing the host name
2019-04-20 14:01:13: Checking for super user privileges
2019-04-20 14:01:13: User has super user privileges
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
PRCR-1035 : Failed to look up CRS resource ora.cluster_vip.type for 1
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.eons is registered
Cannot communicate with crsd

Failure at scls_scr_setval with code 8
Internal Error Information:
Category: -2
Operation: failed
Location: scrsearch3
Other: id doesnt exist scls_scr_setval
System Dependent Information: 2

CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
-- long wait --
If root.sh (or the deconfig) appears to hang, truss the process; you will see it spending a long time chmod-ing audit trace files. It is strongly recommended to clean out old audit trace files before adding the node.

Then re-run /oracle/app/11.2.0/grid/root.sh

5, Storage missing the reserve_policy setting

After root.sh completes, CRS starts automatically on the new node; shortly afterwards another node rebooted, with voting file I/O errors:

[cssd(12517510)]CRS-1649:An I/O error occured for voting file: /dev/rhdisk2; details at (:CSSNM00059:) in /oracle/app/11.2.0/grid/log/sr23db2/cssd/ocssd.log.
2019-04-20 15:29:55.442
[cssd(12517510)]CRS-1649:An I/O error occured for voting file: /dev/rhdisk2; details at (:CSSNM00060:) in /oracle/app/11.2.0/grid/log/sr23db2/cssd/ocssd.log.
2019-04-20 15:29:59.512
[cssd(12517510)]CRS-1649:An I/O error occured for voting file: /dev/rhdisk2; details at (:CSSNM00060:) in /oracle/app/11.2.0/grid/log/sr23db2/cssd/ocssd.log.

Solution:

lsattr -E -l hdisk1| grep reserve

chdev -l hdisk1 -a reserve_policy=no_reserve
chdev -l hdisk2 -a reserve_policy=no_reserve
chdev -l hdisk3 -a reserve_policy=no_reserve
chdev -l hdisk6 -a reserve_policy=no_reserve

for i in {2..7}; do chdev -l hdisk$i -a reserve_policy=no_reserve; done

The response is either a reserve_lock setting or a reserve_policy setting. If the attribute is reserve_lock, ensure that the setting is reserve_lock = no. If the attribute is reserve_policy, ensure that the setting is reserve_policy = no_reserve.
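To confirm the setting on all candidate disks at once (the disk numbers follow the example above and are assumptions):

for i in 2 3 4 5 6 7; do lsattr -E -l hdisk$i -a reserve_policy; done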


6, PRCF-2002 : Connection to node “” is lost

Cause:
bug 19730916

Solution:
In /etc/ssh/sshd_config, raise MaxStartups (e.g. 'MaxStartups 40:30:60' to allow 40 concurrent connections) and restart sshd; I would suggest going up to 100.

MaxStartups defaults to 10:30:100 ("start:rate:full"): up to 10 concurrent unauthenticated connections are allowed; above that, new connections are dropped with probability rate/100, and once the count reaches full (100) they are always dropped. Only unauthenticated connections are limited.
MaxStartups:
Specifies the maximum number of concurrent unauthenticated connections to the SSH daemon. Additional connections will be dropped until authentication succeeds or the LoginGraceTime expires for a connection. The default is 10:30:100.
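A minimal sketch of applying the change (the restart command differs by platform; on AIX sshd runs under the SRC):

# /etc/ssh/sshd_config
MaxStartups 100

# restart sshd
/sbin/service sshd restart            # Linux
stopsrc -s sshd; startsrc -s sshd     # AIX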
