root.sh is hanging on second node (Oracle 10g 10.2.0.1, OEL 5.4 32-bit)
root.sh is hanging on second node [message #484401] Tue, 30 November 2010 05:51
ranvijaidba
Messages: 71
Registered: May 2008
Location: Bangalore
Member
Operating system: OEL 5.4 32-bit
Using Citrix Xen virtual machines

I am new to Oracle RAC. I am trying to install RAC on Citrix Xen virtual machines.

I have created three virtual machines, named:

nfsstorage
RAC1
RAC2

I mounted nfsstorage on RAC1 and RAC2.

On RAC1:

Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda3             44G  3.3G   38G   8% /
/dev/xvda1             99M   13M   82M  14% /boot
tmpfs                 751M     0  751M   0% /dev/shm
nfsstofrage:/storage   20G  1.3G   17G   7% /storage1
nfsstofrage:/share2   9.3G  1.2G  7.7G  14% /storage2

On RAC2:

[oracle@rac2 storage1]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda3             44G  3.2G   39G   8% /
/dev/xvda1             99M   13M   82M  14% /boot
tmpfs                 751M     0  751M   0% /dev/shm
nfsstofrage:/storage   20G  1.3G   17G   7% /storage1
nfsstofrage:/share2   9.3G  1.2G  7.7G  14% /storage2
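
For reference, both shares are mounted through /etc/fstab with entries along these lines (the mount options are only my understanding of what is usually recommended for Oracle CRS/OCR files on NFS, so please correct me if they are wrong):

nfsstofrage:/storage  /storage1  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nfsstofrage:/share2   /storage2  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0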


When I try to install Oracle CRS from RAC1, it fails at the step where root.sh has to be executed on the second node, RAC2.

On the first node, RAC1, it completes successfully:

[root@rac1 db_1]# ./root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /storage1/voting/voting_disk
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
CSS is inactive on these nodes.
rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac1 db_1]#
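
To double-check the stack on rac1 I can run something like the following (ORA_CRS_HOME here is just my shorthand for the CRS home of this install; I have not pasted the output):

$ORA_CRS_HOME/bin/crsctl check crs
$ORA_CRS_HOME/bin/olsnodes -n
$ORA_CRS_HOME/bin/crs_stat -t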

But on the second node, RAC2, it hangs:

[root@rac2 db_1]# ./root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.

When I check the log file ocrconfig_3882.log, it shows this error:
2010-11-30 16:00:33.512: [ OCROSD][1353408]utread:3: problem reading buffer 9abc000 buflen 512 retval 0 phy_offset 102400 retry 0
2010-11-30 16:00:33.512: [ OCROSD][1353408]utread:4: problem reading the buffer errno 2 errstring No such file or directory
2010-11-30 16:00:33.512: [ OCROSD][1353408]utread:3: problem reading buffer 9abc000 buflen 4096 retval 0 phy_offset 102400 retry 0
2010-11-30 16:00:33.512: [ OCROSD][1353408]utread:4: problem reading the buffer errno 2 errstring No such file or directory
2010-11-30 16:00:33.513: [ OCRRAW][1353408]propriogid:1: INVALID FORMAT
2010-11-30 16:00:33.570: [ OCRRAW][1353408]propriowv: Vote information on disk 0 [/storage1/crs/crs_configuration] is adjusted from [0/0] to [2/2]
2010-11-30 16:00:33.950: [ OCRRAW][1353408]propriniconfig:No 92 configuration
2010-11-30 16:00:33.951: [ OCRAPI][1353408]a_init:6a: Backend init successful
2010-11-30 16:00:35.507: [ OCRCONF][1353408]Initialized DATABASE keys in OCR
2010-11-30 16:00:35.579: [ OCRCONF][1353408]Successfully set skgfr block 0
2010-11-30 16:00:35.579: [ OCRCONF][1353408]Exiting [status=success]...
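
Since the log complains about reading the OCR, I can also look at the OCR file itself from rac2, roughly like this (the path is the one shown in the ocrconfig log above; ocrcheck is run as root from the CRS home):

ls -l /storage1/crs/crs_configuration
$ORA_CRS_HOME/bin/ocrcheck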

In css.log this message is recorded repeatedly:

2010-11-30 16:02:58.378: [ CSSCLNT][11184336]clsssInitNative: connect failed, rc 9
2010-11-30 16:02:59.390: [ CSSCLNT][11184336]clsssInitNative: connect failed, rc 9
2010-11-30 16:03:00.402: [ CSSCLNT][11184336]clsssInitNative: connect failed, rc 9
2010-11-30 16:03:01.414: [ CSSCLNT][11184336]clsssInitNative: connect failed, rc 9
2010-11-30 16:03:02.426: [ CSSCLNT][11184336]clsssInitNative: connect failed, rc 9
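
To see whether the CSS/CRS daemons were actually started on rac2 I am checking along these lines (the init.cssd/init.crsd/init.evmd entries are, as far as I understand, what root.sh adds to /etc/inittab):

ps -ef | egrep 'init\.cssd|ocssd|crsd|evmd' | grep -v grep
tail -5 /etc/inittab
tail -50 /var/log/messages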


Please tell me where I am going wrong.

Thanks in advance
