From satoshi.satou @ aist.go.jp Mon Jul 3 19:09:02 2006
From: satoshi.satou @ aist.go.jp (Satoshi Sato)
Date: Mon, 03 Jul 2006 19:09:02 +0900 (JST)
Subject: [SCore-users-jp] SRAM parity error
Message-ID: <20060703.190902.30604622.satoshi@soum.co.jp>

Hello, this is Sato from the AIST Grid Research Center.
I have a question about SRAM parity error checking in SCore.

■ Summary

How does SCore check for parity errors in the SRAM on the board?

■ Details

Until recently we were using SCore on our center's Myrinet cluster.
We recently migrated from MPICH/SCore to the MPICH-MX provided by
Myricom, and errors like the following were recorded on several nodes:

LANai[0]: *** MCP fatal error MX_MCP_LANAI_PARITY_ERROR at (../../mx-1.1.3/mcp/misc.c, 274)
mx WARN: Board number 0 marked dead
mx WARN: firmware dead on board 0, ignoring ioctl
mx WARN: mx0: Failed to close endpoint 1 on mcp
mx WARN: firmware dead on board 0, ignoring ioctl
mx WARN: mx0: Failed to close endpoint 0 on mcp

When we asked Myricom about this error, they told us:

- When an SRAM parity error occurs, the firmware knows that an error
  occurred somewhere in the LANai SRAM, but not where, so it stops the
  computation.

While we were running SCore, a parity error never stopped a
computation, so I wondered whether SCore was somehow working around
the problem.

Best regards,

.oOo.________________________.oOo.
Satoshi Sato

From kameyama @ pccluster.org Mon Jul 3 19:45:10 2006
From: kameyama @ pccluster.org (Kameyama Toyohisa)
Date: Mon, 03 Jul 2006 19:45:10 +0900
Subject: [SCore-users-jp] SRAM parity error
In-Reply-To: <20060703.190902.30604622.satoshi@soum.co.jp>
References: <20060703.190902.30604622.satoshi@soum.co.jp>
Message-ID: <44A8F536.9020707@pccluster.org>

This is Kameyama.

Satoshi Sato wrote:
> ■ Summary
>
> How does SCore check for parity errors in the SRAM on the board?

Sorry, we were not checking it...

According to http://www.myri.com/vlsi/LanaiX.Rev1.1.pdf, a parity
error can be identified by bit 22 of the ISR, so you can examine it
by looking at the ISR value with:

% cat /proc/pm/myrinet/0/info

...though that is about all you can do.
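Kameyama's suggestion above, testing bit 22 of the ISR, amounts to a single bit test. A minimal sketch, assuming the ISR value has already been extracted from /proc/pm/myrinet/0/info and parsed as an integer (the exact output format of that /proc file is not shown in this thread, and the sample values below are hypothetical, not from a real board):

```python
# Sketch: test whether a LanaiX ISR value flags an SRAM parity error.
# Per the LanaiX documentation cited above, bit 22 of the ISR is the
# parity-error bit. Reading and parsing /proc/pm/myrinet/0/info is left
# out because its format is not documented in this thread.

PARITY_ERROR_BIT = 22  # from LanaiX.Rev1.1.pdf, as referenced above

def has_parity_error(isr_value: int) -> bool:
    """Return True if the parity-error bit (bit 22) is set in the ISR."""
    return bool(isr_value & (1 << PARITY_ERROR_BIT))

# Hypothetical sample ISR values:
print(has_parity_error(0x00400000))  # True  (bit 22 set)
print(has_parity_error(0x00000003))  # False (bit 22 clear)
```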
Kameyama Toyohisa

From s-sumi @ flab.fujitsu.co.jp Thu Jul 13 16:44:44 2006
From: s-sumi @ flab.fujitsu.co.jp (Shinji Sumimoto)
Date: Thu, 13 Jul 2006 16:44:44 +0900 (JST)
Subject: [SCore-users-jp] Re: [SCore-users] Communication Performance with Score, with and without network trunking
In-Reply-To: <000001c6a589$51961250$ca4315ac@spc0608>
References: <000001c6a589$51961250$ca4315ac@spc0608>
Message-ID: <20060713.164444.116359402.s-sumi@flab.fujitsu.co.jp>

Hi.

Could you show the results of communication performance (RTT and
bandwidth) using rpmtest?

http://www.pccluster.org/score/dist/score/html/en/installation/pm-testethernet.html

RTT can be measured with a combination of "rpmtest -reply" and
"rpmtest -ping". Bandwidth can be measured with a combination of
"rpmtest -sink" and "rpmtest -burst".

PS: How did you define the e1000 device driver parameters in
/etc/modprobe.conf? The default parameters of the e1000 device
driver are really bad for PM/Ethernet.

Thank you for using SCore and network trunking.

Shinji.

From: "Viet"
Subject: [SCore-users] Communication Performance with Score, with and without network trunking
Date: Wed, 12 Jul 2006 17:01:02 +0900
Message-ID: <000001c6a589$51961250$ca4315ac @ spc0608>

viet> Dear Users,
viet>
viet> We have installed SCore on a cluster of 2 nodes. We want to increase
viet> communication speed using network trunking with a dual-port NIC.
viet> However, we cannot get good communication performance with SCore.
viet> Tested application: exchanging 16000-byte messages between 2 nodes.
viet> (In fact we ran the experiments with several message sizes and show
viet> here only the 16000-byte results; results for other sizes follow a
viet> similar pattern: single port without SCore > dual-port channel
viet> bonding without SCore > SCore network trunking > SCore single port.)
viet>
viet> Exchange bandwidth:
viet>   without SCore + single port:                            950 Mbps
viet>                 + dual port, channel bonding (Linux, FC3): 610 Mbps
viet>   with SCore    + single port:                            415 Mbps
viet>                 + dual port:                               430 Mbps
viet>
viet> Did I make any mistake in the SCore configuration?
viet>
viet> Hardware platform:
viet> + Dell PowerEdge 1850
viet> + 2 dual-core Intel Xeon 2.8GHz processors
viet> + 4 GB memory
viet> + Intel PRO/1000 PT Dual Gigabit Ethernet Adapter (PCI-Express x4)
viet> + Fedora Core 3 for x86-64
viet> + SCore installed from source
viet> + Hostnames: ypc0515.jp and ypc0516.jp
viet> + ypc0515 as server and compute host.
viet> SCore configuration files are as follows:
viet>
viet> 1. scorehosts.db:
viet> /* PM/Ethernet */
viet> ethernet0 type=ethernet \
viet>     -config:file=/opt/score/etc/pm-ethernet-0.conf
viet> ethernet1 type=ethernet \
viet>     -config:file=/opt/score/etc/pm-ethernet-1.conf
viet> ethernet-x2 type=ethernet \
viet>     -config:file=/opt/score/etc/pm-ethernet-1.conf \
viet>     -trunk0:file=/opt/score/etc/pm-ethernet-0.conf
viet> shmem0 type=shmem -node=0
viet> shmem1 type=shmem -node=1
viet> shmem2 type=shmem -node=2
viet> shmem3 type=shmem -node=3
viet>
viet> /* Macro to define a host */
viet> #define PCC msgbserv=(ypc0515.jp:8764) \
viet>     cpugen=XeonEMT speed=2800 smp=4 \
viet>     network=ethernet-x2,shmem0,shmem1,shmem2,shmem3 \
viet>     group=pcc
viet> ypc0515.jp PCC
viet> ypc0516.jp PCC
viet> (for the network trunking test; network=ethernet0 for the single-port test)
viet>
viet> 2. pm-ethernet-0.conf:
viet> unit 0
viet> maxnsend 32
viet> backoff 1000
viet> checksum 0
viet> # PE MAC address         base hostname # comment
viet> 0  00:0E:0C:B3:BD:90  ypc0515.jp  # ip=10.0.5.150 on eth0
viet> 1  00:0E:0C:B4:18:CA  ypc0516.jp  # ip=10.0.5.160 on eth0
viet>
viet> 3. pm-ethernet-1.conf:
viet> unit 1
viet> maxnsend 32
viet> backoff 1000
viet> checksum 0
viet> # PE MAC address         base hostname # comment
viet> 0  00:0E:0C:B3:BD:91  ypc0515.jp  # on eth1
viet> 1  00:0E:0C:B4:18:CB  ypc0516.jp  # on eth1
viet>
viet> Thank you very much for your help.
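The pm-ethernet-*.conf files above follow a simple line format: global options first, then one row per processing element giving its PE number, MAC address, and hostname. Before a trunking run, such a file can be parsed and sanity-checked (for example, for duplicate PE numbers or mismatched MACs between the two units). A minimal sketch with assumed, tolerant parsing rules (this is not SCore's official grammar):

```python
# Sketch: parse a pm-ethernet-style config (as listed above) into an
# options dict and a PE -> (MAC, hostname) table. The parsing rules here
# are an assumption inferred from the files shown in this thread.

SAMPLE = """\
unit 0
maxnsend 32
backoff 1000
checksum 0
# PE MAC-address       base hostname  # comment
0 00:0E:0C:B3:BD:90 ypc0515.jp  # ip=10.0.5.150 on eth0
1 00:0E:0C:B4:18:CA ypc0516.jp  # ip=10.0.5.160 on eth0
"""

def parse_pm_ethernet(text):
    options, nodes = {}, {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        fields = line.split()
        if len(fields) == 2 and not fields[0].isdigit():
            options[fields[0]] = int(fields[1])     # e.g. "maxnsend 32"
        else:
            pe, mac, host = fields[0], fields[1], fields[2]
            nodes[int(pe)] = (mac, host)            # node table row
    return options, nodes

options, nodes = parse_pm_ethernet(SAMPLE)
print(options["maxnsend"])  # 32
print(nodes[0])             # ('00:0E:0C:B3:BD:90', 'ypc0515.jp')
```

With both unit files parsed this way, a script can assert that the PE numbering and hostnames agree between unit 0 and unit 1, which is exactly the invariant a trunked configuration depends on.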
viet> Ta Quoc Viet

------
Shinji Sumimoto, Fujitsu Labs

_______________________________________________
SCore-users mailing list
SCore-users @ pccluster.org
http://www.pccluster.org/mailman/listinfo/score-users

From s-sumi @ flab.fujitsu.co.jp Thu Jul 13 18:35:36 2006
From: s-sumi @ flab.fujitsu.co.jp (Shinji Sumimoto)
Date: Thu, 13 Jul 2006 18:35:36 +0900 (JST)
Subject: [SCore-users-jp] Re: [SCore-users] Communication Performance with Score, with and without network trunking
In-Reply-To: <000001c6a65d$e6335150$ca4315ac@spc0608>
References: <20060713.164444.116359402.s-sumi@flab.fujitsu.co.jp> <000001c6a65d$e6335150$ca4315ac@spc0608>
Message-ID: <20060713.183536.10300241.s-sumi@flab.fujitsu.co.jp>

Dear Viet

# Cc: to score-users ML for sharing information.

From: "Viet"
Subject: RE: [SCore-users] Communication Performance with Score, with and without network trunking
Date: Thu, 13 Jul 2006 18:22:45 +0900
Message-ID: <000001c6a65d$e6335150$ca4315ac @ spc0608>

viet> Dear Shinji Sumimoto Sensei,
viet>
viet> Thank you very much for your quick reply.
viet>
viet> > Could you show the results of communication performance RTT
viet> > and bandwidth using rpmtest?
viet> rpmtest -reply and rpmtest -ping result:
viet>     8  0.00102167 sec

That's too bad. It should be 20-30 usec.

viet> rpmtest -sink and rpmtest -burst result:
viet>     8  100660

Could you try "rpmtest -burst" with -len 1400 (or -len 8800 if you are
using jumbo frames)? These results show the maximum bandwidth
performance.

viet> > PS: How did you define the e1000 device driver parameters in
viet> > /etc/modprobe.conf? The default parameters of the e1000 device
viet> > driver are really bad for PM/Ethernet.
viet> I did not change the modprobe.conf file.
Its content is as follows:
viet> alias eth0 e1000
viet> alias eth1 e1000

Could you try adding the following options to modprobe.conf and then
reloading the e1000 device driver or rebooting the nodes?

============================================
options e1000 TxIntDelay=0,0 TxAbsIntDelay=0,0 RxIntDelay=0,0 RxAbsIntDelay=0,0 InterruptThrottleRate=0,0 TxDescriptors=512,512 RxDescriptors=512,512
============================================

Shinji.

viet> Thank you for your kind support.
viet>
viet> Viet
viet>
viet> [snip: first reply quoted in full above]

------
Shinji Sumimoto, Fujitsu Labs

From s-sumi @ flab.fujitsu.co.jp Thu Jul 13 19:47:53 2006
From: s-sumi @ flab.fujitsu.co.jp (Shinji Sumimoto)
Date: Thu, 13 Jul 2006 19:47:53 +0900 (JST)
Subject: [SCore-users-jp] Re: [SCore-users] Communication Performance with Score, with and without network trunking
In-Reply-To: <000001c6a667$3b2ec140$ca4315ac@spc0608>
References: <20060713.183536.10300241.s-sumi@flab.fujitsu.co.jp> <000001c6a667$3b2ec140$ca4315ac@spc0608>
Message-ID: <20060713.194753.26530206.s-sumi@flab.fujitsu.co.jp>

Dear Viet.
From: "Viet"
Subject: RE: [SCore-users] Communication Performance with Score, with and without network trunking
Date: Thu, 13 Jul 2006 19:29:33 +0900
Message-ID: <000001c6a667$3b2ec140$ca4315ac @ spc0608>

viet> Dear Shinji Sumimoto Sensei,
viet>
viet> > options e1000 TxIntDelay=0,0 TxAbsIntDelay=0,0 RxIntDelay=0,0
viet> > RxAbsIntDelay=0,0 InterruptThrottleRate=0,0
viet> > TxDescriptors=512,512 RxDescriptors=512,512
viet>
viet> I added that line to modprobe.conf before the line "alias eth0 e1000"
viet> and rebooted the system.

Usually the options are added after "alias eth1 e1000".

viet> Hereunder are the results with network trunking (ethernet-x2):
viet>
viet> Before reboot (without changing modprobe.conf):
viet> + rpmtest -ping:    8  0.00102167
viet> + rpmtest -burst:   8  100660
viet>   (-len 1400)    1400  1.75823e+07
viet>   (is it bits per second? If yes, this is really a good bandwidth
viet>   for dual Gigabit ports)
viet>
viet> After reboot:
viet> + rpmtest -ping:    8  0.00101375  (still too bad)
viet> + rpmtest -burst:   8  213129      (better)
viet>   (-len 1400)    1400  1.74813e+07
viet>
viet> And I still cannot get good communication performance with my own
viet> application.

It seems that the options did not take effect. When the options take
effect, the following messages appear in syslog or in the output of
the dmesg command.
========================================================================
e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection
ACPI: PCI Interrupt 0000:06:01.1[B] -> GSI 73 (level, low) -> IRQ 50
e1000: 0000:00:01.1: e1000_validate_option: Transmit Descriptors set to 512
e1000: 0000:00:01.1: e1000_validate_option: Receive Descriptors set to 512
e1000: 0000:00:01.1: e1000_validate_option: Transmit Interrupt Delay set to 256
e1000: 0000:00:01.1: e1000_validate_option: Transmit Absolute Interrupt Delay set to 256
e1000: 0000:00:01.1: e1000_validate_option: Receive Interrupt Delay set to 0
e1000: 0000:00:01.1: e1000_validate_option: Receive Absolute Interrupt Delay set to 10
e1000: 0000:00:01.1: e1000_check_options: Interrupt Throttling Rate (ints/sec) turned off
========================================================================

Can you see the messages?

viet> One more question, please. With SCore MPICH, a large message (say, 2200
viet> doubles, i.e. 2200x8 bytes long) cannot be sent with MPI_Send (blocking
viet> send). I tried to set the P4_GLOBMEMSIZE variable to a large value but it
viet> does not work. Could you please give some advice.

There is no such limitation; messages are not limited to 16KB.

Shinji.
viet> Viet

------
Shinji Sumimoto, Fujitsu Labs

From viet @ sowa.is.uec.ac.jp Thu Jul 13 19:29:33 2006
From: viet @ sowa.is.uec.ac.jp (Viet)
Date: Thu, 13 Jul 2006 19:29:33 +0900
Subject: [SCore-users-jp] RE: [SCore-users] Communication Performance with Score, with and without network trunking
In-Reply-To: <20060713.183536.10300241.s-sumi@flab.fujitsu.co.jp>
Message-ID: <000001c6a667$3b2ec140$ca4315ac@spc0608>

Dear Shinji Sumimoto Sensei,

> options e1000 TxIntDelay=0,0 TxAbsIntDelay=0,0 RxIntDelay=0,0
> RxAbsIntDelay=0,0 InterruptThrottleRate=0,0
> TxDescriptors=512,512 RxDescriptors=512,512

I added that line to modprobe.conf before the line "alias eth0 e1000"
and rebooted the system. Hereunder are the results with network
trunking (ethernet-x2):

Before reboot (without changing modprobe.conf):
+ rpmtest -ping:    8  0.00102167
+ rpmtest -burst:   8  100660
  (-len 1400)    1400  1.75823e+07
(is it bits per second? If yes, this is really a good bandwidth for
dual Gigabit ports)

After reboot:
+ rpmtest -ping:    8  0.00101375  (still too bad)
+ rpmtest -burst:   8  213129      (better)
  (-len 1400)    1400  1.74813e+07

And I still cannot get good communication performance with my own
application.

One more question, please. With SCore MPICH, a large message (say,
2200 doubles, i.e. 2200x8 bytes long) cannot be sent with MPI_Send
(blocking send). I tried to set the P4_GLOBMEMSIZE variable to a large
value but it does not work. Could you please give some advice.
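As a side note on the unit question above: the thread never states which units rpmtest reports, but both candidate readings can be checked arithmetically. Assuming -ping prints the round-trip time in seconds and -burst prints throughput in bytes per second (both assumptions, not confirmed anywhere in this thread):

```python
# Sketch: convert the rpmtest figures quoted above into usec and Mbit/s.
# Assumed units: -ping reports RTT in seconds, -burst in bytes/second.

rtt_sec = 0.00102167           # "8  0.00102167" from rpmtest -ping
burst_bytes_per_s = 1.75823e7  # "1400  1.75823e+07" from rpmtest -burst

rtt_usec = rtt_sec * 1e6            # RTT in microseconds
mbit_per_s = burst_bytes_per_s * 8 / 1e6  # throughput in Mbit/s

print(round(rtt_usec))    # about 1022 usec, versus the 20-30 usec
                          # Sumimoto says a healthy setup should show
print(round(mbit_per_s))  # about 141 Mbit/s even read as bytes/s, so
                          # far below one Gigabit link either way
```

Either interpretation points to the same conclusion the thread reaches: the numbers are far below what dual Gigabit trunking should deliver, so the problem is in the configuration, not in the unit of measurement.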
Viet

From viet @ sowa.is.uec.ac.jp Thu Jul 13 20:07:37 2006
From: viet @ sowa.is.uec.ac.jp (Viet)
Date: Thu, 13 Jul 2006 20:07:37 +0900
Subject: [SCore-users-jp] RE: [SCore-users] Communication Performance with Score, with and without network trunking
In-Reply-To: <20060713.194753.26530206.s-sumi@flab.fujitsu.co.jp>
Message-ID: <000001c6a66c$8ccde3a0$ca4315ac@spc0608>

Dear Sensei,

> Can you see the messages?
Yes. I think these options have already taken effect.

Viet

[root @ ypc0515 ~]# dmesg | grep e1000
e1000: 0000:05:04.0: e1000_validate_option: Transmit Descriptors set to 512
e1000: 0000:05:04.0: e1000_validate_option: Receive Descriptors set to 512
e1000: 0000:05:04.0: e1000_validate_option: Transmit Interrupt Delay set to 0
e1000: 0000:05:04.0: e1000_validate_option: Transmit Absolute Interrupt Delay set to 0
e1000: 0000:05:04.0: e1000_validate_option: Receive Interrupt Delay set to 0
e1000: 0000:05:04.0: e1000_validate_option: Receive Absolute Interrupt Delay set to 0
e1000: 0000:05:04.0: e1000_check_options: Interrupt Throttling Rate (ints/sec) turned off
e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection
e1000: 0000:05:04.1: e1000_validate_option: Transmit Descriptors set to 512
e1000: 0000:05:04.1: e1000_validate_option: Receive Descriptors set to 512
e1000: 0000:05:04.1: e1000_validate_option: Transmit Interrupt Delay set to 0
e1000: 0000:05:04.1: e1000_validate_option: Transmit Absolute Interrupt Delay set to 0
e1000: 0000:05:04.1: e1000_validate_option: Receive Interrupt Delay set to 0
e1000: 0000:05:04.1: e1000_validate_option: Receive Absolute Interrupt Delay set to 0
e1000: 0000:05:04.1: e1000_check_options: Interrupt Throttling Rate (ints/sec) turned off
e1000: eth1: e1000_probe: Intel(R) PRO/1000 Network Connection
e1000: eth2: e1000_probe: Intel(R) PRO/1000 Network Connection
e1000: eth3: e1000_probe: Intel(R) PRO/1000 Network Connection
e1000: eth0: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex
e1000: eth1: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex

From s-sumi @ flab.fujitsu.co.jp Thu Jul 13 20:17:35 2006
From: s-sumi @ flab.fujitsu.co.jp (Shinji Sumimoto)
Date: Thu, 13 Jul 2006 20:17:35 +0900 (JST)
Subject: [SCore-users-jp] Re: [SCore-users] Communication Performance with Score, with and without network trunking
In-Reply-To: <000001c6a66c$8ccde3a0$ca4315ac@spc0608>
References: <20060713.194753.26530206.s-sumi@flab.fujitsu.co.jp> <000001c6a66c$8ccde3a0$ca4315ac@spc0608>
Message-ID: <20060713.201735.26970316.s-sumi@flab.fujitsu.co.jp>

Dear Viet.

I see.. Are you using the NAPI option when building the device driver?
Could you show the results of the following command?

% dmesg |grep -i intel

If you are using NAPI, could you re-build the e1000 driver without the
NAPI option using the "make menuconfig" command?

Shinji.

From: "Viet"
Subject: RE: [SCore-users] Communication Performance with Score, with and without network trunking
Date: Thu, 13 Jul 2006 20:07:37 +0900
Message-ID: <000001c6a66c$8ccde3a0$ca4315ac @ spc0608>

viet> Dear Sensei,
viet>
viet> > Can you see the messages?
viet> Yes. I think these options have already taken effect.
viet> Viet
viet>
viet> [root @ ypc0515 ~]# dmesg | grep e1000
viet> [snip: e1000 option output quoted in full above]

------
Shinji Sumimoto, Fujitsu Labs

From hori @ allinea.com Thu Jul 13 20:48:07 2006
From: hori @ allinea.com (Atsushi HORI)
Date: Thu, 13 Jul 2006 20:48:07 +0900
Subject: [SCore-users-jp] Re: [SCore-users] Communication Performance with Score, with and without network trunking
In-Reply-To: <000001c6a667$3b2ec140$ca4315ac@spc0608>
References: <000001c6a667$3b2ec140$ca4315ac@spc0608>
Message-ID:

Hi, Viet,

Here is some more help.

On 2006/07/13, at 19:29, Viet wrote:
> One more question, please. With SCore MPICH, a large message (say, 2200
> doubles, i.e. 2200x8 bytes long) cannot be sent with MPI_Send (blocking
> send). I tried to set the P4_GLOBMEMSIZE variable to a large value but it
> does not work. Could you please give some advice.

How do you know it "cannot be sent with MPI_Send"? What happens?

MPICH-SCore does not use p4, so environment variables related to p4
have no effect.

From hori @ allinea.com Thu Jul 13 21:11:41 2006
From: hori @ allinea.com (Atsushi HORI)
Date: Thu, 13 Jul 2006 21:11:41 +0900
Subject: [SCore-users-jp] Re: [SCore-users] Communication Performance with Score, with and without network trunking
In-Reply-To: <000001c6a673$ace22730$ca4315ac@spc0608>
References: <000001c6a673$ace22730$ca4315ac@spc0608>
Message-ID:

On 2006/07/13, at 20:58, Viet wrote:
> After each successful send-receive action, the message size is
> increased. With our SCore environment, the application gets stuck at
> size=2050 double precision (about 16KB).

This sounds like your test program is not huge. May I see it?
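A hang that begins right around 2050 doubles is suggestive: 2050 x 8 = 16400 bytes, just past 16 KB. Many MPI implementations switch from an "eager" to a "rendezvous" protocol near such a boundary, and a program in which both ranks call blocking MPI_Send before posting a receive will deadlock exactly at that switch. Whether MPICH-SCore's threshold is exactly 16 KB is an assumption here, but the arithmetic of the size ramp Viet describes can be sketched:

```python
# Sketch: walk the message-size ramp Viet describes (sizes in doubles)
# and find where it first crosses an assumed 16 KB eager limit. The
# 16 KB protocol-switch point is an assumption about MPICH-SCore, not
# something stated in this thread.

SIZEOF_DOUBLE = 8
EAGER_LIMIT = 16 * 1024  # assumed eager/rendezvous boundary, in bytes

def first_size_over_limit(start=8, step=1):
    """Return the first message size (in doubles) whose byte length
    exceeds the assumed eager limit."""
    n = start
    while n * SIZEOF_DOUBLE <= EAGER_LIMIT:
        n += step
    return n

n = first_size_over_limit()
print(n, n * SIZEOF_DOUBLE)  # 2049 doubles = 16392 bytes, one double
                             # past 16 KB; Viet reports the hang at 2050
```

If the hang really is a two-sided blocking-send deadlock, replacing one side's MPI_Send with MPI_Isend (or pairing sends with MPI_Sendrecv) would make the program size-independent, which is likely what Hori intends to check by reading the test program.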
From viet @ sowa.is.uec.ac.jp Thu Jul 13 20:28:39 2006
From: viet @ sowa.is.uec.ac.jp (Viet)
Date: Thu, 13 Jul 2006 20:28:39 +0900
Subject: [SCore-users-jp] RE: [SCore-users] Communication Performance with Score, with and without network trunking
In-Reply-To: <20060713.201735.26970316.s-sumi@flab.fujitsu.co.jp>
Message-ID: <000001c6a66f$7d339130$ca4315ac@spc0608>

Dear Sensei,

"dmesg | grep -i intel" result:

CPU0: Intel(R) Xeon(TM) CPU 2.80GHz stepping 08
Intel(R) Xeon(TM) CPU 2.80GHz stepping 08
Intel(R) Xeon(TM) CPU 2.80GHz stepping 08
Intel(R) Xeon(TM) CPU 2.80GHz stepping 08
Intel(R) Xeon(TM) CPU 2.80GHz stepping 08
Intel(R) Xeon(TM) CPU 2.80GHz stepping 08
Intel(R) Xeon(TM) CPU 2.80GHz stepping 08
Intel(R) Xeon(TM) CPU 2.80GHz stepping 08
Intel E7520/7320/7525 detected.<6>pci_hotplug: PCI Hot Plug PCI Core version: 0.5
Intel(R) PRO/1000 Network Driver - version 5.6.10.1-k2-NAPI
Copyright (c) 1999-2004 Intel Corporation.
e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection
e1000: eth1: e1000_probe: Intel(R) PRO/1000 Network Connection
e1000: eth2: e1000_probe: Intel(R) PRO/1000 Network Connection
e1000: eth3: e1000_probe: Intel(R) PRO/1000 Network Connection

I will disable NAPI, rebuild the kernel, and let you know the result.
Maybe this also causes the "large" message size problem with
SCore-MPICH? Do you think I need to enable jumbo frames?

Thank you very much for your kind support.

Viet

>-----Original Message-----
>From: Shinji Sumimoto [mailto:s-sumi @ flab.fujitsu.co.jp]
>Sent: Thursday, July 13, 2006 8:18 PM
>To: viet @ sowa.is.uec.ac.jp
>Cc: score-users @ pccluster.org; s-sumi @ flab.fujitsu.co.jp
>Subject: Re: [SCore-users] Communication Performance with Score, with and without network trunking
>
>Dear Viet.
>
>I see..
>
>Are you using the NAPI option when building the device driver?
>Could you show the results of the following command?
>
>% dmesg |grep -i intel
>
>If you are using NAPI, could you re-build the e1000 driver
>without the NAPI option using the "make menuconfig" command?
>
>Shinji.
>

From viet @ sowa.is.uec.ac.jp Thu Jul 13 20:58:37 2006
From: viet @ sowa.is.uec.ac.jp (Viet)
Date: Thu, 13 Jul 2006 20:58:37 +0900
Subject: [SCore-users-jp] RE: [SCore-users] Communication Performance with Score, with and without network trunking
In-Reply-To:
Message-ID: <000001c6a673$ace22730$ca4315ac@spc0608>

Dear Atsushi Hori Sensei,

My simple application tries to send messages of different sizes from
process 0 to process 1. After receiving a message, process 1 prints
the result (execution time) to the screen. After each successful
send-receive action, the message size is increased. With our SCore
environment, the application gets stuck at size=2050 double precision
(about 16KB). (The application works well with MPICH 1.2.5 and 1.2.7
without SCore.)

Viet

>-----Original Message-----
>From: Atsushi HORI [mailto:hori @ allinea.com]
>Sent: Thursday, July 13, 2006 8:48 PM
>To: Viet
>Cc: score-users @ pccluster.org
>Subject: Re: [SCore-users] Communication Performance with Score, with and without network trunking
>
>Hi, Viet,
>
>[snip: question quoted in full above]
>
>How do you know it "cannot be sent with MPI_Send"? What happens?
>
>MPICH-SCore does not use p4, so environment variables related to
>p4 do not work.
From viet @ sowa.is.uec.ac.jp Fri Jul 14 14:32:24 2006
From: viet @ sowa.is.uec.ac.jp (Viet)
Date: Fri, 14 Jul 2006 14:32:24 +0900
Subject: [SCore-users-jp] RE: [SCore-users] Communication Performance with Score, with and without network trunking
In-Reply-To: <20060713.201735.26970316.s-sumi@flab.fujitsu.co.jp>
Message-ID: <000001c6a706$e2cbb080$ca4315ac@spc0608>

Dear Shinji Sumimoto Sensei,

I removed NAPI from the kernel, but I still could not achieve good
communication performance:

(without NAPI)
+ ping:    8  0.000954131
+ burst:   8  106064
        1400  1.79729e+7

Viet

>-----Original Message-----
>From: Shinji Sumimoto [mailto:s-sumi @ flab.fujitsu.co.jp]
>Sent: Thursday, July 13, 2006 8:18 PM
>[snip: NAPI question quoted in full above]
>
>Shinji.
From s-sumi @ flab.fujitsu.co.jp Sun Jul 16 22:04:01 2006
From: s-sumi @ flab.fujitsu.co.jp (Shinji Sumimoto)
Date: Sun, 16 Jul 2006 22:04:01 +0900 (JST)
Subject: [SCore-users-jp] RE: [SCore-users] Communication Performance with Score, with and without network trunking
In-Reply-To: <000001c6a706$e2cbb080$ca4315ac@spc0608>
References: <20060713.201735.26970316.s-sumi@flab.fujitsu.co.jp> <000001c6a706$e2cbb080$ca4315ac@spc0608>
Message-ID: <20060716.220401.78708958.s-sumi@flab.fujitsu.co.jp>

Dear Viet.

A very curious situation. Are you using the PCI-Express version of the
e1000 NICs?

Could you measure the performance on a single NIC without trunking, or
measure the performance with several different maxnsend and backoff
parameters (e.g. maxnsend 24, backoff 2400)?

PS: Are you using Hyper-Threading?

From: "Viet"
Subject: [SCore-users-jp] RE: [SCore-users] Communication Performance with Score, with and without network trunking
Date: Fri, 14 Jul 2006 14:32:24 +0900
Message-ID: <000001c6a706$e2cbb080$ca4315ac @ spc0608>

viet> Dear Shinji Sumimoto Sensei,
viet>
viet> I removed NAPI from the kernel, but I still could not achieve good
viet> communication performance:
viet>
viet> (without NAPI)
viet> + ping:    8  0.000954131
viet> + burst:   8  106064
viet>         1400  1.79729e+7
viet>
viet> Viet
viet>
viet> [snip: NAPI question quoted in full above]
------
Shinji Sumimoto, Fujitsu Labs

From tk @ Informatik.TU-Cottbus.DE Fri Jul 21 17:22:38 2006
From: tk @ Informatik.TU-Cottbus.DE (Thomas Kobienia)
Date: Fri, 21 Jul 2006 10:22:38 +0200
Subject: [SCore-users-jp] [SCore-users] Experiences with dual-core cpus?
Message-ID: <20060721082238.GB6848@mimir.Informatik.TU-Cottbus.DE>

Dear list members,

Does anybody have experience with dual-core CPUs in compute hosts?

We plan to expand our cluster of 8 dual-CPU compute nodes with 6
dual-core dual-CPU compute nodes. We use a myrinet2k with fc. Our
existing nodes have 2 Intel Xeon CPUs. We have offers with Intel Xeon
(Dempsey) and AMD Opteron 265 CPUs.

Is it possible to use the old and new nodes together? Are there known
problems with such a mixed cluster?

with best regards,
Thomas Kobienia

--
wissenschaftlicher Mitarbeiter
LS Verteilte Systeme / Betriebssysteme
Tel. +49 (355) 69-3480 oder 2272

From tk @ Informatik.TU-Cottbus.DE Fri Jul 21 17:59:20 2006
From: tk @ Informatik.TU-Cottbus.DE (Thomas Kobienia)
Date: Fri, 21 Jul 2006 10:59:20 +0200
Subject: [SCore-users-jp] Re: [SCore-users] Experiences with dual-core cpus?
In-Reply-To: <44C09267.3080806@myri.com>
References: <20060721082238.GB6848@mimir.Informatik.TU-Cottbus.DE> <44C09267.3080806@myri.com>
Message-ID: <20060721085920.GA8171@mimir.Informatik.TU-Cottbus.DE>

Dear Markus Fischer,

Markus Fischer wrote on Friday, 21 July 2006:
> >We plan to expand our cluster of 8 dual-CPU compute nodes with 6
> >dual-core dual-CPU compute nodes. We use a myrinet2k with fc. Our
> >existing nodes have 2 Intel Xeon CPUs. We have offers with Intel
> >Xeon (Dempsey) and AMD Opteron 265 CPUs.
> >
> >Is it possible to use the old and new nodes together? Are there
> >known problems with such a mixed cluster?

> hopefully you have D or later Myrinet cards to run MX.

The old nodes have m3f-pci64b-2 cards. The offers for the new nodes
are with Myrinet D or F cards. Will this work together?

with best regards,
Thomas Kobienia

--
wissenschaftlicher Mitarbeiter
LS Verteilte Systeme / Betriebssysteme
Tel. +49 (355) 69-3480 oder 2272

From kameyama @ pccluster.org Thu Jul 27 11:39:49 2006
From: kameyama @ pccluster.org (Kameyama Toyohisa)
Date: Thu, 27 Jul 2006 11:39:49 +0900
Subject: [SCore-users-jp] Re: [SCore-users] Experiences with dual-core cpus?
In-Reply-To: <20060721085920.GA8171@mimir.Informatik.TU-Cottbus.DE>
References: <20060721082238.GB6848@mimir.Informatik.TU-Cottbus.DE> <44C09267.3080806@myri.com> <20060721085920.GA8171@mimir.Informatik.TU-Cottbus.DE>
Message-ID: <44C82775.4030807@pccluster.org>

Thomas Kobienia wrote:
> Markus Fischer wrote on Friday, 21 July 2006:
>>> Is it possible to use the old and new nodes together?
>>> Are there known problems with such a mixed cluster?

>> hopefully you have D or later Myrinet cards to run MX.

> The old nodes have m3f-pci64b-2 cards. The new nodes offers are with
> myrinet cards D or F. Will this work together?

MX is supported only on D or later cards. If you want to use M3F cards
together with D or F cards under the Myricom environment, you must use
GM 2.1.x. Please see the following page:

    http://www.myri.com/scs/linux-gm2.html

If you use PM/myrinet2k, please try PM/Myrinetxp: in
/opt/score/etc/scorehosts.db, change the type to myrinetxp and
-firmware:file to /opt/score/share/lanai/lanaixp.mcp.
lanaixp.mcp includes firmware for M3 cards, but I have not tested it.

Kameyama Toyohisa

From hori @ allinea.com  Fri Jul 28 00:28:11 2006
From: hori @ allinea.com (Atsushi HORI)
Date: Fri, 28 Jul 2006 00:28:11 +0900
Subject: [SCore-users-jp] Re: [SCore-users] SCore newbie
In-Reply-To: <44C8D90D.9020903@us.cd-adapco.com>
References: <44C8D90D.9020903@us.cd-adapco.com>
Message-ID: <84DB95FA-A450-47E6-A723-7B1A493EF7A5@allinea.com>

Hi, Noel,

On 2006/07/28, at 0:17, Noel Rycroft wrote:

> I'm a newbie to SCore and I'm having some problems running a simple
> MPI program.
> I can start the SCore shell and run unix commands using scout but
> when I try and run an MPI program I get an error which seems to
> indicate that it can't connect to an SCore daemon....

There are several ways to run an SCore MPI program.

The simplest way is to run an SCore MPI program in the scout
environment. In this case, there is no need for the scored option.

    scout hostname
    [cd-ia64]:
    cd-ia64

    mpirun -np 1 ./runcpi

----
This way of invoking a user program in the scout environment is called
single-user mode. The sctop command is NOT effective in single-user mode.
If you want to try multi-user mode, or you have any other problems
and/or questions, just ask me.

From hori @ allinea.com  Fri Jul 28 01:05:46 2006
From: hori @ allinea.com (Atsushi HORI)
Date: Fri, 28 Jul 2006 01:05:46 +0900
Subject: [SCore-users-jp] Re: [SCore-users] SCore newbie
In-Reply-To: <44C8DEC5.2010508@us.cd-adapco.com>
References: <44C8D90D.9020903@us.cd-adapco.com> <84DB95FA-A450-47E6-A723-7B1A493EF7A5@allinea.com> <44C8DEC5.2010508@us.cd-adapco.com>
Message-ID:

On 2006/07/28, at 0:41, Noel Rycroft wrote:

> ./score/bin/mpirun -np 1 ./runcpi
> FEP: Unable to connect with SCore-D (cd-ia64)

Aha, check the SCORE_OPTIONS environment variable. I think the scored
option is already set in the environment.

From noel.rycroft @ us.cd-adapco.com  Fri Jul 28 00:17:33 2006
From: noel.rycroft @ us.cd-adapco.com (Noel Rycroft)
Date: Thu, 27 Jul 2006 11:17:33 -0400
Subject: [SCore-users-jp] [SCore-users] SCore newbie
Message-ID: <44C8D90D.9020903@us.cd-adapco.com>

Hi,

I'm a newbie to SCore and I'm having some problems running a simple MPI
program. I can start the SCore shell and run unix commands using scout,
but when I try to run an MPI program I get an error which seems to
indicate that it can't connect to an SCore daemon:

    scout hostname
    [cd-ia64]:
    cd-ia64

    mpirun -np 1 -score scored=cd-ia64 ./runcpi
    FEP: Unable to connect with SCore-D (cd-ia64)

Also, if I try sctop cd-ia64, it hangs whilst trying to connect:

    sctop cd-ia64
    SCTOP: Waiting for SCore-D response ...

Is there a way I can check that the SCore environment is up and running
correctly?

many thanks,
Noel..
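As the follow-ups in this thread show, the "FEP: Unable to connect with SCore-D" error traced back to a leftover SCORE_OPTIONS environment variable. The check Hori suggests could be sketched in shell like this; only the SCORE_OPTIONS variable name is taken from the thread, the surrounding logic is illustrative:

```shell
# Detect and clear a leftover SCORE_OPTIONS setting before a
# single-user-mode run. Variable name from the thread; logic illustrative.
if [ -n "${SCORE_OPTIONS:-}" ]; then
    echo "SCORE_OPTIONS is set: '$SCORE_OPTIONS'"
    unset SCORE_OPTIONS
fi
echo "SCORE_OPTIONS now: '${SCORE_OPTIONS:-}'"
```

With the variable cleared, `mpirun -np 1 ./runcpi` inside the scout environment should no longer try to contact a SCore-D daemon.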
From dclarke @ blastwave.org  Fri Jul 28 00:24:13 2006
From: dclarke @ blastwave.org (Dennis Clarke)
Date: Thu, 27 Jul 2006 11:24:13 -0400 (EDT)
Subject: [SCore-users-jp] [SCore-users] SCore on Solaris ?
Message-ID: <61252.70.50.142.6.1154013853.squirrel@mail.blastwave.org>

Has anyone tried to do this port?

--
Dennis Clarke

From dclarke @ blastwave.org  Fri Jul 28 00:40:04 2006
From: dclarke @ blastwave.org (Dennis Clarke)
Date: Thu, 27 Jul 2006 11:40:04 -0400 (EDT)
Subject: [SCore-users-jp] Re: [SCore-users] SCore on Solaris ?
In-Reply-To: <1FE7A25D-35EF-4249-B55D-0560318B50A9@swimmy-soft.com>
References: <61252.70.50.142.6.1154013853.squirrel@mail.blastwave.org> <1FE7A25D-35EF-4249-B55D-0560318B50A9@swimmy-soft.com>
Message-ID: <61265.70.50.142.6.1154014804.squirrel@mail.blastwave.org>

> Hi, Dennis,
>
> On 2006/07/28, at 0:24, Dennis Clarke wrote:
>
>> Has anyone tried to do this port ?
>>
>> --
>> Dennis Clarke
>
> Nobody yet, as far as I know.

That leaves me swimming in deep water then, I guess.
:-\

--
Dennis Clarke

From noel.rycroft @ us.cd-adapco.com  Fri Jul 28 00:41:57 2006
From: noel.rycroft @ us.cd-adapco.com (Noel Rycroft)
Date: Thu, 27 Jul 2006 11:41:57 -0400
Subject: [SCore-users-jp] Re: [SCore-users] SCore newbie
In-Reply-To: <84DB95FA-A450-47E6-A723-7B1A493EF7A5@allinea.com>
References: <44C8D90D.9020903@us.cd-adapco.com> <84DB95FA-A450-47E6-A723-7B1A493EF7A5@allinea.com>
Message-ID: <44C8DEC5.2010508@us.cd-adapco.com>

Dear Atsushi,

Thanks for your reply, but even in single-user mode I get the same error.
It seems that something in the SCore environment isn't working, but I'm
not smart enough to work out what it is...

    scout hostname
    [cd-ia64]:
    cd-ia64

    ./score/bin/mpirun -np 1 ./runcpi
    FEP: Unable to connect with SCore-D (cd-ia64)

many thanks,
noel..

Atsushi HORI wrote:
> Hi, Noel,
>
> On 2006/07/28, at 0:17, Noel Rycroft wrote:
>
>> I'm a newbie to SCore and I'm having some problems running a simple
>> MPI program.
>> I can start the SCore shell and run unix commands using scout but
>> when I try and
>> run an MPI program I get an error which seems to indicate that it
>> can't connect to
>> an SCore daemon....
>
> There are several ways to run an SCore MPI program.
>
> The simplest way is to run an SCore MPI program in the scout
> environment. In this case, no need of the scored option.
>
> scout hostname
> [cd-ia64]:
> cd-ia64
>
> mpirun -np 1 ./runcpi
>
> ----
> This way of invoking user program in the scout environment is called
> single user mode. The sctop command is NOT effective in the single
> user mode.
>
> If you want to try the multi-user mode or you have any other problems
> and/or questions, just simply ask me.
From noel.rycroft @ us.cd-adapco.com  Fri Jul 28 01:29:27 2006
From: noel.rycroft @ us.cd-adapco.com (Noel Rycroft)
Date: Thu, 27 Jul 2006 12:29:27 -0400
Subject: [SCore-users-jp] Re: [SCore-users] SCore newbie
In-Reply-To:
References: <44C8D90D.9020903@us.cd-adapco.com> <84DB95FA-A450-47E6-A723-7B1A493EF7A5@allinea.com> <44C8DEC5.2010508@us.cd-adapco.com>
Message-ID: <44C8E9E7.8070701@us.cd-adapco.com>

You were right... SCORE_OPTIONS was set... This cleared up the first
problem, but now I've found another one!

    mpirun -np 1 ./runcpi
    /opt/score/deploy/bin.ia64-rhel3-linux2_4/scored.exe: error while loading shared libraries: libscorecommon_so.so: cannot open shared object file: No such file or directory

I've checked LD_LIBRARY_PATH and it includes the directory:

    echo $LD_LIBRARY_PATH
    /opt/score/deploy/lib.ia64-rhel3-linux2_4:/usr/lib:/usr/local.........................

and the file libscorecommon_so.so exists:

    ls -lsa /opt/score/deploy/lib.ia64-rhel3-linux2_4/libscorecommon_so.so
    224 -rw-r--r-- 1 root root 223890 Jun 21 17:27 /opt/score/deploy/lib.ia64-rhel3-linux2_4/libscorecommon_so.so

any clues...?

many thanks,
noel.

Atsushi HORI wrote:
>
> On 2006/07/28, at 0:41, Noel Rycroft wrote:
>
>> ./score/bin/mpirun -np 1 ./runcpi
>> FEP: Unable to connect with SCore-D (cd-ia64)
>
> Aha, check the SCORE_OPTIONS environment variable. I think the scored
> options is already set in the environment.
--
==================================================================
Noel Rycroft                Software Engineer
CD-adapco                   NEW Tel: 212-678-0927
http://www.cd-adapco.com    Fax: 603-643-9994
==================================================================

From kameyama @ pccluster.org  Fri Jul 28 09:32:37 2006
From: kameyama @ pccluster.org (Kameyama Toyohisa)
Date: Fri, 28 Jul 2006 09:32:37 +0900
Subject: [SCore-users-jp] Re: [SCore-users] SCore newbie
In-Reply-To: <44C8E9E7.8070701@us.cd-adapco.com>
References: <44C8D90D.9020903@us.cd-adapco.com> <84DB95FA-A450-47E6-A723-7B1A493EF7A5@allinea.com> <44C8DEC5.2010508@us.cd-adapco.com> <44C8E9E7.8070701@us.cd-adapco.com>
Message-ID: <44C95B25.5090606@pccluster.org>

Noel Rycroft wrote:
> mpirun -np 1 ./runcpi
> /opt/score/deploy/bin.ia64-rhel3-linux2_4/scored.exe: error while
> loading shared libraries: libscorecommon_so.so: cannot open shared
> object file: No such file or directory
> I've checked the LD_LIBRARY_PATH and it includes
>
> echo $LD_LIBRARY_PATH
> /opt/score/deploy/lib.ia64-rhel3-linux2_4:/usr/lib:/usr/local.........................
> and the file libscorecommon_so.so exists in
>
> ls -lsa /opt/score/deploy/lib.ia64-rhel3-linux2_4/libscorecommon_so.so
> 224 -rw-r--r-- 1 root root 223890 Jun 21 17:27
> /opt/score/deploy/lib.ia64-rhel3-linux2_4/libscorecommon_so.so

Please check that the file exists on the compute hosts:

    $ scout ls -lsa /opt/score/deploy/lib.ia64-rhel3-linux2_4/libscorecommon_so.so

Kameyama Toyohisa

From noel.rycroft @ us.cd-adapco.com  Fri Jul 28 22:42:15 2006
From: noel.rycroft @ us.cd-adapco.com (Noel Rycroft)
Date: Fri, 28 Jul 2006 09:42:15 -0400
Subject: [SCore-users-jp] Re: [SCore-users] SCore newbie
In-Reply-To: <44C95B25.5090606@pccluster.org>
References: <44C8D90D.9020903@us.cd-adapco.com> <84DB95FA-A450-47E6-A723-7B1A493EF7A5@allinea.com> <44C8DEC5.2010508@us.cd-adapco.com> <44C8E9E7.8070701@us.cd-adapco.com> <44C95B25.5090606@pccluster.org>
Message-ID: <44CA1437.4070706@us.cd-adapco.com>

Kameyama Toyohisa wrote:
>Noel Rycroft wrote:
>>mpirun -np 1 ./runcpi
>>/opt/score/deploy/bin.ia64-rhel3-linux2_4/scored.exe: error while
>>loading shared libraries: libscorecommon_so.so: cannot open shared
>>object file: No such file or directory
>>I've checked the LD_LIBRARY_PATH and it includes
>>
>>echo $LD_LIBRARY_PATH
>>/opt/score/deploy/lib.ia64-rhel3-linux2_4:/usr/lib:/usr/local.........................
>>
>>and the file libscorecommon_so.so exists in
>>
>>ls -lsa /opt/score/deploy/lib.ia64-rhel3-linux2_4/libscorecommon_so.so
>>224 -rw-r--r-- 1 root root 223890 Jun 21 17:27
>>/opt/score/deploy/lib.ia64-rhel3-linux2_4/libscorecommon_so.so
>
>Please check that the file exists on the compute hosts:
> $ scout ls -lsa /opt/score/deploy/lib.ia64-rhel3-linux2_4/libscorecommon_so.so
>
>Kameyama Toyohisa

Hi Kameyama,

Thanks for your suggestion, but I only have one compute host...
We're trying to set up a test system. So I get the same result...

    scout ls -lsa /opt/score/deploy/lib.ia64-rhel3-linux2_4/libscorecommon_so.so
    [cd-ia64]:
    224 -rw-r--r-- 1 root root 223890 Jun 21 17:27 /opt/score/deploy/lib.ia64-rhel3-linux2_4/libscorecommon_so.so

    noel @ cd-ia64:~/scoretest> ls -lsa /opt/score/deploy/lib.ia64-rhel3-linux2_4/libscorecommon_so.so
    224 -rw-r--r-- 1 root root 223890 Jun 21 17:27 /opt/score/deploy/lib.ia64-rhel3-linux2_4/libscorecommon_so.so

thanks,
Noel..
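The archive leaves the libscorecommon_so.so problem unresolved at this point. As a general troubleshooting sketch, not something posted in the thread, ldd reports whether the dynamic linker can resolve each dependency of a binary under the current LD_LIBRARY_PATH; /bin/sh stands in below for the actual /opt/score/deploy/bin.ia64-rhel3-linux2_4/scored.exe path:

```shell
# List any shared-library dependencies the dynamic linker cannot resolve.
# /bin/sh is a stand-in binary; substitute the real scored.exe path.
BIN=/bin/sh
if ldd "$BIN" | grep -q 'not found'; then
    echo "missing libraries for $BIN:"
    ldd "$BIN" | grep 'not found'
else
    echo "all dependencies resolved"
fi
```

One possibility worth checking in a setup like this is that the process scout uses to launch scored.exe may not inherit the interactive shell's LD_LIBRARY_PATH, so a library visible to an interactive ls can still be invisible to the loader.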