PM/SCI is implemented using the ScaSCI software provided by Scali. You need the ScaSCI, ScaSCIadap, and ScaSCIddk packages. Because we do not provide a binary image for PM/SCI, you must build the system from the SCore distribution source.
First, install the SCore system for Ethernet using the EIT tool or by hand. Then install the Scali packages on the server host:
# rpm -i ScaSCI.rpm
# rpm -i ScaSCIddk.rpm
# rpm -i ScaSCIadap.rpm
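To confirm that the packages have been registered, you can query them with rpm; the package names below are assumed to match the RPM file names:
# rpm -q ScaSCI ScaSCIddk ScaSCIadap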
To build PM/SCI, you must add the following definition to the site-dependent configuration file:
SCASCI_HOME = /opt/scali
Issue the following commands to make SCore with PM/SCI:
# cd /opt/score/score-src/
# echo SCASCI_HOME = /opt/scali >> adm/config/site
# make distclean
# ./configure --option=site
# make
# rsh-all -g pcc -norsh rdist -c /opt/score/deploy @host:
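If you want to verify that the deploy directory has actually reached every compute host, a simple listing such as the following (using the same pcc group) can be used:
# rsh-all -g pcc ls /opt/score/deploy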
Install the Scali packages on all compute hosts as well:
# rpm -i ScaSCI.rpm
# rpm -i ScaSCIddk.rpm
# rpm -i ScaSCIadap.rpm
Then, compile the module on all compute hosts as follows:
# KERNEL_HEADER_BASE=/usr/src
# KERNEL_DIR=linux-2.4
# export KERNEL_HEADER_BASE KERNEL_DIR
# mkdir -p ~/scascibuild
# cd ~/scascibuild
# /opt/scali/kernel/rebuild/configure
# make
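The KERNEL_HEADER_BASE and KERNEL_DIR settings must point at kernel headers that match the running kernel. As a quick sanity check (paths assumed from the settings above), compare the running kernel version with the header tree before building:
# uname -r
# ls /usr/src/linux-2.4/include/linux/version.h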
The ScaSCI driver Makefile assumes the SSP environment. If you do not have the SSP environment, the Makefile must be modified. The ~/scascibuild/Linux/Makefile file contains the following entry:
install:
	scarcp libkmatch.a $K/userkv`cat uname.ver`.a
	scash rm -f $K/ssci.o
The above entry should be replaced with the following entry:
install:
	cp libkmatch.a $K/userkv`cat uname.ver`.a
	rm -f $K/ssci.o
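If you prefer to apply this change non-interactively, a substitution along the following lines should work; it is only a sketch, so check the resulting Makefile before running make install:
# cd ~/scascibuild/Linux
# sed -e 's/scarcp /cp /' -e 's/scash rm /rm /' Makefile > Makefile.new
# mv Makefile.new Makefile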
After the modification, the ScaSCI module is installed as follows:
# make install
The PM/SCI driver build also needs to know where the kernel source tree is located; in this example it is specified as:
KERNEL_DIR = /usr/src/linux-2.4
Then, compile the driver as follows:
# rsh-all -g pcc -norsh rdist -c /opt/score/score-src/SCore/build @host:
# rsh-all -g pcc -norsh rdist -c /opt/score/score-src/SCore/pm2 @host:
# scout -g pcc
# cd /opt/score/score-src/SCore/pm2/arch/sci/driver/linux
# scout make
# scout 'cp /opt/score/score-src/SCore/pm2/arch/sci/driver/linux/pm_sci.o \
/lib/modules/`uname -r`/kernel/drivers/char'
# scout cp /opt/score/score-src/SCore/pm2/arch/sci/tools/pm_sci.rc /etc/rc.d/init.d/pm_sci
# scout /sbin/chkconfig --add pm_sci
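To confirm that the service has been registered on every host, you can list its runlevel settings from within the same scout session:
# scout /sbin/chkconfig --list pm_sci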
# scout /opt/score/score-src/SCore/pm2/arch/sci/tools/mkpmscidev
# exit
# cd /opt/score/score-src/SCore/pm2/arch/sci/tools
# ./mkpmsciconf -g pcc -set -f ./pmsci.conf
# mv ./pmsci.conf /opt/score/etc/pmsci.conf
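The pm_sci service registered above will start at the next boot. If you want to load the driver immediately without rebooting, invoking the init script directly should work, assuming it accepts the conventional start argument:
# scout -g pcc
# scout /etc/rc.d/init.d/pm_sci start
# exit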
Finally, define the networks and the compute hosts in the SCore host database, /opt/score/etc/scorehosts.db. The sci network entry must point at the pmsci.conf file generated above:
/* PM/Ethernet */
ethernet type=ethernet -config:file=/opt/score/etc/pm-ethernet.conf
/* PM/SCI */
sci type=sci -config:file=/opt/score/etc/pmsci.conf
#define PCC MSGBSERV(server.pccluster.org:8766) \
cpugen=pentium-iii speed=800 smp=1 \
network=ethernet,sci group=pcc
comp0.pccluster.org PCC
comp1.pccluster.org PCC
comp2.pccluster.org PCC
comp3.pccluster.org PCC
To check if Omni OpenMP works on SCASH using PM/SCI, compile and run the following example programs under the /opt/omni/lib/openmp/examples/scash-test/ directory.
$ cd /opt/omni/lib/openmp/examples/scash-test/cg
$ make
$ ./cg-makedata data
final nonzero count in sparse number of nonzeros = 1853104
Fri Nov 2 17:34:48 JST 2001
$ scout -g pcc
$ scrun ./cg-omp data
SCore-D 4.2 connected.
<0:0> SCORE: 4 nodes (4x1) ready.
read file ..
omp_num_thread=1
omp_max_thread=4
read done ...
0 1.4483e-13 19.99975812770398
1 1.3444e-15 17.11404957455056
2 1.3412e-15 17.12966689461433
3 1.3074e-15 17.13021135811924
4 1.2853e-15 17.13023388563526
5 1.2249e-15 17.13023498794818
6 1.2164e-15 17.13023504989156
7 1.1879e-15 17.13023505375095
8 1.1349e-15 17.13023505401005
9 1.1087e-15 17.13023505402842
10 1.0767e-15 17.13023505402978
11 1.1344e-15 17.13023505402988
12 1.0106e-15 17.13023505402988
13 9.9676e-16 17.13023505402989
14 9.3180e-16 17.13023505402989
time = 88.405219, 5.893681 (0.000000e+00 - 8.840522e+01)/15, NITCG=25
$ exit
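By default scrun runs on the hosts provided by the surrounding scout session. If you want to request an explicit node configuration, scrun also accepts a node specification; for example (adjust the count to your cluster):
$ scrun -nodes=4 ./cg-omp data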
$ cd /opt/omni/lib/openmp/examples/scash-test/laplace
$ make
$ scout -g pcc
$ scrun ./lap-omp
SCore-D 4.2 connected.
<0:0> SCORE: 4 nodes (4x1) ready.
sum = 0.579817
iter=50, time=4.40174
$ exit
$ cd /opt/omni/lib/openmp/examples/scash-test/laplace-f77
$ make
$ scout -g pcc
$ scrun ./lap-omp
SCore-D 4.2 connected.
<0:0> SCORE: 4 nodes (4x1) ready.
sum= 52.1801176
time= 4.04782808
end..
$ exit