The bash(1) shell is used in the examples.
SCBDSERV and PATH environment variables
If you have installed the system by hand, please make sure that the SCBDSERV shell environment variable is set. If it is not, please log in again. If you still do not see the variable, please make sure that a profile file for your login shell has been created under /etc/profile.d, as described in the server host settings section.
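As a quick check, you can print both variables. The values below are only an example; SCBDSERV should name your server host, and PATH should include the SCore binary directories:

$ echo $SCBDSERV
server.score.rwcp.or.jp
$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/opt/score/bin:/opt/score/sbin:/opt/score/deploy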
Start the msgb program, which displays the status of the cluster nodes:

$ msgb -group pcc &

pcc is the name of the group you defined in the scoreboard database.

hello.cc
A copy of the program is located in /opt/score/example/mttl/hello.cc:

#include <stdio.h>
#include <mpcxx.h>

int main(int argc, char **argv)
{
    /* Initialize the MPC++ SPMD runtime before doing anything else. */
    mpcxx_spmd_initialize(argc, argv);
    /* myNode holds this node's number, set by the MPC++ runtime. */
    printf("hello, world (from node %d)\n", myNode);
    exit(0);
}

Compile it with mpc++:
$ mpc++ -o hello hello.cc
Execute the program locally:
$ scrun -nodes=1 ./hello
SCORE: connected (jid=100)
<0:0> SCORE: 1 node ready.
hello, world (from node 0)
$

Execute the program on four nodes of the cluster:
$ scrun -nodes=4 ./hello
SCORE: connected (jid=100)
<0:0> SCORE: 4 nodes (4x1) ready.
hello, world (from node 2)
hello, world (from node 1)
hello, world (from node 3)
hello, world (from node 0)
$
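The (4x1) in the SCORE message shows the shape of the run: four nodes with one process each. On SMP hosts the -nodes option also accepts this NxP form to request several processes per node; a hypothetical invocation for dual-processor nodes (the cluster in these transcripts has single-processor hosts):

$ scrun -nodes=4x2 ./hello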
cpi.c

Copy /opt/score/example/mpi/cpi.c to your working directory and compile it with mpicc:
$ mpicc -o cpi cpi.c -lm
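The example computes an approximation of pi. For reference, its core follows the classic MPICH cpi program: rank 0 broadcasts the number of intervals, every process integrates 4/(1+x^2) over its share of [0,1] by the midpoint rule, and MPI_Reduce sums the partial results. This is a condensed sketch, not the exact source shipped in /opt/score/example/mpi:

#include <stdio.h>
#include <math.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int n = 10000, myid, numprocs, i;
    double PI25DT = 3.141592653589793238462643;
    double mypi, pi, h, sum, x;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    /* Rank 0 decides the number of intervals; everyone receives it. */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Midpoint rule for the integral of 4/(1+x^2) over [0,1],
       with the intervals dealt out round-robin across the ranks. */
    h = 1.0 / (double)n;
    sum = 0.0;
    for (i = myid + 1; i <= n; i += numprocs) {
        x = h * ((double)i - 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    mypi = h * sum;

    /* Sum the partial results onto rank 0. */
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (myid == 0)
        printf("pi is approximately %.16f, Error is %.16f\n",
               pi, fabs(pi - PI25DT));
    MPI_Finalize();
    return 0;
}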
Execute the program locally using both scrun and mpirun:
$ scrun -nodes=1 ./cpi
SCORE: connected (jid=100)
<0:0> SCORE: 1 node ready.
Process 0 of 1 on comp3.score.rwcp.or.jp
pi is approximately 3.1416009869231254, Error is 0.0000083333333323
wall clock time = 0.000621
$ mpirun -np 1 ./cpi
SCORE: connected (jid=100)
<0:0> SCORE: 1 node ready.
Process 0 of 1 on comp3.score.rwcp.or.jp
pi is approximately 3.1416009869231254, Error is 0.0000083333333323
wall clock time = 0.000645
$

Execute the program on four nodes of the cluster using both scrun and mpirun:
$ scrun -nodes=4 ./cpi
SCORE: connected (jid=100)
<0:0> SCORE: 4 nodes (4x1) ready.
Process 1 of 4 on comp1.score.rwcp.or.jp
Process 3 of 4 on comp3.score.rwcp.or.jp
Process 2 of 4 on comp2.score.rwcp.or.jp
Process 0 of 4 on comp0.score.rwcp.or.jp
pi is approximately 3.1416009869231245, Error is 0.0000083333333314
wall clock time = 0.000945
$ mpirun -np 4 ./cpi
SCORE: connected (jid=100)
<0:0> SCORE: 4 nodes (4x1) ready.
Process 2 of 4 on comp2.score.rwcp.or.jp
Process 1 of 4 on comp1.score.rwcp.or.jp
Process 0 of 4 on comp0.score.rwcp.or.jp
Process 3 of 4 on comp3.score.rwcp.or.jp
pi is approximately 3.1416009869231245, Error is 0.0000083333333314
wall clock time = 0.003627
$
Exit the scout session:

$ exit
SCOUT: session done
$
Multi-user environment

To test the multi-user environment, run scout and the SCore-D operating system as root. The startup of scored will take a few seconds to complete:
$ /bin/su -
# export SCBDSERV=`hostname`
# export PATH=$PATH:/opt/score/bin:/opt/score/sbin:/opt/score/deploy
# scout -g pcc
SCOUT: Spawn done.
SCOUT: session started
# scored
SYSLOG: Timeslice is set to 500[ms]
SYSLOG: Cluster[0]: comp0.score.rwcp.or.jp@0...comp3.score.rwcp.or.jp@3
SYSLOG: BIN=linux, CPUGEN=pentium-iii, SMP=1, SPEED=500
SYSLOG: Network[0]: myrinet/myrinet
SYSLOG: SCore-D network: myrinet/myrinet
SYSLOG: SCore-D server: comp3.score.rwcp.or.jp:9901

You will see the node blocks in the msgb window change from blue to red.

As an ordinary user, execute the program again, this time specifying the host on which the scored server is running. By default, this is the last host of the cluster group:
$ scrun -scored=comp3,nodes=1 ./hello
SCORE: connected (jid=100)
<0:0> SCORE: 1 node ready.
hello, world (from node 0)
$

You will see output messages on the shell where scored was invoked, similar to the following:
SYSLOG: Login request: user1@host1.score.rwcp.or.jp:4556
SYSLOG: Login accepted: user1@host1.score.rwcp.or.jp:4556, JOB-ID: 100 Nodes: 3-3, ./hello
SYSLOG: Logout: user1@host1.score.rwcp.or.jp:4556, JOB-ID: 100

Execute the program on four nodes of the cluster group:
$ scrun -scored=comp3,nodes=4 ./hello
SCORE: connected (jid=200)
<0:0> SCORE: 4 nodes (4x1) ready.
hello, world (from node 2)
hello, world (from node 1)
hello, world (from node 3)
hello, world (from node 0)
$
Set the SCORE_OPTIONS environment variable so that the scored host does not have to be given on every invocation, then execute the program locally using both scrun and mpirun:
$ export SCORE_OPTIONS=scored=comp3
$ scrun -nodes=1 ./cpi
SCORE: connected (jid=300)
<0:0> SCORE: 1 node ready.
Process 0 of 1 on comp3.score.rwcp.or.jp
pi is approximately 3.1416009869231254, Error is 0.0000083333333323
wall clock time = 0.000621
$ mpirun -np 1 ./cpi
SCORE: connected (jid=400)
<0:0> SCORE: 1 node ready.
Process 0 of 1 on comp3.score.rwcp.or.jp
pi is approximately 3.1416009869231254, Error is 0.0000083333333323
wall clock time = 0.000645
$

Execute the program on four nodes of the cluster using both scrun and mpirun:
$ scrun -nodes=4 ./cpi
SCORE: connected (jid=500)
<0:0> SCORE: 4 nodes (4x1) ready.
Process 1 of 4 on comp1.score.rwcp.or.jp
Process 3 of 4 on comp3.score.rwcp.or.jp
Process 2 of 4 on comp2.score.rwcp.or.jp
Process 0 of 4 on comp0.score.rwcp.or.jp
pi is approximately 3.1416009869231245, Error is 0.0000083333333314
wall clock time = 0.000945
$ mpirun -np 4 ./cpi
SCORE: connected (jid=600)
<0:0> SCORE: 4 nodes (4x1) ready.
Process 2 of 4 on comp2.score.rwcp.or.jp
Process 1 of 4 on comp1.score.rwcp.or.jp
Process 0 of 4 on comp0.score.rwcp.or.jp
Process 3 of 4 on comp3.score.rwcp.or.jp
pi is approximately 3.1416009869231245, Error is 0.0000083333333314
wall clock time = 0.003627
$

If you wish to test more example programs, you will find them located under the /opt/score/example directory.
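Note that SCORE_OPTIONS stays set for the remainder of your shell session. If you want to go back to specifying the scored host explicitly, clear it with the standard shell builtin:

$ unset SCORE_OPTIONS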
Shut down scored:
# sc_console comp3 -c shutdown
#

You will see output messages on the shell where scored was invoked, similar to the following:
SYSLOG: SCore-D shutdown.
SYSLOG: CONSOLE SHUTDOWN
#

You will see the node blocks in the msgb window change back from red to blue.

Exit the scout session:
# exit
SCOUT: session done
#