The following names are used in the examples in this document:

server.pccluster.org
    The server host; the SCBDSERV environment variable is set to this name in the examples below.

pcc
    The name of a host group defined in the scoreboard(8) database. You can see the name of the group defined with the group=name attribute in the database configuration file, /opt/score/etc/scorehosts.db (a hypothetical fragment is sketched after this list). An example is given in Configure the scoreboard database in the Installation Guide.

comp3.pccluster.org
    The compute host on which the scored(8) daemon process runs in the multi-user environment examples.

Also, the bash(1) shell is used in the examples.
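A minimal, hypothetical fragment of /opt/score/etc/scorehosts.db showing hosts assigned to the group pcc (the host names, network name, and other attributes are placeholders; the actual syntax for your site is described in the Installation Guide):

    comp0.pccluster.org network=ethernet group=pcc
    comp1.pccluster.org network=ethernet group=pcc
    comp2.pccluster.org network=ethernet group=pcc
    comp3.pccluster.org network=ethernet group=pcc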
Set up the SCore environment according to your login shell:

sh, bash
    PATH=${PATH}:/opt/score/bin
    SCBDSERV=server.pccluster.org
    export SCBDSERV
    ulimit -d unlimited
    ulimit -u unlimited

csh, tcsh
    set path=($path /opt/score/bin)
    setenv SCBDSERV server.pccluster.org
    limit descriptors unlimited
    limit datasize unlimited
    limit maxproc unlimited
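To confirm that the environment has taken effect, you can, for example, print the scoreboard server name (bash shown):

    $ echo $SCBDSERV
    server.pccluster.org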
msgb(1) is an X Window program for watching the activities on the cluster.
Start the program as follows (make sure you have set up your DISPLAY
environment variable and can display X Window programs before starting it;
a sketch of setting DISPLAY follows the command below):
$ msgb -group pcc &
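If DISPLAY is not already set, it can be set like this for bash (the workstation name here is a placeholder for your own display):

    DISPLAY=myworkstation:0.0
    export DISPLAY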
Replace pcc with the name of a group that has been defined in the scoreboard
database. An example is given in Configure the scoreboard database in the
Installation Guide.
The msgb window will be displayed on your X terminal as shown left.
Each box represents a host and shows the activity occupying it.
A blue box indicates that no applications are running on the host.
A red box indicates that the host is locked by a user or the cluster is running
in the Multi-User Environment. A pink box indicates
that SCOOP is locking the host while temporarily collecting data.
The following example uses scout(1) to provide exclusive use of cluster hosts for each remote invocation of commands:
1 $ scout -g pcc
2 SCOUT: Spawn done.
3 SCOUT: session started
4 $ scout
5 [comp0-3]:
6 SCOUT(3.2.0): Ready.
7 $ ls
8 index.html start.html
9 $ scout ls
10 [comp0-3]:
11 index.html
12 start.html
13 $ date
14 Mon Nov 1 09:25:31 JST 1999
15 $ scout date
16 [comp0-3]:
17 Mon Nov 1 09:25:34 JST 1999
18 $ exit
19 SCOUT: session done
20 $

scout uses the Compute Host Lock Server to lock all cluster hosts when the SCOUT session is first started, and again when the session is exited. The hosts are also locked for each remote command execution, but not for local commands, such as the first ls command on line 7. The SCOUT session is started on line 1. The -g option is used to specify the host group, known to scoreboard(8). A shell on each cluster host is spawned and a final message is output on lines 2 and 3. You are now in a SCOUT session. Executing the scout command without any options gets the status from each SCOUT shell on the cluster hosts. scout can also invoke a shell on the local machine, as shown in lines 7 and 13. A shell command prefixed by scout is executed on every cluster host, with the output merged if the results are the same.
When the cluster is configured for a multi-user environment, you can use the
SCORE_OPTIONS environment variable to communicate with the scored(8) daemon
process, which will time-slice your command. For example:
sh, bash
    SCORE_OPTIONS=scored=comp3.pccluster.org
    export SCORE_OPTIONS

csh, tcsh
    setenv SCORE_OPTIONS scored=comp3.pccluster.org
Again, scrun(1) must be used to execute parallel programs in a multi-user environment. MPI programs can also run under the SCore 3.2 parallel programming environment by specifying SCore options on the command line:
$ mpirun -np 4 -score scored=comp3.pccluster.org ./a.out
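For a non-MPI SCore program, a sketch of the equivalent invocation with scrun(1), assuming SCORE_OPTIONS has been exported as above (the node count and program name are placeholders; see scrun(1) for the exact node-specification syntax):

    $ scrun -nodes=4 ./a.out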