Getting Started


  1. Document Conventions
    The following settings are used throughout this guide:

    server.pccluster.org
    This refers to the host where the cluster database server, scoreboard(8), is running.

    pcc
    This refers to the name given to a group of hosts defined in the scoreboard(8) database. You can see the name of the group defined with the group=name attribute in the database configuration file, /opt/score/etc/scorehosts.db. An example is given in Configure the scoreboard database in the Installation Guide.

    comp3.pccluster.org
    This refers to the host where the SCore-D server is running, which, by default, is the last host of the cluster.

    Also, the bash(1) shell is used in the examples.

  2. Setting up your login shell
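    The exact setup depends on your site. As a minimal sketch, assuming SCore is installed under the /opt/score prefix used elsewhere in this guide, you can add the SCore commands to your PATH in ~/.bashrc:

```shell
# Hypothetical ~/.bashrc snippet; /opt/score/bin is an assumed install path.
export PATH=/opt/score/bin:"$PATH"

# Confirm the directory is now on PATH.
case ":$PATH:" in
  *:/opt/score/bin:*) echo "PATH configured" ;;
  *) echo "PATH not configured" ;;
esac
```

    After editing ~/.bashrc, log in again (or source the file) so the new PATH takes effect.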
  3. Invoking the Compute Host Lock Client
    The Compute Host Lock Client, msgb(1), is an X Window program for watching activity on the cluster hosts. Before starting it, make sure your DISPLAY environment variable is set and that you can display X Window programs. Then start it as follows:
    $ msgb -group pcc &
    
    [window image of msgb]

    Make sure that you replace pcc with the name of a group that has been defined in the scoreboard database. An example is given in Configure the scoreboard database in the Installation Guide.

    An msgb window will be displayed on your X terminal as shown above. Each box represents a host and shows the activity currently occupying it: a blue box indicates that no applications are running on the host; a red box indicates that the host is locked by a user or that the cluster is running in the Multi-User Environment; a pink box indicates that SCOOP is locking the host while temporarily collecting data.
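    Before launching msgb, you can verify that your X display is configured. A minimal pre-flight check (the display name mydesktop:0 is only an example):

```shell
# Check whether DISPLAY is set before starting X clients such as msgb.
if [ -n "$DISPLAY" ]; then
    echo "DISPLAY is set to $DISPLAY"
else
    echo "DISPLAY is not set; e.g. export DISPLAY=mydesktop:0"
fi
```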

  4. Running in the Single-User Environment

    The following example uses scout(1) to provide exclusive use of cluster hosts for each remote invocation of commands:

     1 $ scout -g pcc
     2 SCOUT: Spawn done.   
     3 SCOUT: session started
     4 $ scout
     5 [comp0-3]:
     6 SCOUT(3.2.0): Ready.
     7 $ ls
     8 index.html  start.html
     9 $ scout ls
    10 [comp0-3]:
    11 index.html
    12 start.html
    13 $ date
    14 Mon Nov  1 09:25:31 JST 1999
    15 $ scout date
    16 [comp0-3]:
    17 Mon Nov  1 09:25:34 JST 1999
    18 $ exit
    19 SCOUT: session done
    20 $
    
    The SCOUT session is started on line 1; the -g option specifies a host group known to scoreboard(8). A shell is spawned on each cluster host, and the messages on lines 2 and 3 confirm that the session is ready. Executing the scout command without any options (line 4) gets the status from each SCOUT shell on the cluster hosts. Commands typed without the scout prefix, such as ls on line 7 and date on line 13, run only on the local machine. A shell command prefixed by scout is executed on every cluster host, with the output merged if the results are the same (lines 9 through 17). scout uses the Compute Host Lock Server to lock all cluster hosts when the SCOUT session is first started and again when the session is exited; the hosts are also locked for each remote command execution, but not for local commands.
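    When the per-host results differ, scout does not merge them: a host-specific command such as hostname makes this visible, printing each host's output separately. A guarded sketch (scout is only usable inside a SCOUT session on the cluster):

```shell
# Inside a SCOUT session, hosts producing different output are listed
# separately rather than merged under one [comp0-3] banner.
if command -v scout >/dev/null 2>&1; then
    scout hostname
else
    echo "scout not found; run this inside a SCOUT session"
fi
```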

  5. Running in the Multi-User Environment

    When the cluster is configured for a multi-user environment, you can use the SCORE_OPTIONS environment variable to pass options to the scored(8) daemon process, which time-slices your program among users.
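    For example, the scored= option can be placed in SCORE_OPTIONS once instead of on each command line; comp3.pccluster.org is the SCore-D server host from the document conventions above (a sketch, not a complete list of options):

```shell
# Hypothetical: name the SCore-D server host once via the environment.
# scored=comp3.pccluster.org follows the option syntax used with mpirun
# later in this section.
export SCORE_OPTIONS="scored=comp3.pccluster.org"
echo "SCORE_OPTIONS=$SCORE_OPTIONS"
```

    With this set, subsequent scrun(1) or mpirun invocations would not need the scored option repeated on the command line; consult scrun(1) for the full option syntax.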

    scrun(1) must likewise be used to execute parallel programs in the multi-user environment. MPI programs can also run under the SCore 3.2 parallel programming environment by specifying SCore options on the command line:

    $ mpirun -np 4 -score scored=comp3.pccluster.org ./a.out
    

  6. The Programming Environment


PC Cluster Consortium

CREDIT
This document is a part of the SCore cluster system software developed at PC Cluster Consortium, Japan. Copyright (C) 2003 PC Cluster Consortium.