SCore-D Test Procedure


Try the following test procedure on your server host to verify that your cluster is operating correctly.
The bash(1) shell is used in the examples.
  1. Set the SCBDSERV and PATH environment variables

    If you installed the system by hand, make sure that the SCBDSERV environment variable is set. If it is not, log in again. If you still do not see the variable, make sure that a profile file for your login shell has been created under /etc/profile.d, as described in the server host settings section.
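
    A quick way to check both variables is shown below. The output is illustrative only; it will reflect your own server host name and installation prefix:
    $ echo $SCBDSERV
    server.pccluster.org
    $ type scrun
    scrun is /opt/score/bin/scrun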

  2. Invoke the Compute Host Lock Client
    $ msgb -group pcc &
    pcc is the name of the group you defined in the scoreboard database.

  3. Compile and execute an MPC++ MTTL program

    Create the following program and call it hello.cc. A copy of the program is located in /opt/score/example/mttl/hello.cc:
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpcxx.h>

    /* SPMD program: every node executes main(); the MTTL global
       myNode holds this node's number. */
    int main(int argc, char **argv) {
        mpcxx_spmd_initialize(argc, argv);
        printf("hello, world (from node %d)\n", myNode);
        exit(0);
    }
    Compile it with mpc++:
    $ mpc++ -o hello hello.cc
    Execute the program on a single CPU:
    $ scrun -nodes=1 ./hello
    SCore-D 5.0.0 connected.
    <0:0> SCORE: One node ready.
    hello, world (from node 0)
    $ 
    Execute the program on four nodes of the cluster:
    $ scrun -nodes=4 ./hello
    SCore-D 5.0.0 connected.
    <0:0> SCORE: 4 nodes (4x1) ready.
    hello, world (from node 2)
    hello, world (from node 1)
    hello, world (from node 3)
    hello, world (from node 0)
    $ 
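
    For comparison, the same SPMD behavior can be written with plain MPI instead of MTTL. The following is a minimal sketch (not one of the shipped SCore examples); it compiles with the mpicc driver introduced in the next step:
    #include <stdio.h>
    #include <mpi.h>

    /* Each MPI process reports its rank, mirroring the MTTL hello above. */
    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("hello, world (from node %d of %d)\n", rank, size);
        MPI_Finalize();
        return 0;
    }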
  4. Compile and execute an MPICH-SCore program

    Copy the program /opt/score/example/mpi/cpi.c to your working directory and compile it with mpicc:
    $ mpicc -o cpi cpi.c -lm
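    For reference, cpi.c approximates pi by midpoint-rule integration of 4/(1+x^2) over [0,1]: each rank sums every size-th slice of the interval, and the partial sums are combined with MPI_Reduce. A condensed sketch of that structure follows (the shipped source differs in details such as timing and host reporting):
    #include <stdio.h>
    #include <math.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int n = 10000, rank, size, i;
        double h, sum = 0.0, x, mypi, pi;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        h = 1.0 / (double)n;
        /* Each rank sums every size-th slice of the midpoint rule. */
        for (i = rank + 1; i <= n; i += size) {
            x = h * ((double)i - 0.5);
            sum += 4.0 / (1.0 + x * x);
        }
        mypi = h * sum;
        /* Combine the partial sums on rank 0. */
        MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("pi is approximately %.16f, Error is %.16f\n",
                   pi, fabs(pi - M_PI));
        MPI_Finalize();
        return 0;
    }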
    Execute the program on a single CPU using both scrun and mpirun:
    $ scrun -nodes=1 ./cpi
    SCore-D 5.0.0 connected.
    <0:0> SCORE: One node ready.
    Process 0 of 1 on comp3.pccluster.org
    pi is approximately 3.1416009869231254, Error is 0.0000083333333323
    wall clock time = 0.000621
    $ mpirun -np 1 ./cpi
    SCore-D 5.0.0 connected.
    <0:0> SCORE: One node ready.
    Process 0 of 1 on comp3.pccluster.org
    pi is approximately 3.1416009869231254, Error is 0.0000083333333323
    wall clock time = 0.000645
    $ 
    Execute the program on four nodes of the cluster using both scrun and mpirun:
    $ scrun -nodes=4 ./cpi
    SCORE: connected (jid=100)
    <0:0> SCORE: 4 nodes (4x1) ready.
    Process 1 of 4 on comp1.pccluster.org
    Process 3 of 4 on comp3.pccluster.org
    Process 2 of 4 on comp2.pccluster.org
    Process 0 of 4 on comp0.pccluster.org
    pi is approximately 3.1416009869231245, Error is 0.0000083333333314
    wall clock time = 0.000945
    $ mpirun -np 4 ./cpi
    SCORE: connected (jid=100)
    <0:0> SCORE: 4 nodes (4x1) ready.
    Process 2 of 4 on comp2.pccluster.org
    Process 1 of 4 on comp1.pccluster.org
    Process 0 of 4 on comp0.pccluster.org
    Process 3 of 4 on comp3.pccluster.org
    pi is approximately 3.1416009869231245, Error is 0.0000083333333314
    wall clock time = 0.003627
    $ 
  5. Exit the Single-User Environment
    $ exit
    SCOUT: session done
    $ 
  6. Start the SCore-D operating system for the Multi-User Environment

    Start scout and the SCore-D operating system as root. The startup of scored will take a few seconds to complete:
    $ /bin/su -
    # export SCBDSERV=`hostname`
    # export PATH=$PATH:/opt/score/bin:/opt/score/sbin:/opt/score/deploy
    # scout -g pcc
    SCOUT: Spawn done.
    SCOUT: session started
    # scored
    SYSLOG: /opt/score5.0.0/deploy/scored
    SYSLOG: SCore-D 5.0.0  $Id: init.cc,v 1.66 2002/02/13 04:18:40 hori Exp $
    SYSLOG: Compile option(s): 
    SYSLOG: SCore-D network: myrinet/myrinet
    SYSLOG: Cluster[0]: (0..3)x1.i386-redhat7-linux2_4.i686.500
    SYSLOG:   Memory: 249[MB], Swap: 259[MB], Disk: 3035[MB]
    SYSLOG:   Network[0]: myrinet/myrinet
    SYSLOG:   Network[1]: ethernet/ethernet
    SYSLOG: Scheduler initiated: Timeslice = 500 [msec]
    SYSLOG:   Queue[0] activated, exclusive scheduling
    SYSLOG:   Queue[1] activated, time-sharing scheduling
    SYSLOG:   Queue[2] activated, time-sharing scheduling
    SYSLOG: Session ID: 0
    SYSLOG: Server Host: comp3.pccluster.org
    SYSLOG: Backup Host: comp1.pccluster.org
    SYSLOG: Backup file is lost and create it.
    SYSLOG: Server file is lost and create it.
    SYSLOG: Operated by: root
    SYSLOG: ========= SCore-D (5.0.0) bootup in SECURE MODE ========
    
    You will see the node blocks in the msgb window change from blue to red.

  7. Execute an MPC++ MTTL program under the Multi-User Environment

    In a different shell, execute the program locally. You must specify the host where the scored server is running; by default, this is the last host of the cluster group:
    $ scrun -scored=comp3,nodes=1 ./hello
    SCore-D 5.0.0 connected (jid=1).
    <0:0> SCORE: One node ready.
    hello, world (from node 0)
    $ 
    You will see output messages on the shell where scored was invoked, similar to the following:
    SYSLOG: Login request: user1@server.pccluster.org:32878
    SYSLOG: Login accepted: user1@server.pccluster.org:32878, JID: 1, Hosts: 1(1x1)@0, Priority: 1, Command: ./hello 
    SYSLOG: Logout: user1@server.pccluster.org:32878, JOB-ID: 1, CPU Time: 134.0[m]
    
    Execute the program on four nodes of the cluster group:
    $ scrun -scored=comp3,nodes=4 ./hello
    SCore-D 5.0.0 connected (jid=2).
    <0:0> SCORE: 4 nodes (4x1) ready.
    hello, world (from node 2)
    hello, world (from node 1)
    hello, world (from node 3)
    hello, world (from node 0)
    $ 
  8. Execute an MPICH-SCore program under the Multi-User Environment

    Execute the program on a single CPU using both scrun and mpirun:
    $ export SCORE_OPTIONS=scored=comp3
    $ scrun -nodes=1 ./cpi
    SCore-D 5.0.0 connected (jid=3).
    <0:0> SCORE: One node ready.
    Process 0 of 1 on comp3.pccluster.org
    pi is approximately 3.1416009869231254, Error is 0.0000083333333323
    wall clock time = 0.000621
    $ mpirun -np 1 ./cpi
    SCore-D 5.0.0 connected (jid=4).
    <0:0> SCORE: One node ready.
    Process 0 of 1 on comp3.pccluster.org
    pi is approximately 3.1416009869231254, Error is 0.0000083333333323
    wall clock time = 0.000645
    $ 
    Execute the program on four nodes of the cluster using both scrun and mpirun:
    $ scrun -nodes=4 ./cpi
    SCore-D 5.0.0 connected (jid=5).
    <0:0> SCORE: 4 nodes (4x1) ready.
    Process 1 of 4 on comp1.pccluster.org
    Process 3 of 4 on comp3.pccluster.org
    Process 2 of 4 on comp2.pccluster.org
    Process 0 of 4 on comp0.pccluster.org
    pi is approximately 3.1416009869231245, Error is 0.0000083333333314
    wall clock time = 0.000945
    $ mpirun -np 4 ./cpi
    SCore-D 5.0.0 connected (jid=6).
    <0:0> SCORE: 4 nodes (4x1) ready.
    Process 2 of 4 on comp2.pccluster.org
    Process 1 of 4 on comp1.pccluster.org
    Process 0 of 4 on comp0.pccluster.org
    Process 3 of 4 on comp3.pccluster.org
    pi is approximately 3.1416009869231245, Error is 0.0000083333333314
    wall clock time = 0.003627
    $ 
    If you wish to test more example programs, you will find them under the /opt/score/example directory.

  9. Stop the Multi-User Environment

    In a different window, execute the following command to stop scored:
    # sc_console comp3 -c shutdown
    # 
    You will see output messages on the shell where scored was invoked, similar to the following:
    SYSLOG: CONSOLE connected from server.pccluster.org
    CONSOLE: >> shutdown 
    SYSLOG: SCore-D shutting down in 0 seconds.
    SYSLOG: Login disabled.
    SYSLOG: Waiting for all job terminates.
    SYSLOG: CONSOLE disconnected.
    SYSLOG: SCore-D shutdown.
    # 
    You will see the node blocks in the msgb window change back from red to blue.

    You can now exit from the scout session:
    # exit
    SCOUT: session done
    # 

PC Cluster Consortium
