PVM-SCore Getting Started


This document describes how to build and run PVM programs on PVM/SCore. The following examples assume the user environment settings described in Getting Started in the SCore User's Guide.

Limitations of running programs on the current version of PVM/SCore

  1. The pvm command is not supported. (Programs using multiple binaries do not work.)
  2. Multiple pvmd processes are not supported on one node. (Run using the -nodes=Nx1 option; see the example below.)
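
For example, a run on four single-processor hosts would use -nodes=4x1, as in this variant of the scrun command shown later in this document:

    $ scrun -nodes=4x1 pvmd -e 1 master
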
A hello program

The program below is intended to be invoked manually; after printing its task id (obtained with pvm_mytid()), it initiates a copy of another program called slave using the pvm_spawn() function. A successful spawn causes the program to execute a blocking receive using pvm_recv. After receiving the message, the program prints the message sent by its counterpart, as well as its task id; the buffer is extracted from the message using pvm_upkstr. The final pvm_exit call dissociates the program from the PVM system.

/*
 * master.c
 */
#include <stdio.h>
#include "pvm3.h"

int
main()
{
    int  cc, tid, msgtag;
    char buf[100];

    /* Print the task id of this (master) task. */
    printf("i'm t%x\n", pvm_mytid());

    /* Spawn one copy of the slave program on any host. */
    cc = pvm_spawn("slave", (char**)0, 0, "", 1, &tid);

    if (cc == 1) {
        /* Block until the slave's message (tag 1) arrives, then unpack it. */
        msgtag = 1;
        pvm_recv(tid, msgtag);
        pvm_upkstr(buf);
        printf("from t%x: %s\n", tid, buf);
    } else
        printf("can't start slave\n");

    /* Detach from PVM before exiting. */
    pvm_exit();
    return 0;
}

The program below is a listing of the ``slave'' or spawned program; its first PVM action is to obtain the task id of the ``master'' using the pvm_parent call. This program then obtains its hostname and transmits it to the master using the three-call sequence - pvm_initsend to initialize the send buffer; pvm_pkstr to place a string, in a strongly typed and architecture-independent manner, into the send buffer; and pvm_send to transmit it to the destination process specified by ptid, ``tagging'' the message with the number 1.

/*
 * slave.c
 */
#include <string.h>
#include <unistd.h>
#include "pvm3.h"

int
main()
{
    int  ptid, msgtag;
    char buf[100];

    /* Find out which task spawned us. */
    ptid = pvm_parent();

    /* Build the greeting, appending this host's name. */
    strcpy(buf, "hello, world from ");
    gethostname(buf + strlen(buf), 64);

    /* Pack the string and send it to the master, tagged with 1. */
    msgtag = 1;
    pvm_initsend(PvmDataDefault);
    pvm_pkstr(buf);
    pvm_send(ptid, msgtag);

    /* Detach from PVM before exiting. */
    pvm_exit();
    return 0;
}

Compilation

Following the standard PVM convention, all PVM applications must be located under the $HOME/pvm3/bin/`/opt/score/pvm/lib/pvmgetarch` directory. Thus, if this directory has not been created, you must create it as follows:

    % mkdir -p $HOME/pvm3/bin/`/opt/score/pvm/lib/pvmgetarch`
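
The architecture name printed by pvmgetarch depends on the platform; on a typical x86 Linux cluster, for example, it would be LINUX, making the directory created above $HOME/pvm3/bin/LINUX:

    % /opt/score/pvm/lib/pvmgetarch
    LINUX
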
To generate a binary and move it to the above directory, the standard PVM distribution provides the aimk tool, a wrapper program for make. The tool sets the make variables $(PVM_ROOT) and $(PVM_ARCH).

PVM-SCore also supports this tool. To use aimk, a Makefile.aimk is needed instead of a Makefile. Here is a sample Makefile.aimk file:


BDIR = $(HOME)/pvm3/bin
XDIR = $(BDIR)/$(PVM_ARCH)
CFLAGS = -I$(PVM_ROOT)/include
LIBS = -L$(PVM_ROOT)/lib/$(PVM_ARCH) -lpvm3

all: master slave

master: ../master.c $(XDIR)
        $(CC) -o $@ $(CFLAGS) ../$@.c $(LIBS)
        mv $@ $(XDIR)
slave: ../slave.c $(XDIR)
        $(CC) -o $@ $(CFLAGS) ../$@.c $(LIBS)
        mv $@ $(XDIR)

After creating the Makefile.aimk, simply invoke the aimk tool as follows:

    % aimk all
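
If the build succeeds, aimk moves both binaries into the architecture-specific directory created earlier. Assuming that directory is $HOME/pvm3/bin/LINUX, a quick check would look like this:

    % ls $HOME/pvm3/bin/LINUX
    master  slave
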
Running a program on a cluster

You may invoke the Compute Host Lock Client, msgb(1), as follows to find some free hosts in the cluster:

$ msgb -group pcc &

pcc is a group defined in the SCore cluster database, scorehosts.db(5). Invoke the scout program with the name of the group on which to run your program:

$ scout -g pcc
SCOUT: Spawn done.   
SCOUT: session started
$
A new shell process is now created as a child of the scout program. If msgb is running, some or all hosts in the msgb window will turn red when the remote processes are invoked with scout. Note that the host on which the scout program is invoked does not have to be one of the hosts of your cluster.

The following example uses two hosts, each with a single processor, so at most one process is spawned on each host:

$ scrun -nodes=2x1 pvmd -e 1 master
SCORE: connected (jid=100)
<0:0> SCORE: 2 nodes (2x1) ready.
...
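
Once the job is running, the master's printf output is forwarded back to your terminal. Assuming the slave is spawned on a host named comp1 (a hypothetical hostname; the hexadecimal task ids are assigned by PVM and will also differ), the output would look something like:

i'm t40002
from t80001: hello, world from comp1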


CREDIT
This document is a part of the SCore cluster system software developed at PC Cluster Consortium, Japan. Copyright (C) 2003 PC Cluster Consortium.