About PM/SCI


The current PM/SCI is designed to make the Omni OpenMP environment available over the SCI network. It has one limitation: only a single-user environment is supported. In other words, the SCore multi-user environment is not available on the SCI network.

PM/SCI is implemented using ScaSCI, provided by Scali. You need the ScaSCI, ScaSCIadap, and ScaSCIddk environment. Because we do not provide a binary image for PM/SCI, you must build the system from the SCore distribution source.

SCore installation on the server host and compute hosts

First of all, please install the SCore system for Ethernet using the EIT tool or by hand.
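
Before proceeding, it is worth confirming that the Ethernet installation works. The following is a minimal sanity check (it assumes your compute host group is named pcc, as in the examples below); every compute host should print its own host name:

        # scout -g pcc
        # scout hostname
        # exit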

PM/SCI installation on the server host

  1. Scali header file installation
    The ScaSCI, ScaSCIadap, and ScaSCIddk packages may be downloaded from http://www.scali.com/. To install them, please issue the following commands:
        # rpm -i ScaSCI.rpm
        # rpm -i ScaSCIddk.rpm
        # rpm -i ScaSCIadap.rpm
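
    You can verify that all three packages are installed by querying the RPM database (the package names below are taken from the .rpm files above; adjust them if your versions are named differently):

        # rpm -q ScaSCI ScaSCIddk ScaSCIadap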
    
  2. SCore package installation
    If you have not extracted the source files, read the Source Module Extraction page.

    To build PM/SCI, you must add the following definition to the site-dependent file:

        SCASCI_HOME = /opt/scali
    

    Issue the following commands to build SCore with PM/SCI:
        # cd /opt/score/score-src/
        # echo SCASCI_HOME = /opt/scali >> adm/config/site
        # make distclean
        # ./configure --option=site
        # make
        # rsh-all -g pcc -norsh rdist -c /opt/score/deploy @host:
    

PM/SCI installation on all compute hosts

All compute hosts must be set up as follows:
  1. Scali SCI environment setup
    If the Scali SCI environment has not been set up, install the Scali SCI packages as follows:
            # rpm -i ScaSCI.rpm
            # rpm -i ScaSCIddk.rpm
            # rpm -i ScaSCIadap.rpm
    
    Then, compile the module on all compute hosts as follows:
            # KERNEL_HEADER_BASE=/usr/src
            # KERNEL_DIR=linux-2.4
            # export KERNEL_HEADER_BASE KERNEL_DIR
            # mkdir -p ~/scascibuild
            # cd ~/scascibuild
            # /opt/scali/kernel/rebuild/configure
            # make
    
    The ScaSCI driver Makefile assumes the SSP environment. If you do not have the SSP environment, you must edit the ~/scascibuild/Linux/Makefile file, which contains the following entry:
            install:
                    scarcp libkmatch.a $K/userkv`cat uname.ver`.a 
                    scash rm -f $K/ssci.o
    
    The above entry should be replaced with the following entry:
            install:
                    cp libkmatch.a $K/userkv`cat uname.ver`.a 
                    rm -f $K/ssci.o
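
    If you are setting up many hosts, this edit can also be scripted. The following sed invocation is only a sketch; it assumes the install entry appears exactly as shown above:

            # cd ~/scascibuild/Linux
            # sed -e 's/scarcp libkmatch/cp libkmatch/' \
                  -e 's/scash rm -f/rm -f/' Makefile > Makefile.new
            # mv Makefile.new Makefile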
    
    After the modification, the ScaSCI module is installed as follows:
            # make install
    
  2. Building the PM/SCI driver
    The PM/SCI driver source is located under the /opt/score/score-src/SCore/pm2/arch/sci/driver/linux directory. Modify the Makefile in that directory to specify the kernel source directory. The default value is as follows:
            KERNEL_DIR = /usr/src/linux-2.4
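
    For example, if your kernel source tree lives elsewhere (the path below is only an illustration), change the line accordingly:

            KERNEL_DIR = /usr/src/linux-2.4.18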
    
    Then, compile the driver as follows:
            # rsh-all -g pcc -norsh rdist -c /opt/score/score-src/SCore/build @host:
            # rsh-all -g pcc -norsh rdist -c /opt/score/score-src/SCore/pm2 @host:
            # scout -g pcc
            # cd /opt/score/score-src/SCore/pm2/arch/sci/driver/linux
            # scout make
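
    At this point the pm_sci.o driver object should exist on every host. Still inside the scout session, you can check for it before installing (an optional sanity check):

            # scout ls -l /opt/score/score-src/SCore/pm2/arch/sci/driver/linux/pm_sci.o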
    
  3. PM/SCI driver installation
    The PM/SCI driver is installed on all compute hosts from the server as follows:
            # scout 'cp /opt/score/score-src/SCore/pm2/arch/sci/driver/linux/pm_sci.o \
              /lib/modules/`uname -r`/kernel/drivers/char'
            # scout cp /opt/score/score-src/SCore/pm2/arch/sci/tools/pm_sci.rc /etc/rc.d/init.d/pm_sci
            # scout /sbin/chkconfig --add pm_sci
            # scout /opt/score/score-src/SCore/pm2/arch/sci/tools/mkpmscidev
            # exit
    
After the driver installation, please reboot the system.
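
As a quick check after the reboot, you can list the loaded kernel modules on all compute hosts and look for pm_sci (the module name is assumed to match the driver object installed above):

        # scout -g pcc
        # scout /sbin/lsmod
        # exit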

SCore setup on the server host

  1. PM/SCI configuration set up
        # cd /opt/score/score-src/SCore/pm2/arch/sci/tools
        # ./mkpmsciconf -g pcc -set -f ./pmsci.conf
        # mv ./pmsci.conf /opt/score/etc/pmsci.conf
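
    The contents of the generated file depend on your SCI topology; you can inspect it before moving on:

        # cat /opt/score/etc/pmsci.conf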
    
  2. Configure the scoreboard database
    Create the /opt/score/etc/scoreboard.db file as follows. The entries below are an example for PM/SCI where the server host name is server.pccluster.org and the compute host names are comp0.pccluster.org through comp3.pccluster.org:
    /* PM/Ethernet */
    ethernet	type=ethernet -config:file=/opt/score/etc/pm-ethernet.conf
    /* PM/SCI */
    sci             type=sci -config:file=/opt/score/etc/pmsci.conf
    #define PCC     MSGBSERV(server.pccluster.org:8766) \
                    cpugen=pentium-iii speed=800 smp=1 \
                    network=ethernet,sci group=pcc
    comp0.pccluster.org	PCC
    comp1.pccluster.org	PCC
    comp2.pccluster.org	PCC
    comp3.pccluster.org	PCC
    
  3. Reboot the server host.
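
After the reboot, you can check that the scoreboard database is being served. For example, list the hosts registered in group pcc (this assumes the scorehosts command from the SCore distribution is in your path; the output should show the four compute hosts):

        # scorehosts -g pcc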

Testing Omni OpenMP on SCASH

To check if Omni OpenMP works on SCASH using PM/SCI, compile and run the following example programs under the /opt/omni/lib/openmp/examples/scash-test/ directory.

  1. CG

    $ cd /opt/omni/lib/openmp/examples/scash-test/cg
    $ make
    $ ./cg-makedata data
    final nonzero count in sparse number of nonzeros  =          1853104
    Fri Nov  2 17:34:48 JST 2001
    $ scout -g pcc
    $ scrun ./cg-omp data
    SCore-D 4.2 connected.
    <0:0> SCORE: 4 nodes (4x1) ready.
    read file ..
    omp_num_thread=1
    omp_max_thread=4
    read done ... 
        0    1.4483e-13     19.99975812770398
        1    1.3444e-15     17.11404957455056
        2    1.3412e-15     17.12966689461433
        3    1.3074e-15     17.13021135811924
        4    1.2853e-15     17.13023388563526
        5    1.2249e-15     17.13023498794818
        6    1.2164e-15     17.13023504989156
        7    1.1879e-15     17.13023505375095
        8    1.1349e-15     17.13023505401005
        9    1.1087e-15     17.13023505402842
       10    1.0767e-15     17.13023505402978
       11    1.1344e-15     17.13023505402988
       12    1.0106e-15     17.13023505402988
       13    9.9676e-16     17.13023505402989
       14    9.3180e-16     17.13023505402989
    time = 88.405219, 5.893681 (0.000000e+00 - 8.840522e+01)/15, NITCG=25
    $ exit
    

  2. Laplace written in C

    $ cd /opt/omni/lib/openmp/examples/scash-test/laplace
    $ make
    $ scout -g pcc
    $ scrun ./lap-omp
    SCore-D 4.2 connected.
    <0:0> SCORE: 4 nodes (4x1) ready.
    sum = 0.579817
    iter=50, time=4.40174
    $ exit
    
  3. Laplace written in F77

    $ cd /opt/omni/lib/openmp/examples/scash-test/laplace-f77
    $ make
    $ scout -g pcc
    $ scrun ./lap-omp
    SCore-D 4.2 connected.
    <0:0> SCORE: 4 nodes (4x1) ready.
     sum=  52.1801176
     time=  4.04782808
     end..
    $ exit
    


PC Cluster Consortium

$Id: pmsci.html,v 1.3 2002/03/08 06:24:16 hirose Exp $