MPIRUN(1)                        USER COMMANDS                        MPIRUN(1)
NAME
mpirun - run MPI programs

SYNOPSIS
mpirun [options] [-score scoreoptions] [-np nodes] mpi_program args...

mpi_program [-nodes=nodes,scoreoptions] args...

DESCRIPTION
The mpirun command runs an MPI program under the SCore environment. It is provided for compatibility with MPICH and with other scripts that use mpirun.

The second form of the command, without mpirun, is the native SCore method of starting a program.
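For example, assuming the two forms are interchangeable as the SYNOPSIS suggests, the following invocations of the ft.A.32 program (used in the EXAMPLES section below) should be equivalent, the second being the native SCore form:

mpirun -np 32 ft.A.32

ft.A.32 -nodes=32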

OPTIONS
-np nodes
Specify the number of processes as nodes. On SMP clusters, the number of allocated hosts may be nodes divided by the number of processors per SMP node. This is identical to the -score nodes=nodes option.

-np hostsxprocs
Specify the number and distribution of processes on SMP clusters: run a total of hosts x procs processes, distributing them over hosts physical nodes with procs processes per host. This is identical to the -score nodes=hostsxprocs option.
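For example, the following runs a total of 32 processes by distributing them over 16 hosts, two processes per host (ft.A.32 is the program used in the EXAMPLES section below):

mpirun -np 16x2 ft.A.32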

SCORE OPTIONS
These arguments are specific to MPICH-SCore. They can also be set in the SCORE_OPTIONS environment variable.

-score nodes=nodes
Specify the number of processes as nodes. On SMP clusters, the number of allocated hosts may be nodes divided by the number of processors per SMP node.

-score nodes=hostsxprocs
Specify the number and distribution of processes on SMP clusters: run a total of hosts x procs processes, distributing them over hosts physical nodes with procs processes per host.
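As a sketch, these commands should be equivalent to -np 32 and -np 16x2, respectively (ft.A.32 is the program used in the EXAMPLES section below):

mpirun -score nodes=32 ft.A.32

mpirun -score nodes=16x2 ft.A.32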

SCORE OPTIONS (ch_score)
These arguments are specific to MPICH-SCore version 1.0 (ch_score). They can also be set in the SCORE_OPTIONS environment variable.

-score mpi_zerocopy=on
Use the zero-copy/one-copy transfer (remote memory operations) when passing long messages (see the mpi_eager option below). This is a rendezvous protocol which uses the remote memory access features of PM for direct transfer between user buffers. It is only available on hardware that supports this feature of PM. This option may increase bandwidth dramatically.

-score mpi_zerocopy=off
Do not use the zero-copy/one-copy transfer (remote memory operations). This is the default.

-score mpi_eager=num
Set to num bytes the message length at which MPICH-SCore switches to the long message protocol. The default is 16K bytes. Messages larger than this value are sent with the rendezvous protocol.
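For example, to use the rendezvous protocol only for messages larger than 64K bytes (an illustrative threshold, not a recommendation):

mpirun -np 32 -score mpi_eager=65536 ft.A.32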

-score mpi_globtime=on
Use the global timer. The MPI_WTIME and MPI_WTICK functions return the global time, and the MPI attribute key MPI_WTIME_IS_GLOBAL is set to true. This option is only available on hardware that supports this feature of PM.

-score mpi_globtime=off
Do not use the global timer. This is the default.
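A sketch of a run with the global timer enabled, assuming the underlying PM hardware supports it:

mpirun -np 32 -score mpi_globtime=on ft.A.32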

SCORE OPTIONS (ch_score2)
These arguments are specific to MPICH-SCore version 2.0 (ch_score2). They can also be set in the SCORE_OPTIONS environment variable.

-score mpi_rma=on
Use the zero-copy/one-copy transfer (remote memory operations) when passing long messages (see the mpi_max_eager_myrinet/mpi_max_eager_shmem options below). This is a get protocol which uses the remote memory access features of PM for direct transfer between user buffers. This option may increase bandwidth dramatically. This is the default.

-score mpi_rma=off
Do not use the zero-copy/one-copy transfer (remote memory operations).
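Since mpi_rma=on is the default, this option is chiefly useful for disabling remote memory operations, for example:

mpirun -np 32 -score mpi_rma=off ft.A.32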

-score mpi_max_eager_myrinet=num
Set to num bytes the message length at which MPICH-SCore switches to the long message protocol. This value applies to messages sent over Myrinet. Messages larger than this value are sent with the get protocol. The default is 300K bytes.

-score mpi_max_eager_shmem=num
Set to num bytes the message length at which MPICH-SCore switches to the long message protocol. This value applies to messages between processes within an SMP node. Messages larger than this value are sent with the get protocol. The default is 1.2K bytes.
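As a sketch, both thresholds can be given in one -score option using the comma-separated form shown under DEFAULT OPTIONS below (the values are illustrative only):

mpirun -np 16x2 -score mpi_max_eager_myrinet=524288,mpi_max_eager_shmem=2048 ft.A.32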

-score mpi_synccoll
Use the alternative implementation of MPI_Alltoall and MPI_Alltoallv included in the ch_score2 code. This implementation synchronizes all MPI processes at each step of the communication. Whether this option is beneficial depends on the network configuration and the message length.
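A sketch of a run with the synchronized collectives enabled (as listed above, mpi_synccoll takes no value):

mpirun -np 32 -score mpi_synccoll ft.A.32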

EXAMPLES
Run the program ft.A.32 on 32 processors.

mpirun -np 32 ft.A.32

Run it again with the mpi_zerocopy=on option:

mpirun -np 32 -score mpi_zerocopy=on ft.A.32

Now you decide that you want all subsequent runs of MPI programs to use the mpi_zerocopy=on option, so you place it in the environment by appending it to the SCORE_OPTIONS environment variable. This is very handy if you have a script which uses mpirun internally (such as a benchmarking script) and you want to add special SCore options without modifying the script to handle them.

export SCORE_OPTIONS=$SCORE_OPTIONS,mpi_zerocopy=on
mpirun -np 32 ft.A.32

Suppose you have an SMP cluster and want to try running with two processes per host. Since you already specified mpi_zerocopy=on in the environment, the program will run with two processes per host on a total of 16 hosts, with mpi_zerocopy enabled.

mpirun -np 16x2 ft.A.32

DEFAULT OPTIONS
The default options are -score mpi_zerocopy=off,mpi_eager=16384.

SEE ALSO
environ(7), scrun(1), mpicc(1), mpif77(1), mpic++(1), mpc++(1)

CREDIT
This document is a part of the SCore cluster system software developed at PC Cluster Consortium, Japan. Copyright (C) 2003 PC Cluster Consortium.