# $PCCC_Release SCore Release 7.0.2.0 of SCore Cluster System Software (2011/11/14) $
# $PCCC_Copyright
# SCore Cluster System Software version 7
# Copyright (C) 2003-2011 PC Cluster Consortium
#
# This software is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License version
# 2.1 published by the Free Software Foundation.
#
# This software is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this software; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
# NOTICE:
# SCore Cluster System Software versions 1 through 4
# Copyright (c) 2001,2000,1999,1998,1997
# Real World Computing Partnership
# The SCore Cluster System Software copyright was transferred to
# the PC Cluster Consortium from Real World Computing Partnership.
# $

SETUP SCORE7-BETA2

1. Setup ssh (or rsh)

Set up the server host and compute hosts so that you can log in
remotely to each compute host from the server host. If you use rsh,
create hosts.equiv files on the compute hosts. If you use ssh, use
ssh-agent or null passphrases. Then set the SCORE_RSH environment
variable to the remote-login program.

2. Setup NFS

Export the home directories (server:/home/) to the compute hosts and
NFS-mount them on the same path on every compute host.

If you build SCore from source code, export the server:/opt/score
directory and NFS-mount it on the same path on every compute host, or
recursively copy the entire server:/opt/score directory onto every
compute host.

3. Create the machine file

List the names of the compute hosts in a file, one hostname per line.
Here is an example.
server# cat machinefile
comp00.pccluster.org
comp01.pccluster.org
comp02.pccluster.org
...
server#

4. Reboot every compute host

5. Wait for the hosts to boot up

The sceptic command is useful to check whether all the compute hosts
have booted up and are running.

server# bash
server# export SCORE_RSH=ssh
server# . /etc/profile.d/score.sh
server# sceptic -g machinefile -v
comp2.pccluster.org: OK
comp1.pccluster.org: OK
comp0.pccluster.org: OK
comp3.pccluster.org: OK
comp5.pccluster.org: OK
comp7.pccluster.org: OK
comp6.pccluster.org: OK
comp4.pccluster.org: OK

As in the example above, the "OK" message indicates that a host is up
and running.

6. Creating the SCore database

First, check that the machine file you created is correct and that the
SCore programs are installed properly.

server# bash
server# . /etc/profile.d/score.sh
server# rsh-all -q -s -P -g machinefile uptime 2> /dev/null

You should get output something like the following.

 20:37:19 up 4 days,  3:02,  0 users,  load average: 0.00, 0.00, 0.00
 20:37:19 up 4 days,  3:02,  0 users,  load average: 0.00, 0.00, 0.00
 20:37:19 up  6:30,  0 users,  load average: 0.00, 0.00, 0.00
 20:37:19 up 3 days,  4:11,  0 users,  load average: 0.00, 0.00, 0.00

If the number of lines equals the number of hosts listed in the machine
file, then the ssh/rsh setup and the machine file look OK. Then create
the SCore configuration file with the following command.

server# bash
server# rsh-all -q -s -P -g machinefile /opt/score/sbin/scbdrec 2> \
         /dev/null > /opt/score/etc/scorehosts.db

7. Reboot

Reboot the server host.

8. MPI Hello

Here is the simple but famous hello program written with MPI.

$ cat hello.c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    char name[MPI_MAX_PROCESSOR_NAME];
    int  nprocs, procno, len;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &nprocs );
    MPI_Comm_rank( MPI_COMM_WORLD, &procno );
    MPI_Get_processor_name( name, &len );
    name[len] = '\0';
    printf( "Hello !! from %s@%d/%d\n", name, procno, nprocs );
    MPI_Barrier( MPI_COMM_WORLD );
    MPI_Finalize();
    return( 0 );
}

$ mpicc hello.c
$ scrun -group=machinefile ./a.out
SCore (7.0.0) Connected
SCORE{1} 8 nodes (2x4) ready.
Hello !! from comp0.pccluster.org@0/8
Hello !! from comp0.pccluster.org@1/8
Hello !! from comp1.pccluster.org@4/8
Hello !! from comp0.pccluster.org@2/8
Hello !! from comp1.pccluster.org@5/8
... (more messages follow)

Congratulations !! Your SCore system is now successfully installed.

SCore has its own man pages, and the "scorer" command will show them to
you. If you want to know more about the scrun command, do the
following.

host$ scorer 1 scrun
scrun(1)                      SCore1                      scrun(1)

NAME
       scrun - execute (submit) an SCore parallel job

SYNOPSIS
       scrun [OPTIONS] JOB
... (more messages follow)

If you are interested in the full set of features scheduled for SCore
7, please take a look at RELEASE-NOTE.txt.

Finally, if you have any problems with the SCore installation, send us
an e-mail at score-users@pccluster.org or consult our web pages at
http://www.pccluster.org/
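Appendix: the machine file created in step 3 drives every later step
(sceptic, rsh-all, and scrun), so it can be worth sanity-checking it
before rebooting the cluster. Below is a minimal sketch of such a
check, assuming one hostname per line as described in step 3. The
function name check_machinefile is hypothetical and is not part of
SCore.

```shell
# Hypothetical helper (not part of SCore): sanity-check a machine
# file that lists one compute-host name per line, as in step 3.
check_machinefile() {
    file="$1"
    if [ ! -r "$file" ]; then
        echo "error: cannot read $file" >&2
        return 1
    fi
    # Count the non-blank lines: that is the number of hosts.
    total=$(grep -c -v '^[[:space:]]*$' "$file")
    # A hostname listed twice would make later steps misbehave,
    # so report duplicates and fail.
    dups=$(grep -v '^[[:space:]]*$' "$file" | sort | uniq -d)
    if [ -n "$dups" ]; then
        echo "duplicate hosts:"
        echo "$dups"
        return 1
    fi
    echo "machinefile OK: $total hosts"
}
```

A passing run prints "machinefile OK: N hosts", where N should equal
the number of compute hosts; that is also the line count you expect
from the rsh-all uptime check in step 6.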