This document describes the step-by-step installation and test procedure for GridMPI/YAMPI.
($Date: 2008/02/19 09:16:10 $)
Version 2.0 is a major milestone release.
Version 1.2 is not released widely.
Version 1.1 is a minor bug fix release.
The following table lists the recommended options for the configurer. The configurer finds the default compilers in most cases, so specifying compilers is optional. Note that when specifying the --with-binmode=32/64 option, run the configure; make; make install procedure twice, calling make distclean between the two runs (a sketch of this two-pass procedure follows the table). Also, do not mix the --with-binmode=no and --with-binmode=32/64 options, because a later configure overrides the previously specified configuration.
Platform (Compiler) ... (Notes)
    Configuration

Linux/i386 (GCC) ... (1)
    ./configure
Linux/i386 (Intel) ... (1)
    CC=icc CXX=icpc F77=ifort F90=ifort ./configure
Linux/x86_64 (GCC) ... (1)
    ./configure --with-binmode=32
    ./configure --with-binmode=64
Linux/x86_64 (Intel) ... (1)(2)
    CC=icc CXX=icpc F77=ifort F90=ifort ./configure
Linux/x86_64 (Pathscale) ... (1)
    CC=pathcc CXX=pathCC F77=pathf90 F90=pathf90 ./configure --with-binmode=32
    CC=pathcc CXX=pathCC F77=pathf90 F90=pathf90 ./configure --with-binmode=64
Linux/IA64 (GCC)
    ./configure
IBM AIX/Power (IBM XL Compilers)
    ./configure --with-vendormpi=ibmmpi --with-binmode=32
    ./configure --with-vendormpi=ibmmpi --with-binmode=64
Hitachi SR11K, AIX/Power (IBM XL and Hitachi F90) ... (4)
    ./configure --with-vendormpi=ibmmpi --with-binmode=32
    ./configure --with-vendormpi=ibmmpi --with-binmode=64
Fujitsu Solaris8/SPARC64V (Fujitsu)
    CC=c99 CXX=FCC F77=frt F90=f90 ./configure --with-vendormpi=fjmpi --with-binmode=32
    CC=c99 CXX=FCC F77=frt F90=f90 ./configure --with-vendormpi=fjmpi --with-binmode=64
Solaris10/SPARC (Sun, SUN Studio11)
    ./configure --with-binmode=32
    ./configure --with-binmode=64
NEC SX/Super-UX (NEC CC) ... (5)
    ./configure --target=sx6-nec-superux14.1 --host=sx6-nec-superux14.1
MacOS X/IA32 and MacOS X/PowerPC (GCC) ... (6)
    ./configure
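As a concrete illustration, the following is a minimal sketch of the two-pass procedure for the Linux/x86_64 GCC entry of the table (the source directory is only an example):

$ cd $HOME/gridmpi-2.x
$ ./configure --with-binmode=32
$ make
$ make install
$ make distclean
$ ./configure --with-binmode=64
$ make
$ make install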
GridMPI/YAMPI is tested with RedHat 9 and Fedora Core 3 and 5 on IA32 (i386) machines with GNU GCC. It is also tested with SuSE SLES 9 on x86_64 and with RedHat Advanced Server 2 on IA64. It is also partially tested with the Intel compilers.
GridMPI/YAMPI needs the following non-standard commands to compile.
- makedepend
makedepend is in the "xorg-x11-devel" RPM package in RedHat or Fedora Core.
NOTE: The Myrinet MX support is experimental in GridMPI-2.0. It needs further tuning.
Set $MPIROOT to the installation directory, and add $MPIROOT/bin to the PATH.
Commands and libraries are installed in, and searched from, $MPIROOT/bin, $MPIROOT/include, and $MPIROOT/lib. Note that MPIROOT does NOT understand the shell's "~" notation. Add the settings in ".profile", ".cshrc", etc.
# Example assumes /opt/gridmpi as MPIROOT
(For sh/bash)
$ MPIROOT=/opt/gridmpi; export MPIROOT
$ PATH="$MPIROOT/bin:$PATH"; export PATH
(For csh/tcsh)
% setenv MPIROOT /opt/gridmpi
% set path=($MPIROOT/bin $path)
Unpack the source in an appropriate directory.
In the following, the files are expanded under the $HOME directory. The source expands into the gridmpi-2.x directory.
$ cd $HOME
$ tar zxvf gridmpi-2.x.tar.gz
The contents are:
README: README file
NOTICE: license notification
LICENSE: The Apache License
RELEASENOTE: major changes (incomplete)
checkpoint: source of checkpointing package
configure: configuration script
yampii: source of YAMPI (PC cluster MPI)
src: source of GridMPI
man: few manual pages
Simply do the following:
$ cd $HOME/gridmpi-2.x    ...(1)
$ ./configure             ...(2)
$ make                    ...(3)
$ make install            ...(4)
(1) Move to the source directory.
(2) Invoke the configurer. No options are needed for a Linux cluster setting.
Check the configuration output. Note that the configure runs twice: the first run is for GridMPI, and the second run is for YAMPI. The configurer of GridMPI calls the configurer of YAMPI inside.
The following shows typical output (run on x86_64):
Configuration (output from configure)
Configuration
  MPIROOT                    /opt/gridmpi
  --enable-debug             no
  --enable-pmpi-profiling    yes
  --with-binmode             no
  --with-binmode-default
  --enable-threads           yes
  --enable-signal-onesided   no
  --enable-mem               yes
  --with-score               no
  --with-mx                  no
  --with-openib              no
  --with-vendormpi           no
  --with-libckpt             no
  --with-libpsp              no
  --enable-dlload            yes
Configuration
  MPIROOT                    /opt/gridmpi
  --enable-debug             no
  --enable-pmpi-profiling    yes
  --with-binmode             no
  --with-binmode-default
  --enable-threads           yes
  --enable-signal-onesided   no
  --enable-mem               yes
  --with-score               no
  --with-mx                  no
  --with-openib              no
  --with-gridmpi             yes
  --with-vendormpi           no
  --with-libckpt             no
  --with-libckpt-includedir  no
  --with-libckpt-libdir      no
  --enable-dlload            yes
(3) Make.
(4) Install.
Files are installed in $MPIROOT/bin, $MPIROOT/include, and $MPIROOT/lib.
See FAQ to use a C compiler other than the default one. [FAQ]
Check the files in the installation directories.
(1) In $MPIROOT/bin,
mpicc, mpif77, mpic++, mpif90, mpirun, gridmpirun, impi-server, mpifork, nsd, canquit, detach (and some utility shell scripts)
(2) In $MPIROOT/include,
mpi.h, mpif.h, mpi-1.h, mpi-2.h, mpic++.h
(3) In $MPIROOT/lib,
libmpi.a libmpif.a
(4) Check that the commands are in the path.
$ which mpicc
$ which mpif77
$ which mpirun
Compile pi.c in the src/test/basic directory.
(1) Compile a test program.
$ cd $HOME/gridmpi-2.x/src/test/basic/
$ mpicc pi.c
See FAQ to change the default compiler. [FAQ]
(1) Create a configuration file.
Content of mpi_conf:
localhost
localhost
(2) Run an application (a.out) as a cluster MPI.
$ mpirun -np 2 ./a.out
In this case, GridMPI does not use wide-area communication; it is a cluster configuration using YAMPI.
The remote shell command can be changed. The default is ssh. Set the environment variable _YAMPI_RSH to use rsh or to pass options to the remote shell command.
(For sh/bash)
$ _YAMPI_RSH="ssh -x"; export _YAMPI_RSH
(For csh/tcsh)
% setenv _YAMPI_RSH "ssh -x"
ssh should be set up so that no password is required. Refer to the FAQ for using ssh-agent. [FAQ]
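For reference, a minimal sketch of setting up ssh-agent looks like the following (see the FAQ for the recommended procedure):

$ eval `ssh-agent`
$ ssh-add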
(1) Create configuration files. Here, there are two localhost entries in mpi_conf1, and two localhost entries in mpi_conf2.
Content of mpi_conf1:
localhost
localhost
Content of mpi_conf2:
localhost
localhost
(2) Run an application (a.out).
$ export IMPI_AUTH_NONE=0                                   ...(1)
$ impi-server -server 2 &                                   ...(2)
$ mpirun -client 0 addr:port -np 2 -c mpi_conf1 ./a.out &   ...(3)
$ mpirun -client 1 addr:port -np 2 -c mpi_conf2 ./a.out     ...(4)
(1) Set the IMPI_AUTH_NONE environment variable. It specifies the authentication method of the impi-server. The value can be anything, because it is ignored.
(2) Start the impi-server. The impi-server is a process that makes the initial contact and exchanges information between MPI processes. The impi-server must be started for each run, because it exits at the end of an execution of an MPI program. The -server argument specifies the number of MPI jobs (invocations of the mpirun command). The impi-server prints its IP address/port pair to stdout.
(3,4) Start the MPI jobs with mpirun. The -client argument specifies the MPI job ID and the IP address/port pair of the impi-server. The job ID ranges from 0 to the number of jobs minus one and distinguishes the mpirun invocations. The -c option specifies the list of nodes. This starts an MPI program with NPROCS=4 (2+2).
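For example, if the impi-server prints 10.0.0.1:20000 (an illustrative address/port pair), the two jobs would be started as:

$ impi-server -server 2 &
10.0.0.1:20000
$ mpirun -client 0 10.0.0.1:20000 -np 2 -c mpi_conf1 ./a.out &
$ mpirun -client 1 10.0.0.1:20000 -np 2 -c mpi_conf2 ./a.out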
GridMPI can use a vendor supplied MPI as an underlying communication layer as a "Vendor MPI". It is necessary to specify options to the configurer to use Vendor MPI. GridMPI supports IBM-MPI (on IBM P-Series and Hitachi SR11000) as a Vendor MPI.
GridMPI/YAMPI needs the following (non-standard) commands to compile.
- gmake (GNU make)
- makedepend
- cc_r and xlc_r
- IBM-MPI library (assumed to be in /usr/lpp/ppe.poe/lib)
GridMPI/YAMPI uses xlc_r to compile the source code. MPI applications can be compiled with cc_r, xlf_r, and Hitachi f90.
When the IBM-MPI library is not installed in the directory /usr/lpp/ppe.poe/lib, it is necessary to specify its location with MP_PREFIX (this is needed both at installation time and at run time). The MP_PREFIX environment variable is defined by IBM-MPI.
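For example, assuming IBM-MPI is installed under a non-default prefix (the directory below is purely illustrative; set it according to the actual installation and the IBM-MPI documentation):

(For sh/bash)
$ MP_PREFIX=/opt/ibmhpc; export MP_PREFIX
(For csh/tcsh)
% setenv MP_PREFIX /opt/ibmhpc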
See Installation on Linux Clusters. [jump]
The procedure slightly differs from the Linux Clusters case in specifying --with-vendormpi to the configurer and in using gmake to compile.
$ cd $HOME/gridmpi-2.x                                      ...(1)
$ ./configure --with-vendormpi=ibmmpi --with-binmode=32     ...(2)
$ gmake                                                     ...(3)
$ gmake install                                             ...(4)
$ gmake distclean
$ ./configure --with-vendormpi=ibmmpi --with-binmode=64     ...(2)
$ gmake                                                     ...(3)
$ gmake install                                             ...(4)
(1) Move to the source directory.
(2) Invoke the configurer.
The --with-vendormpi=ibmmpi option specifies the use of Vendor MPI.
The --with-binmode=32/64 option specifies the binary mode. Use --with-binmode=no to use the compiler's default mode (or when the compiler does not support options to control the mode). Use --with-binmode=32/64 to use both modes; the configure-make-install procedure must then be performed twice, once for 32bit mode and once for 64bit mode. Also specify -q32/-q64 to mpicc when compiling applications. Do not forget gmake distclean between the two runs of configure.
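For example, an application (here a hypothetical myprog.c) would be compiled as:

$ mpicc -q32 myprog.c    (for a 32bit binary)
$ mpicc -q64 myprog.c    (for a 64bit binary)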
Check the configuration output. Note that the configure runs twice: the first run is for GridMPI, and the second run is for YAMPI. The configurer of GridMPI calls the configurer of YAMPI inside.
Check that --with-vendormpi is ibmmpi.
Configuration (output from configure)
Configuration
  MPIROOT                    /opt/gridmpi
  --enable-debug             no
  --enable-pmpi-profiling    yes
  --with-binmode             32/64
  --with-binmode-default
  --enable-threads           yes
  --enable-signal-onesided   no
  --enable-mem               yes
  --with-score               no
  --with-mx                  no
  --with-openib              no
  --with-vendormpi           ibmmpi
  --with-libckpt             no
  --with-libpsp              no
  --enable-dlload            yes
Configuration
  MPIROOT                    /opt/gridmpi
  --enable-debug             no
  --enable-pmpi-profiling    yes
  --with-binmode             32/64
  --with-binmode-default
  --enable-threads           yes
  --enable-signal-onesided   no
  --enable-mem               yes
  --with-score               no
  --with-mx                  no
  --with-openib              no
  --with-gridmpi             yes
  --with-vendormpi           ibmmpi
  --with-libckpt             no
  --with-libckpt-includedir  no
  --with-libckpt-libdir      no
  --enable-dlload            yes
(3) Make with gmake.
(4) Install.
Files are installed in $MPIROOT/bin, $MPIROOT/include, and $MPIROOT/lib.
NOTE: gmake distclean is necessary to clean all the configuration state when compiling with a different configuration. Also note that it removes all Makefiles.
Check the files in the installation directories.
(1) In $MPIROOT/bin,
mpicc, mpif77, mpic++, mpif90, mpirun, gridmpirun, impi-server, mpifork, nsd, canquit, detach (and some utility shell scripts)
(2) In $MPIROOT/include,
mpi.h, mpif.h, mpi-1.h, mpi-2.h, mpic++.h
(3) In $MPIROOT/lib,
libmpi32.a    (--with-binmode=32 case)
libmpi64.a    (--with-binmode=64 case)
(4) Check that the commands are in the path.
$ which mpicc
$ which mpif77
$ which mpirun
Compile pi.c in the src/test/basic directory.
(1) Compile a test program.
$ cd $HOME/gridmpi-2.x/src/test/basic/
$ mpicc -q32 -O3 pi.c    (for 32bit binary)
$ mpicc -q64 -O3 pi.c    (for 64bit binary)
In the IBM-MPI environment, IBM POE (Parallel Operating Environment) is used to start MPI processes. By default, POE specifies nodes in a file host.list. When using POE with LoadLeveler, a batch command file llfile is also needed.
(1) Create a configuration file.
Content of host.list:
node00
node01
Content of llfile:
#@job_type=parallel
#@resources=ConsumableCpus(1)
#@queue
(2) Run an application (a.out) as a cluster MPI.
$ mpirun -np 2 ./a.out -llfile llfile
NOTE: In an environment without LoadLeveler, specifying -llfile llfile is not necessary.
mpirun calls the poe command internally to start MPI processes under IBM POE. In the process, the -c argument is translated into the -hostfile argument of the poe command.
(1) Create configuration files. Here, there are two node00 entries in host1.list, and two node01 entries in host2.list.
Content of host1.list:
node00
node00
Content of host2.list:
node01
node01
Content of llfile:
#@job_type=parallel
#@resources=ConsumableCpus(1)
#@queue
(2) Run an application (a.out).
$ export IMPI_AUTH_NONE=0
$ impi-server -server 2 &
$ mpirun -client 0 addr:port -np 2 -c host1.list ./a.out -llfile llfile &
$ mpirun -client 1 addr:port -np 2 -c host2.list ./a.out -llfile llfile
NOTE: In an environment without LoadLeveler, specifying -llfile llfile is not necessary.
See Installation on Linux Clusters for descriptions. [jump]
GridMPI supports Fujitsu MPI and Fujitsu compilers in Solaris8 (Fujitsu PrimePower Series).
GridMPI/YAMPI needs the following (non-standard) commands to compile.
- Fujitsu c99/f90
- Fujitsu MPI (Parallelnavi)
- gmake (GNU make)
- makedepend (in /usr/openwin/bin)
The configurer assumes the Fujitsu compilers are installed in directory /opt/FSUNf90, and the Fujitsu MPI in /opt/FJSVmpi2 and /opt/FSUNaprun. GridMPI/YAMPI uses Fujitsu c99 to compile the source code.
# Example assumes /opt/gridmpi as MPIROOT
(For sh/bash)
$ MPIROOT=/opt/gridmpi; export MPIROOT
$ PATH="$MPIROOT/bin:/opt/FSUNf90/bin:/opt/FSUNaprun/bin:/usr/ccs/bin:$PATH"; export PATH
$ LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/opt/FSUNf90/lib:/opt/FJSVmpi2/lib:\
/opt/FSUNaprun/lib"; export LD_LIBRARY_PATH
$ LD_LIBRARY_PATH_64="$LD_LIBRARY_PATH_64:/opt/FSUNf90/lib/sparcv9:\
/opt/FJSVmpi2/lib/sparcv9:/opt/FSUNaprun/lib/sparcv9:\
/usr/ucblib/sparcv9:/usr/lib/sparcv9"; export LD_LIBRARY_PATH_64
(For csh/tcsh)
% setenv MPIROOT /opt/gridmpi
% set path=($MPIROOT/bin /opt/FSUNf90/bin /opt/FSUNaprun/bin /usr/ccs/bin $path)
% setenv LD_LIBRARY_PATH "${LD_LIBRARY_PATH}:/opt/FSUNf90/lib:/opt/FJSVmpi2/lib:\
/opt/FSUNaprun/lib"
% setenv LD_LIBRARY_PATH_64 "${LD_LIBRARY_PATH_64}:/opt/FSUNf90/lib/sparcv9:\
/opt/FJSVmpi2/lib/sparcv9:/opt/FSUNaprun/lib/sparcv9:\
/usr/ucblib/sparcv9:/usr/lib/sparcv9"
See Installation on Linux Clusters. [jump]
The procedure slightly differs from the Linux Clusters case in specifying --with-vendormpi to the configurer and in using gmake to compile.
$ cd $HOME/gridmpi-2.x                                                                   ...(1)
$ CC=c99 CXX=FCC F77=frt F90=f90 ./configure --with-vendormpi=fjmpi --with-binmode=32    ...(2)
$ gmake                                                                                  ...(3)
$ gmake install                                                                          ...(4)
$ gmake distclean
$ CC=c99 CXX=FCC F77=frt F90=f90 ./configure --with-vendormpi=fjmpi --with-binmode=64    ...(2)
$ gmake                                                                                  ...(3)
$ gmake install                                                                          ...(4)
(1) Move to the source directory.
(2) Invoke the configurer.
The --with-vendormpi=fjmpi option specifies the use of Vendor MPI.
The --with-binmode=no/32/64 option specifies the binary mode. Use --with-binmode=no to use the compiler's default mode (or when the compiler does not support options to control the mode). Use --with-binmode=32/64 to use both modes; the configure-make-install procedure must then be performed twice, once for 32bit mode and once for 64bit mode. Also specify -q32/-q64 to mpicc when compiling applications. Do not forget gmake distclean between the two runs of configure.
Check the configuration output. Note that the configure runs twice: the first run is for GridMPI, and the second run is for YAMPI. The configurer of GridMPI calls the configurer of YAMPI inside.
Check that --with-vendormpi is fjmpi.
Configuration (output from configure)
Configuration
  MPIROOT                    /opt/gridmpi
  --enable-debug             no
  --enable-pmpi-profiling    yes
  --with-binmode             32/64
  --with-binmode-default
  --enable-threads           yes
  --enable-signal-onesided   no
  --enable-mem               yes
  --with-score               no
  --with-mx                  no
  --with-openib              no
  --with-vendormpi           fjmpi
  --with-libckpt             no
  --with-libpsp              no
  --enable-dlload            yes
Configuration
  MPIROOT                    /opt/gridmpi
  --enable-debug             no
  --enable-pmpi-profiling    yes
  --with-binmode             32/64
  --with-binmode-default
  --enable-threads           yes
  --enable-signal-onesided   no
  --enable-mem               yes
  --with-score               no
  --with-mx                  no
  --with-openib              no
  --with-gridmpi             yes
  --with-vendormpi           fjmpi
  --with-libckpt             no
  --with-libckpt-includedir  no
  --with-libckpt-libdir      no
  --enable-dlload            yes
(3) Make with gmake.
(4) Install.
Files are installed in $MPIROOT/bin, $MPIROOT/include, and $MPIROOT/lib.
NOTE: gmake distclean is necessary to clean all the configuration state when compiling with a different configuration. Also note that it removes all Makefiles.
Check the files in the installation directories.
(1) In $MPIROOT/bin,
mpicc, mpif77, mpic++, mpif90, mpirun, gridmpirun, impi-server, mpifork, nsd, canquit, detach (and some utility shell scripts)
(2) In $MPIROOT/include,
mpi.h, mpif.h, mpi-1.h, mpi-2.h, mpic++.h
(3) In $MPIROOT/lib,
libmpi32.so libmpi_frt32.a libmpi_gmpi32.so    (--with-binmode=32 case)
libmpi64.so libmpi_frt64.a libmpi_gmpi64.so    (--with-binmode=64 case)
(4) Check that the commands are in the path.
$ which mpicc
$ which mpif77
$ which mpirun
Compile pi.c in the src/test/basic directory.
(1) Compile a test program.
$ cd $HOME/gridmpi-2.x/src/test/basic/
$ mpicc -q32 -Kfast pi.c         (for 32bit binary)
$ mpicc -q64 -Kfast -KV9 pi.c    (for 64bit binary)
In the Fujitsu MPI environment, the MPI runtime /opt/FJSVmpi2/bin/mpiexec is used to start MPI processes. Options to mpirun are translated and passed to the Fujitsu MPI mpiexec: -np to -n, and -c to -nl. The contents of the host configuration file specified by -c are translated into the nodelist format of -nl, and the configuration file should be a list of host names (one host per line, comments not allowed).
(1) Run an application (a.out) as a cluster MPI.
A configuration file is not needed. Fujitsu MPI automatically configures itself for the available nodes.
$ mpirun -np 2 ./a.out
mpirun accepts the node list option -nl when configured with Fujitsu MPI. For example, if only node zero should be used in the run, pass a list of zeros to the -nl option (one more zero than the number of MPI processes is needed, because one entry is assigned to the daemon process).
$ mpirun -np 4 -nl 0,0,0,0,0
mpirun also accepts the -c option, which is intended for use with the NODELIST of PBS Pro.
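A hypothetical invocation under PBS Pro might look like the following, assuming the node list is provided in the file named by $PBS_NODEFILE:

$ mpirun -np 4 -c $PBS_NODEFILE ./a.out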
(1) Run an application (a.out).
$ export IMPI_AUTH_NONE=0
$ impi-server -server 2 &
$ mpirun -client 0 addr:port -np 2 ./a.out &
$ mpirun -client 1 addr:port -np 2 ./a.out
GridMPI/YAMPI only partially supports NEC SX with Super-UX. It does not support NEC MPI as a vendor MPI. It does not support shared memory communication, and uses sockets inside a node.
Running GridMPI/YAMPI on SX is severely restricted by resource limitations. SX allows only a very small number of socket connections; we observed a limit of six processes on a node with the default setting. The number of sockets is limited to 255 by the default parameter setting.
GridMPI/YAMPI is only lightly tested, on a single-processor SX6i with SUPER-UX 17.1 and the cross-compiling environment crosskit r171.
GridMPI/YAMPI needs the cross compiling environment (crosskit) on Linux.
SX cross compiler commands:
- sxcc, sxas, sxar, and others.
GridMPI/YAMPI also needs an optional package for IPv6 support. It includes the files netdb.h, netinet/in.h, stdint.h, and stdio.h in /SX/opt/include, and libsxnet.a in /SX/opt/lib. The package is unnamed and may not be available commercially. Please ask an NEC sales representative if configure fails with a message like the following:
Optional package (libsxnet.a) is required: /SX/opt/lib/libsxnet.a is not found.
# Example assumes /opt/gridmpi as MPIROOT
(For sh/bash)
$ MPIROOT=/opt/gridmpi; export MPIROOT
$ SX_BASE_CPLUS=/SX/opt/sxc++/CC; export SX_BASE_CPLUS
$ PATH="$MPIROOT/bin:/SX/opt/sxc++/inst/bin:/SX/opt/sxf90/inst/bin:\
/SX/opt/crosskit/inst/bin:$PATH"; export PATH
(For csh/tcsh)
% setenv MPIROOT /opt/gridmpi
% setenv SX_BASE_CPLUS /SX/opt/sxc++/CC
% set path=($MPIROOT/bin /SX/opt/sxc++/inst/bin /SX/opt/sxf90/inst/bin \
/SX/opt/crosskit/inst/bin $path)
See Installation on Linux Clusters. [jump]
The procedure slightly differs from the Linux Clusters case in specifying --target and --host for cross compiling.
$ cd $HOME/gridmpi-2.x                                                   ...(1)
$ ./configure --target=sx6-nec-superux17.1 --host=sx6-nec-superux17.1    ...(2)
$ make                                                                   ...(3)
$ make install                                                           ...(4)
(1) Move to the source directory.
(2) Invoke the configurer.
Check the configuration output. Note that the configure runs twice: the first run is for GridMPI, and the second run is for YAMPI. The configurer of GridMPI calls the configurer of YAMPI inside.
Configuration (output from configure)
Configuration
  MPIROOT                    /opt/gridmpi
  --enable-debug             no
  --enable-pmpi-profiling    no
  --with-binmode             no
  --with-binmode-default
  --enable-threads           yes
  --enable-signal-onesided   no
  --with-score               no
  --with-mx                  no
  --with-openib              no
  --with-vendormpi           no
  --with-libckpt             no
  --with-libpsp              no
  --enable-dlload            no
Configuration
  MPIROOT                    /opt/gridmpi
  --enable-debug             no
  --enable-pmpi-profiling    no
  --with-binmode             no
  --with-binmode-default
  --enable-threads           yes
  --enable-signal-onesided   no
  --with-score               no
  --with-mx                  no
  --with-openib              no
  --with-gridmpi             yes
  --with-vendormpi           no
  --with-libckpt             no
  --with-libckpt-includedir  no
  --with-libckpt-libdir      no
  --enable-dlload            no
(3) Make with make.
(4) Install.
Files are installed in $MPIROOT/bin, $MPIROOT/include, and $MPIROOT/lib.
Check the files in the installation directories.
(1) In $MPIROOT/bin,
mpicc, mpif77, mpic++, mpif90, mpirun, gridmpirun, impi-server, mpifork, nsd, canquit, detach (and some utility shell scripts)
(2) In $MPIROOT/include,
mpi.h, mpif.h, mpi-1.h, mpi-2.h, mpic++.h
(3) In $MPIROOT/lib,
libmpi.a libmpif.a
(4) Check that the commands are in the path.
$ which mpicc
$ which mpif77
$ which mpirun
Compile pi.c in the src/test/basic directory.
(1) Compile a test program.
$ cd $HOME/gridmpi-2.x/src/test/basic/
$ mpicc pi.c
The way the mpirun command starts MPI processes is the same as in the Linux Clusters case. However, there are two slightly different ways to start MPI processes.
One way is to run mpirun on SX itself. In this case, set the _YAMPI_RSH environment variable to # (a single "sharp" character), because SX does not allow a remote shell to run on itself. The sharp sign instructs mpirun to simply fork instead of using a remote shell.
$ _YAMPI_RSH='#'; export _YAMPI_RSH
The other way is to run mpirun on the Linux front-end. In this case, set the _YAMPI_RSH environment variable to rsh, because SX only allows rsh as a remote shell.
$ _YAMPI_RSH=rsh; export _YAMPI_RSH
$ _YAMPI_MPIFORK=/opt/gridmpi/bin/mpifork; export _YAMPI_MPIFORK
$ _YAMPI_MPIRUN_NOFULLPATH=1; export _YAMPI_MPIRUN_NOFULLPATH
$ _YAMPI_MPIRUN_SPREAD=255; export _YAMPI_MPIRUN_SPREAD
$ _YAMPI_MPIRUN_CHDIR=0; export _YAMPI_MPIRUN_CHDIR
$ _YAMPI_MPIRUN_RLIMIT=0; export _YAMPI_MPIRUN_RLIMIT
Note: Set _YAMPI_MPIFORK to a command runnable on the Linux front-end, and set _YAMPI_MPIRUN_NOFULLPATH=1 to use just "mpifork" on the remote nodes (that is, the one found in the path); otherwise, the same command as _YAMPI_MPIFORK is used on the remote nodes. Setting _YAMPI_MPIRUN_CHDIR=0 disables chdir, and setting _YAMPI_MPIRUN_RLIMIT=0 disables setrlimit on the remote nodes; set them as needed.
(1) Run an application (a.out) as a cluster MPI.
The configuration file is mpi_conf in the current directory.
$ mpirun -np 2 ./a.out
If SX complains of a shortage of socket buffers, set _YAMPI_SOCBUF and IMPI_SOCBUF to zero to use the system default. GridMPI/YAMPI sets the socket buffer size to 64K bytes by default, which seems fairly large compared to the Super-UX default.
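For example (for sh/bash):

$ _YAMPI_SOCBUF=0; export _YAMPI_SOCBUF
$ IMPI_SOCBUF=0; export IMPI_SOCBUF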
(1) Run an application (a.out).
$ export IMPI_AUTH_NONE=0
$ impi-server -server 2 &
$ mpirun -client 0 addr:port -np 2 ./a.out &
$ mpirun -client 1 addr:port -np 2 ./a.out
SCore is a Linux clustering package developed and distributed by the PC Cluster Consortium [http://www.pccluster.org/]. PM is the fast, abstract messaging library of SCore. GridMPI/YAMPI supports PM, which in turn supports many kinds of fast communication hardware, such as Myrinet, InfiniBand, and Ethernet (a fast non-TCP/IP protocol), and also supports one-copy shared memory communication.
Specify --with-score to the configurer to use PM on SCore.
$ ./configure --with-score
GridMPI/YAMPI also directly supports Myrinet MX, not via PM on SCore.
Specify --with-mx to the configurer to use MX.
$ ./configure --with-mx
GridMPI/YAMPI should work on Solaris 10 and later, although it is not fully tested. It can be installed and used very much like Linux clusters. The exceptions are:
GridMPI/YAMPI should work in MacOS X (Darwin 8.x), although it is not fully tested. It can be installed and used very much like Linux clusters. The exceptions are:
GridMPI/YAMPI should work in FreeBSD, although it is not fully tested. It can be installed and used very much like Linux clusters.
PGI 6.x or earlier does not support the needed features of ISO C99.
While the PGI compiler cannot be used to compile GridMPI/YAMPI, the PGI C, C++, and Fortran compilers can be used to compile applications. GridMPI/YAMPI includes variations of the Fortran symbols, so it can be linked with code compiled by the PGI Fortran compiler even when GridMPI/YAMPI itself is compiled with GCC. This does not require reconfiguration or recompilation of GridMPI/YAMPI.
However, some setup is needed to tell the compiler driver mpicc about the Fortran compiler and its options. The environment variables _YAMPI_F77, _YAMPI_EXTFOPT, and others can be used to override the default settings.
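For example, to use the PGI Fortran compiler as the Fortran driver, a setting like the following may suffice (the compiler name is illustrative; see the FAQ for the exact variables and options):

$ _YAMPI_F77=pgf90; export _YAMPI_F77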
In running a single-cluster MPI, processes are started by mpirun (without the -client option). In this case it is just a cluster MPI (YAMPI), and the YAMPI protocol is used for communication.
In running a multiple-cluster MPI, processes are started by mpirun -client n addr:port for each MPI job. The multiply invoked MPI jobs join by connecting to an impi-server process.
The impi-server is a process that exchanges information about the processes (e.g., the IP address/port pairs of the processes) from multiple MPI invocations. After exchanging the information it does nothing until the processes join in MPI_Finalize.
In a multiple-cluster MPI, the YAMPI protocol is used for intra-cluster communication, and the IMPI (Interoperable MPI) protocol is used for inter-cluster communication. The multiply started MPI processes receive their ranks according to the client number: the lowest ranks are assigned to the processes started with mpirun -client 0, the next lowest to the processes started with mpirun -client 1, and so on.
                    IMPI Protocol
      +---------+===================+---------+
      |         |                   |         |
+-----|---------|-----+       +-----|---------|-----+   +--------+
| +-------+ +-------+ |       | +-------+ +-------+ |   |  impi- |
| | rank0 | | rank1 | |       | | rank2 | | rank3 | |   | server |
| +-------+ +-------+ |       | +-------+ +-------+ |   +--------+
|     |         |     |       |     |         |     |
|     +=========+     |       |     +=========+     |
|   YAMPI Protocol    |       |   YAMPI Protocol    |
+---------------------+       +---------------------+
  mpirun -client 0              mpirun -client 1