How to use IMPI Relay

This document describes, step by step, how to install and test GridMPI using IMPI Relay.

($Date: 2009/03/29 15:08:16 $)


1. Installation

1.1. Prerequisite

IMPI Relay allows GridMPI to run on clusters whose compute hosts have only non-globally reachable IP addresses. It is included in GridMPI-2.0 and later. In previous releases, every host in every cluster needed to be reachable via a global IP address [faq].

A host on which IMPI Relay runs (called the relay host) must be IP-reachable both from its internal (privately addressed) cluster and from external clusters. In other words, a relay host must have at least one private IP address and one global IP address.

IMPI Relay is supported only on Linux platforms.

For further information, please see "Overview of IMPI Relay".

1.2. Compilation and Installation

No additional procedure is required. See Installation Procedure of GridMPI.

Check that the executable binary (impi-relay) is in $MPIROOT/bin.

2. Starting a Program

2.1. Method 1

In this example, there are two clusters. One cluster (ID = 0) has one compute host with a global IP address (host1), and the other cluster (ID = 1) has one compute host with a private IP address (host2) and one relay host (hostR).

1. Create configuration files. Here, mpi_conf1 and mpi_conf2 each contain two localhost entries.

Content of mpi_conf1:

localhost
localhost

Content of mpi_conf2:

localhost
localhost
2. Run an application (a.out).

host1$ export IMPI_AUTH_NONE=0		...(1)
host1$ impi-server -server 2 &		...(2)
host1$ mpirun -client 0 addr1:port1 -np 2 -c mpi_conf1 ./a.out	...(3)

hostR$ impi-relay -relay 0 addr1:port1	...(4)

host2$ export IMPI_AUTH_NONE=0		...(1)
host2$ mpirun -client 1 addr2:port2 -np 2 -c mpi_conf2 ./a.out	...(5)

(1) Set the IMPI_AUTH_NONE environment variable. It selects the authentication method of the impi-server. The value can be anything, because it is ignored.

(2) Start the impi-server. The impi-server is a rendezvous process that establishes contact and exchanges information between MPI jobs. It must be started for each run, because it exits at the end of an execution of an MPI program. The -server argument specifies the number of MPI jobs (invocations of the mpirun command). impi-server prints its IP address/port pair (addr1:port1) to stdout.
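The addr:port pair printed by impi-server is passed verbatim to mpirun and impi-relay. If a launch script needs the address and port separately, standard shell parameter expansion splits the pair (a sketch; the pair value below is a hypothetical placeholder, not output from a real run):

```shell
# Hypothetical example value; in practice, use the pair impi-server prints to stdout.
pair="192.0.2.5:10000"
addr="${pair%%:*}"   # text before the first ':'
port="${pair##*:}"   # text after the last ':'
echo "$addr $port"
```
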

(4) Start the impi-relay. This is the only addition to a normal GridMPI execution. Like impi-server, impi-relay must be started on the relay host for each run. The -relay argument specifies the relay ID and the IP address/port pair of the impi-server. impi-relay prints its IP address/port pair (addr2:port2) to stdout.

(3,5) Start the MPI jobs with mpirun. The -client argument specifies the MPI job ID and the IP address/port pair of the impi-server or impi-relay. Job IDs run from 0 to the number of jobs minus one, and distinguish the mpirun invocations. The -c option specifies the list of nodes. This example starts an MPI program with NPROCS=4 (2+2).

2.2. Method 2

If the environment variable IMPI_RELAYADDR is set, or IMPI_RELAY_RSH is set to "#", mpirun launches impi-relay internally.

host1$ export IMPI_AUTH_NONE=0		...(1)
host1$ impi-server -server 2 &		...(2)
host1$ mpirun -client 0 addr1:port1 -np 2 -c mpi_conf1 ./a.out	...(3)

host2$ export IMPI_AUTH_NONE=0		...(1)
host2$ export IMPI_RELAY_RSH="#"	...(4)
host2$ mpirun -client 1 addr1:port1 -np 2 -c mpi_conf2 ./a.out	...(5)

(1), (2), and (3) are the same as in Method 1.

(4) Set the environment variable IMPI_RELAY_RSH to "#", which makes impi-relay launch on the host where mpirun is executed.
You can also specify the address and the port number of the relay host explicitly by setting the environment variables IMPI_RELAYADDR and IMPI_RELAYPORT.
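For example, on the host running mpirun (the address and port below are hypothetical placeholders; substitute the relay host's actual global IP address and a free port):

```shell
# Hypothetical values for illustration only.
export IMPI_RELAYADDR=192.0.2.10   # global IP address of the relay host
export IMPI_RELAYPORT=20000        # port for impi-relay to listen on
```
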

(5) Pass the impi-server's addr1:port1.

NOTE: With Method 2, only one MPI job can be executed at a time, since the IMPI Relay's port number is fixed by the environment variable IMPI_RELAYPORT. This restriction is lifted in GridMPI 2.0.x.

3. Port range

Generally in IMPI, the ports used for communication can be limited to a specified range by two environment variables: IMPI_PORT_RANGE and IMPI_SERVER_PORT_RANGE. See Environment Variables. In addition, IMPI Relay provides IMPI_RELAY_REMPORT_RANGE to specify the range of ports on which it listens for IMPI connections between clusters.


The start and end of the specified range are inclusive.
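A sketch of restricting the relay's inter-cluster listen ports (the start-end form below is an assumption; check the Environment Variables page of your GridMPI release for the exact syntax):

```shell
# Hypothetical range and separator syntax, for illustration only.
# With an inclusive range, both endpoint ports are usable.
export IMPI_RELAY_REMPORT_RANGE=10000-10100
```
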