Building in parallel on a Mac

Newer Mac OS X computers ship with 4 to 12 cores and several GB of memory, which makes the Mac a terrific platform for running parallel VisIt. Versions of VisIt since 2.0 have come with support for parallel. Building parallel VisIt on a Mac is straightforward as long as you are aware of which MPI installation should be used.

Mac 10.5 and 10.6

Mac OS X 10.5 and 10.6 come with OpenMPI 1.2.4 installed, so you do not need to build your own MPI.

VisIt can be built with OpenMPI 1.2.4 on Mac OS X. Other MPI implementations will no doubt work as well, but OpenMPI is the one that has been tried and verified to work.
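If you want to confirm which MPI installation your system provides, you can ask the launcher to report its version (this option works with OpenMPI, though the exact output varies by release):

mpirun --version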

Mac 10.7 and later

These versions of Mac OS X no longer include MPI, forcing the VisIt project to adopt an alternative to a system-installed MPI. Since Mac 10.7, the build_visit script has supported building MPICH 3.0.2, which provides a good MPI distribution against which VisIt can be built and installed. The MPICH tools and libraries are bundled as part of parallel VisIt so that parallel VisIt works as a self-contained parallel program. You tell the build_visit script to build MPICH using the --mpich command line argument:

build_visit [typical arguments] --mpich

Testing MPI

These instructions assume that you have used the build_visit script to build all of the required 3rd party libraries, including MPICH, and that you are ready to build VisIt. Before building VisIt, it can be helpful to verify that the MPI installation works.

MAKE SURE THAT MPI WORKS BEFORE YOU BUILD VISIT!

A common pitfall when building parallel programs is using an MPI installation that has not been verified to work. You can try any of the MPI test programs that come with your MPI installation, or you can write a simple hello world C program to test MPI, such as the one below.

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int
main(int argc, char *argv[])
{
    const char *s = "HELLO FROM THE MASTER PROCESS!";
    int par_rank, par_size;
    FILE *fp = NULL;
    char msgbuf[100], filename[100];

    /* Init MPI */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &par_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &par_size);

    msgbuf[0] = '\0';

    /* Broadcast the message from the master to all other processors.
       All processors participate in the collective call: processor 0
       supplies the data and the others receive it into msgbuf. */
    if(par_rank == 0)
        strcpy(msgbuf, s);
    MPI_Bcast(msgbuf, strlen(s)+1, MPI_CHAR, 0, MPI_COMM_WORLD);

    /* Write the message from the master to a file. */
    snprintf(filename, sizeof(filename), "%s.%04d.log", argv[0], par_rank);
    if((fp = fopen(filename, "wt")) != NULL)
    {
        fprintf(fp, "Running %s with %d processors.\n", argv[0], par_size);
        fprintf(fp, "This is the log for processor %d.\n", par_rank);
        fprintf(fp, "Message: \"%s\"\n", msgbuf);
        fclose(fp);
    }

    /* Finalize MPI */
    MPI_Finalize();

    return 0;
}

The above hello world program runs on several processors: processor 0 broadcasts a message to all of the other processors, and each processor then writes the message to its own log file. To compile the example program, you can use a command line similar to this:

gcc -o mpihello mpihello.c -D_REENTRANT -I/Users/bjw/Development/thirdparty_shared/2.9.0/mpich/3.0.4/i386-apple-darwin12_gcc-4.2/include \
-L/Users/bjw/Development/thirdparty_shared/2.9.0/mpich/3.0.4/i386-apple-darwin12_gcc-4.2/lib -lmpich

Or, better yet, you can use mpicc to compile your program:

mpicc -o mpihello mpihello.c

If the example program runs, you will have output like the following:

[dantooine:~/play/mpihello] whitlocb% mpirun -n 4 mpihello
[dantooine:~/play/mpihello] whitlocb% cat *.log
Running mpihello with 4 processors.
This is the log for processor 0.
Message: "HELLO FROM THE MASTER PROCESS!"
Running mpihello with 4 processors.
This is the log for processor 1.
Message: "HELLO FROM THE MASTER PROCESS!"
Running mpihello with 4 processors.
This is the log for processor 2.
Message: "HELLO FROM THE MASTER PROCESS!"
Running mpihello with 4 processors.
This is the log for processor 3.
Message: "HELLO FROM THE MASTER PROCESS!"

Once you are satisfied that MPI works on your system, be sure to put the directory containing mpirun in your PATH so VisIt will be able to find it later. You typically add items to your PATH by editing your shell's rc file, which is usually one of the following: ~/.cshrc, ~/.bashrc, or ~/.tcshrc.
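For example, for bash you could add a line like the following to ~/.bashrc. The MPICH location shown is the one from the compile example above; substitute the bin directory of your own installation:

export PATH=/Users/bjw/Development/thirdparty_shared/2.9.0/mpich/3.0.4/i386-apple-darwin12_gcc-4.2/bin:$PATH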

The host.cmake file

The build_visit script creates a host.cmake file, where host is replaced with the name of your computer. The host.cmake file contains variable definitions that serve as inputs to VisIt's cmake build system. We recommend setting the VISIT_MPI_COMPILER variable to the path of your mpic++ compiler; this lets the build system discover all of the flags needed for parallel compilation. Add the appropriate lines below to your host.cmake file, which you'll need to place in the src/config-site directory within VisIt's source tree.
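For example, if build_visit ran on a machine named dantooine (the hostname in the sample output above) and your VisIt source tree lives at ~/Development/visit (an example location), copying the generated file into place might look like this:

cp dantooine.cmake ~/Development/visit/src/config-site/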

Mac 10.5 and 10.6

Mac 10.5 and 10.6 builds of VisIt that rely on the system-installed OpenMPI can place the following lines in the host.cmake file so the VisIt build obtains its parallel settings from the MPI-aware C++ compiler wrapper.

##
## Add parallel arguments.
##
VISIT_OPTION_DEFAULT(VISIT_PARALLEL ON)
VISIT_OPTION_DEFAULT(VISIT_MPI_COMPILER /usr/bin/mpic++)

Mac 10.7 and later

Users on most systems at this point will need to add --mpich to the build_visit command line so an MPICH library will be built and used for the parallel VisIt build. The host.cmake file that build_visit creates will contain lines telling VisIt where to locate the MPICH library, as well as lines giving the compiler wrapper command from which the VisIt build deduces its parallel library settings. If you added --mpich to the build_visit command line, you should not need to make any changes to the host.cmake file for parallel settings.

##
## MPICH
##

# Give VisIt information so it can install MPI into the binary distribution.
VISIT_OPTION_DEFAULT(VISIT_MPICH_DIR ${VISITHOME}/mpich/3.0.4/${VISITARCH})
VISIT_OPTION_DEFAULT(VISIT_MPICH_INSTALL ON)

# Tell VisIt the parallel compiler so it can deduce parallel flags
VISIT_OPTION_DEFAULT(VISIT_MPI_COMPILER ${VISIT_MPICH_DIR}/bin/mpicc)
VISIT_OPTION_DEFAULT(VISIT_PARALLEL ON)

Building VisIt

Now that you've changed your host.cmake file, you can build parallel VisIt with the following commands:

cd src
cmake -DCMAKE_BUILD_TYPE:STRING=Release .
make -j 4
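The cmake command picks up the host.cmake file from src/config-site automatically based on your machine's hostname. If the file name does not match your hostname, you can point cmake at it explicitly; the VISIT_CONFIG_SITE variable below is our understanding of the mechanism, so verify it against your VisIt version:

cmake -DCMAKE_BUILD_TYPE:STRING=Release -DVISIT_CONFIG_SITE=config-site/host.cmake .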

Running VisIt

Now that MPI is up and running, you can launch VisIt in parallel with 4 processors using:

visit -np 4

Installing VisIt

When you run the make install or make package commands in the VisIt build directory, the build will copy the relevant MPI headers, libraries, and binary utilities such as mpirun or mpiexec into the VisIt installation so that it is self-contained and can be placed on other computers that do not have MPI.
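As a minimal sketch of an install, assuming /usr/local/visit as the destination (the prefix and the use of the standard CMAKE_INSTALL_PREFIX variable are illustrations; check how your VisIt version expects the install location to be specified):

cmake -DCMAKE_BUILD_TYPE:STRING=Release -DCMAKE_INSTALL_PREFIX:PATH=/usr/local/visit .
make -j 4
make install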