IceT Development Concerns

VisIt releases from 1.10 onward include support for Kenneth Moreland's IceT compositor, for improved rendering performance on medium-to-large clusters.

Development Concerns

For setup and basic usage, see Using the IceT Parallel Compositor. The information below is aimed at developers; users will find nothing useful here.


IceT is designed to perform efficient composition for multi-headed display devices (i.e. 'tiled' display devices). However, the current VisIt architecture does not, in general, allow the engine to talk to display devices; instead, the images produced by each engine process are composited, and the final composited image is sent to engine process 0. Process 0 then forwards it to the viewer, which is connected to a display device (or a whole powerwall), and it is the viewer's responsibility to display the image.

A good solution might be to have all of the tiled engine processes write to their local framebuffers, and have process 0 send back an empty placeholder image to the viewer. In that case, the viewer's job would essentially be a no-op: the image is already on the display (though we might need to read it from one OpenGL context and write it into whatever context the viewer has created).

There is not currently time for this implementation. As such, IceTNetworkManager::TileLayout hard-codes a single-tile layout.

Image Format

The interface as defined by NetworkManager requires that rendering produce an avtImage. We generate this image manually from the color and depth buffers that IceT gives back.

According to its documentation, IceT may be configured to return GL_RGBA, GL_BGRA, or GL_BGRA_EXT. Parts of VisIt assume they are getting input in GL_RGB format; any renderer that makes this assumption will show odd 'slicing' artifacts. Please report any plots you see with this behavior.

Eventually we should audit the plots to make sure they do not assume a particular image format.

Post Processing

Here, by 'post processing', we mean not just the image plots, but also shadowing and depth cueing. These all work in a fundamentally similar way, even though their code paths are separate.

Current Workaround

In IceT, only tile nodes have any image data at the completion of the rendering process. This is in contrast to how our avtWholeImageCompositers work. There are two key assumptions that 'later pipeline' (image processing) filters tend to make:

  1. Every node has an image.
  2. All nodes that have an image have the entire image.

To work around these issues, we composite the image to process 0 and then broadcast it out to every node. This is highly undesirable for large images (e.g. display walls), but the current VisIt infrastructure necessitates this implementation.

Long term, we should reorganize shadows, depth cueing, and image post processing so that they do not take images as input. Furthermore, multipass rendering should rely on what is in the framebuffer rather than on a readback of the opaque rendering.