Parallel IO strategies
The number of domains in a dataset affects how work is distributed across processors when VisIt runs in parallel. This page describes the strategies that VisIt's database reader plugins use to decompose data into domains for parallel processing.
Predefined static decomposition
Database plugins are usually either single-domain ("SD") or multiple-domain ("MD"). SD readers achieve parallelism by being assigned one domain (one reader instance) per processor. MD readers can return multiple domains, and those domains are distributed across processors. Many MD readers don't need to know anything about parallelism; they only need to know how to read one domain from a dataset that the simulation code has already split into multiple domains. For most file formats, VisIt piggybacks on the domain decomposition already present in the data: if the data contain several domains, VisIt can assign those domains to different processors when running in parallel. These readers typically do not further subdivide the domains they return to VisIt (though they could), so you won't see parallel speedup beyond the number of domains in the dataset.
Here is a list of the MD readers:
On the fly static decomposition
When data are written as one large file containing a single domain, some readers are smart enough to break the data up into several domains on the fly. This can be done either by telling VisIt that the dataset is really made up of N domains, or by telling it that the reader can do automatic domain decomposition. In the former approach, the reader divides the dataset into however many domains are needed to hit a target number of zones per domain; the resulting N arbitrarily-sized pieces are then served up to VisIt and treated as separate domains. This can be easier for regularly-sized subdomains, since the decomposition is static and you don't have to worry about how the processor count affects it. I've found that I can sometimes push many, many small domains through the pipeline where larger domains would run out of memory.
Readers that do this:
On the fly dynamic decomposition
Another way to achieve parallelism is to dynamically divide the data into chunks or slabs on the fly, based on the number of processors available. This method ensures that all processors get data: only 1 domain is reported to VisIt, but each rank reads its own portion of that domain. A plugin indicates this method by calling SetFormatCanDoDomainDecomposition(true) on the metadata returned from its PopulateDatabaseMetaData method. The reader then has to take care to create the mesh, read the variables, and handle ghosting appropriately for the number of processors involved in the read. In my opinion, this can be a little trickier to implement correctly.
Readers that do this: