The dashboard is organized as a table with a row for each submitting system. The columns show various statistics along with their deltas, and each table entry can be clicked to access more detailed information. Submissions are grouped into three tracks, Nightly, Continuous, and Experimental, and we use the trusted label to organize submissions further. Nightly submissions run once per day using a dated checkout (i.e. MM/DD/YYYY HH:MM:SS), so all submitters grab the same revision no matter when the nightly submission is executed. Nightly runs are not incremental; each is a completely fresh build. Continuous submissions probe the repository periodically for updates, grab them, build incrementally, and run the regression suite, giving developers quick feedback about their changes.
CTest support is integrated into VisIt's build system and may be enabled by setting BUILD_TESTING=ON in your build's CMakeCache.txt. After building with this option enabled, invoking the ctest command from the build directory will run the tests and generate a report. Such a run can even be uploaded to the dashboard by specifying a track, for example ctest -D Experimental. Nightly and continuous submissions are best automated using a CMake script. Scripts and instructions are located here.
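As an illustration of what such an automation script looks like, the following is a minimal sketch of a CTest dashboard script; the site name, build label, and directory paths are placeholders, not VisIt's actual configuration, so adjust them for your machine:

```cmake
# visit_nightly.ctest -- minimal sketch of a CTest dashboard client script.
# Site/build names and paths below are hypothetical placeholders.
set(CTEST_SITE             "my-workstation")        # placeholder machine name
set(CTEST_BUILD_NAME       "Linux-gcc-Release")     # placeholder build label
set(CTEST_SOURCE_DIRECTORY "/path/to/visit/src")    # placeholder checkout path
set(CTEST_BINARY_DIRECTORY "/path/to/visit/build")  # placeholder build tree
set(CTEST_CMAKE_GENERATOR  "Unix Makefiles")

ctest_start(Nightly)                          # or Continuous / Experimental
ctest_update()                                # fetch the (dated) checkout
ctest_configure(OPTIONS "-DBUILD_TESTING=ON") # enable the test suite
ctest_build()
ctest_test()
ctest_submit()                                # upload results to the dashboard
```

Such a script is run with `ctest -S visit_nightly.ctest`, typically from a cron job for Nightly submissions or a loop for Continuous ones.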
As developers make commits, watching the response on the dashboard provides a level of confidence that a commit hasn't broken key features or the build process on other platforms. Results should show up in the Continuous track anywhere within a few minutes to an hour of the commit; Nightly results will show up by the next morning.
The figure above shows a snapshot of the continuous section of the dashboard. We'll use the highlighted entry to point out some of the useful features; these apply to the Nightly and Experimental tracks as well. For example, we can see that 12 files changed, resulting in 70 new test failures. The new failures are indicated by the +70 superscript in the Fail column and the -70 subscript in the Pass column. Had 70 tests been fixed instead, this would be reversed.
Viewing the new failures
The first thing we might want to do is look at the new failures. The test report contains image diffs and stderr output, either of which may be enough information for us to amend the commit. If we click on the 80 in the Test Fail column, we will see all the failures, not just the new ones. To isolate the new failures, we click on the +70, which brings up a list of just the tests that failed as a result of our commit. From there we can click on a test's Failed status to see the test output, stderr, and an image difference.
Viewing the source changes
We may also be interested in which files changed and their diffs. These can be conveniently accessed from the dashboard through its WebSVN integration. For example, clicking on the 12 in the Update Files column brings up a page listing the modified files, and clicking on an individual file brings up a diff of the change.