VisIt tutorial: Advanced movie making


Making movies with VisIt runs the gamut from creating a simple movie that shows the time evolution of a simulation to movies built from multiple image sequences, which may include:

  1. Titles
  2. Fade-ins
  3. Image sequences that involve moving the camera around or through the data.
  4. Image sequences where each image contains multiple components, such as a 3D view of the data and a curve showing the time evolution of a value.
  5. Image sequences where operator attributes are modified such as animating a slice plane moving through a data set.

This tutorial will focus on advanced movie making using scripting and will incorporate elements 1, 2, and 4 from the above list. When you are finished with this tutorial you will have created the following animation.

Play the animation

Image:Visit_tutorial_composited_image.png


Overview of the elements of the movie

The elements of the title slide are identified with red text.

Image:Visit_tutorial_title_slide_elements.png

The elements of the time animation are identified with red text.

Image:Visit_tutorial_time_animation_elements.png

Creating the movie

This section of the tutorial takes you through the steps to create the movie: from developing a workflow, to step-by-step instructions for creating the individual images, to encoding those images into a movie.

Developing a workflow for creating a movie

When creating a complex movie it is important to develop a systematic approach so that you can easily recreate the movie if you later want to change some aspect of it. In this tutorial we will create the raw images of the data changing over time first, and then add elements such as annotations and titles later, so that those aspects of the movie can be changed without regenerating the raw frames, which are typically the most compute intensive to create. Here are the specific steps we will use to create our movie; a small driver sketch that runs the corresponding scripts in order appears right after the list.

  1. Create the time animation images of the blobs moving through space.
  2. Create curve files of the volume and surface area of the blobs over time.
  3. Create curve images over time. There will be a volume curve image and surface area curve image for each image in the time animation.
  4. Composite the time animation images, the curve images, and annotations such as a color bar, a progress bar, and text with the simulation time.
  5. Create a title slide with titles and logos.
  6. Create a final sequence of images that starts with the title slide and fades into the time animation of composited images.
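
Here is a small driver sketch that runs the scripts for these steps in order. It is not part of the tutorial download; it simply wraps the individual visit commands shown in the sections that follow, and it assumes it is run from the top-level tutorial directory with visit in your PATH.

import subprocess

# Each entry is one workflow step: the script to run under VisIt's Python
# CLI, plus any arguments the script expects.
steps = [
    ["scripts/render_time_animation.py"],             # 1. time animation images
    ["scripts/visit_ds_times.py",
     "data/blobs.visit", "output/blobs_times.json"],  # 2. times JSON file
    ["scripts/create_time_curves.py"],                # 2. volume and surface curve files
    ["scripts/render_curves.py"],                     # 3. curve images
    ["scripts/composite.py"],                         # 4. composited, annotated images
    ["scripts/title_slide.py"],                       # 5. title slide
    ["scripts/create_final_movie.py"],                # 6. final sequence and encoding
]

for step in steps:
    # Run VisIt's command line interface without a window on one script.
    subprocess.check_call(["visit", "-cli", "-nowin", "-s"] + step)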

The data and scripts used to create the movie can be downloaded from the following location.

If you download and extract the data associated with this tutorial it will be organized into several directories.

  • data
  • images
  • scripts

The data directory contains the simulation data, the images directory contains images that will be used as annotations, and the scripts directory contains the collection of scripts used to create the movie. The download also contains a README file that describes how to run the scripts.

It is recommended that you follow a similar organization when creating your own movies.

Creating the time animation of the blobs moving through space

There are a couple of ways to create the time animation with VisIt. The first method is to set up the time animation interactively in VisIt and save the session. Then you can create a script that loads the session and loops over the time states, saving an image for each time state. The second method is to turn on command recording and create the animation interactively; you can then turn the recorded commands into a script. If you need more information on using these techniques, you can review the content in the movie making and scripting tutorials.
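
For reference, here is a minimal sketch of the first method. It assumes you have already saved a session file, named blobs.session here for illustration, that sets up the plots, annotations, and view, and that the output directory already exists.

# Sketch of the session-based approach. The session file name and output
# path are placeholders for this example.
RestoreSession("blobs.session", 0)   # 0: the path is not relative to ~/.visit

saveAtts = SaveWindowAttributes()
saveAtts.family = 0
saveAtts.format = saveAtts.PNG

# Loop over the time states, saving one image per state.
for state in range(TimeSliderGetNStates()):
    TimeSliderSetState(state)
    saveAtts.fileName = "output/session_blobs%04d.png" % state
    SetSaveWindowAttributes(saveAtts)
    SaveWindow()

quit()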

We are going to use the second method to create the time animation. The script starts off by opening the database and creating a Pseudocolor plot with an Isovolume operator applied to extract the exterior surface of the blobs. Next, it turns off the annotations and sets the view. It is highly recommended that you turn off the annotations in the raw frames so that annotations can be added or changed later without repeating the compute intensive step of creating the time animation images. Lastly, the script loops over the time states, saving an image for each time state, and then encodes a movie of the resulting images so you can easily preview the time animation.

Here is the script.

import sys
import os
 
from os.path import join as pjoin
 
#
# Open the database and create the plots.
#
OpenDatabase("data/blobs.visit", 0)
AddPlot("Pseudocolor", "Density", 1, 1)
AddOperator("Isovolume", 1)
SetActivePlots(0)
isovolumeAtts = IsovolumeAttributes()
isovolumeAtts.lbound = 2.5
isovolumeAtts.ubound = 1e+37
isovolumeAtts.variable = "Pressure"
SetOperatorOptions(isovolumeAtts, 1)
DrawPlots()
 
annotationAtts = AnnotationAttributes()
annotationAtts.axes2D.visible = 0
annotationAtts.axes3D.visible = 0
annotationAtts.axes3D.triadFlag = 0
annotationAtts.axes3D.bboxFlag = 0
annotationAtts.userInfoFlag = 0
annotationAtts.databaseInfoFlag = 0
annotationAtts.timeInfoFlag = 1
annotationAtts.legendInfoFlag = 0
annotationAtts.backgroundColor = (0, 0, 0, 255)
annotationAtts.foregroundColor = (255, 255, 255, 255)
annotationAtts.backgroundMode = annotationAtts.Solid
annotationAtts.axesArray.visible = 1
SetAnnotationAttributes(annotationAtts)
 
view3DAtts = View3DAttributes()
view3DAtts.viewNormal = (-0.881431, 0.457422, 0.117665)
view3DAtts.focus = (0, 0, 80)
view3DAtts.viewUp = (0.47231, 0.854655, 0.215614)
view3DAtts.viewAngle = 30
view3DAtts.parallelScale = 84.8528
view3DAtts.nearPlane = -169.706
view3DAtts.farPlane = 169.706
view3DAtts.imagePan = (0.0502111, 0.170633)
view3DAtts.imageZoom = 1.13693
view3DAtts.perspective = 1
view3DAtts.eyeAngle = 2
view3DAtts.centerOfRotationSet = 0
view3DAtts.centerOfRotation = (0, 0, 80)
view3DAtts.axis3DScaleFlag = 0
view3DAtts.axis3DScales = (1, 1, 1)
view3DAtts.shear = (0, 0, 1)
view3DAtts.windowValid = 1
SetView3D(view3DAtts)
 
#
# Set the basic save options.
#
saveAtts = SaveWindowAttributes()
saveAtts.family = 0
saveAtts.format = saveAtts.PNG
saveAtts.resConstraint = saveAtts.NoConstraint
saveAtts.width = 1280
saveAtts.height = 720
 
#
# Create the output directory structure.
#
outputDir = "output"
timeAnimationDir = pjoin(outputDir, "time_animation")
outputBase = pjoin(timeAnimationDir, "blobs%04d.png")
 
if not os.path.isdir(outputDir):
    os.mkdir(outputDir)
if not os.path.isdir(timeAnimationDir):
    os.mkdir(timeAnimationDir)
 
#
# Loop over the time states saving an image for each state.
#
nTimeSteps = TimeSliderGetNStates()
 
for timeStep in range(0, nTimeSteps):
    TimeSliderSetState(timeStep)
    saveAtts.fileName = outputBase % timeStep
    SetSaveWindowAttributes(saveAtts)
    SaveWindow()
 
#
# Encode a movie of the raw images
#
from visit_utils import *
 
movieName = pjoin(outputDir, "blobs_raw.mpg")
encoding.encode(outputBase, movieName, fdup=2)
 
quit()

To run the script, type the following command in a command window.

visit -cli -nowin -s scripts/render_time_animation.py

This will create the raw images of the time animation. They will look something like this.

Image:Visit_tutorial_raw_image.png

Creating curve files of the volume and surface area of the blobs over time

The first step is to create a JSON file with the simulation time associated with each time state in the database. The script first calls the get_times function to get the simulation times from the database metadata. It then calls the create_json_file function to write out the times as a JSON file.

Here is the script.

import json
import math
 
#
# Get the times from the database metadata.
#
def get_times(database_name):
    meta_data = GetMetaData(database_name)
    times    = meta_data.GetTimes()
    return times
 
#
# Create a json file of the times in a database.
#
def create_json_file(database_name, output_name, scale=1, shift=0):
    times = get_times(database_name)
    times2 = [t / scale + shift for t in times]

    if output_name is not None:
        json.dump(times2, open(output_name, "w"), indent=2)
 
#
# Process the command line options.
#
output_name = None
scale = 1.
shift = 0.
database_name = Argv()[0]
if len(Argv()) > 1:
    output_name = Argv()[1]
if len(Argv()) > 2:
    scale = float(Argv()[2])
if len(Argv()) > 3:
    shift = float(Argv()[3])
 
#
# Do it.
#
create_json_file(database_name, output_name, scale, shift)
 
quit()

To run the script, type the following command.

visit -cli -nowin -s scripts/visit_ds_times.py data/blobs.visit output/blobs_times.json

The next step is to create the curve files that contain points for the volume over time and the surface area over time. First, the script opens the database and creates the same Pseudocolor plot with an Isovolume operator applied that it created for the time animation. Next, it does a query over time of volume and saves the results in a curve file. Finally, the script does the same thing with a 3D surface area query over time.

Here is the script.

import sys
import os
 
from os.path import join as pjoin
 
#
# Create the output directory structure.
#
outputDir = "output"
volumeName = pjoin(outputDir, "volume")
surfaceName = pjoin(outputDir, "surface")
 
#
# Open the database and create the plots.
#
OpenDatabase("data/blobs.visit", 0)
AddPlot("Pseudocolor", "Density", 1, 1)
AddOperator("Isovolume", 1)
SetActivePlots(0)
isovolumeAtts = IsovolumeAttributes()
isovolumeAtts.lbound = 2.5
isovolumeAtts.ubound = 1e+37
isovolumeAtts.variable = "Pressure"
SetOperatorOptions(isovolumeAtts, 1)
DrawPlots()
 
#
# Do the volume query over time.
#
SetActiveWindow(1)
SetQueryFloatFormat("%g")
QueryOverTime("Volume", end_time=199, start_time=0, stride=1)
 
#
# Save the curve.
#
SetActiveWindow(2)
saveAtts = SaveWindowAttributes()
saveAtts.outputToCurrentDirectory = 1
saveAtts.outputDirectory = "."
saveAtts.fileName = volumeName
saveAtts.family = 0
saveAtts.format = saveAtts.CURVE
saveAtts.width = 1024
saveAtts.height = 1024
saveAtts.screenCapture = 0
saveAtts.saveTiled = 0
saveAtts.quality = 80
saveAtts.progressive = 0
saveAtts.binary = 0
saveAtts.stereo = 0
saveAtts.compression = saveAtts.PackBits
saveAtts.forceMerge = 0
saveAtts.resConstraint = saveAtts.NoConstraint
saveAtts.advancedMultiWindowSave = 0
SetSaveWindowAttributes(saveAtts)
SaveWindow()
DeleteActivePlots()
 
#
# Do the surface area over time query.
#
SetActiveWindow(1)
QueryOverTime("3D surface area", end_time=199, start_time=0, stride=1)
 
#
# Save the curve.
#
SetActiveWindow(2)
saveAtts = SaveWindowAttributes()
saveAtts.outputToCurrentDirectory = 1
saveAtts.outputDirectory = "."
saveAtts.fileName = surfaceName
saveAtts.family = 0
saveAtts.format = saveAtts.CURVE
saveAtts.width = 1024
saveAtts.height = 1024
saveAtts.screenCapture = 0
saveAtts.saveTiled = 0
saveAtts.quality = 80
saveAtts.progressive = 0
saveAtts.binary = 0
saveAtts.stereo = 0
saveAtts.compression = saveAtts.PackBits
saveAtts.forceMerge = 0
saveAtts.resConstraint = saveAtts.NoConstraint
saveAtts.advancedMultiWindowSave = 0
SetSaveWindowAttributes(saveAtts)
SaveWindow()
DeleteActivePlots()
 
quit()

To run the script, type the following command.

visit -cli -nowin -s scripts/create_time_curves.py

The resulting volume.curve and surface.curve files can be opened and plotted directly in VisIt, for example with the short sketch below.
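
Here is a minimal sketch for inspecting one of the curve files in the VisIt CLI. The curve's variable name comes from the header line inside the .curve file; "Volume" is assumed here, so check the file in a text editor if the plot does not appear.

# Open a saved curve file and plot it. The variable name "Volume" is an
# assumption; it is taken from the "#" header line in the .curve file.
OpenDatabase("output/volume.curve")
AddPlot("Curve", "Volume")
DrawPlots()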

Creating the curve images over time

The curve plots are created by the render_volume_plot and render_surface_plot functions in the script, which use the qplot module within the visit_utils module. A PropertyTree describes all the options for a plot. The master PropertyTree defines the properties of the image and the coordinate axes, and it contains a list of PropertyTrees describing the plots in the image. In this case there are three plots: the curve of volume as a function of time, plus a line and a dot highlighting the point associated with the current time. The master PropertyTree also contains an empty list of PropertyTrees that would describe annotation properties.

Here is the script.

from visit_utils import *
from visit_utils.qplot import *
 
from os.path import join as pjoin
 
import json
import os
 
output_dir = "output"
 
#
# Create the images of the volume over time.
#
def render_volume_plot(output_base, time_step, time):
    curve_name = pjoin(output_dir, "volume.curve")
    n_curves  = 1
    curves = [PropertyTree() for i in range(n_curves)]
    curves[0].file = curve_name
    curves[0].index = 0
 
    p = PropertyTree()
    p.size = (600, 250)
    p.view = (0., 200., 10000., 60000.)
    p.axes.x_ticks = 5
    p.axes.y_ticks = 0
    p.bg_color = (0, 0, 0, 255)
    p.axis.tick_width = 2
    p.axis.tick_length = .5
    p.right_margin = 8
    p.left_margin = 10
    p.bottom_margin = 35
    p.top_margin = 6
    p.labels.titles_font.name = "Times New Roman"
    p.labels.titles_font.size = 18
    p.labels.titles_font.bold = True
    p.labels.labels_font.name = "Times New Roman"
    p.labels.labels_font.size = 15
    p.labels.labels_font.bold = True
    p.labels.x_title = "Time  (\\0x03bcs)"
    p.labels.y_title = "Volume"
    p.labels.x_title_offset = 27
    p.labels.y_title_offset = 5
    p.labels.x_labels_offset = 0
    p.labels.y_labels_offset = 2
    p.labels.x_labels = p.axes.x_ticks
    p.labels.y_labels = p.axes.y_ticks
    p.plots = [PropertyTree() for i in range(n_curves)]
    p.plots[0].type = "line"
    p.plots[0].curve = curves[0]
    p.plots[0].color = (0, 185, 50, 255)
    # check for time step in the tracer range
    if time >= p.view[0] and time <= p.view[1]:
        tracer_dot = PropertyTree()
        tracer_dot.curve = curves[0]
        tracer_dot.type = "tracer_dot"
        tracer_dot.color = (0, 185, 50, 255)
        tracer_dot.point_size = 12
        tracer_dot.tracer_x = time
        tracer_line = PropertyTree()
        tracer_line.curve = curves[0]
        tracer_line.type = "tracer_line"
        tracer_line.color = (0, 185, 50, 255)
        tracer_line.tracer_x = time
        p.plots.append(tracer_dot)
        p.plots.append(tracer_line)
 
    p.annotations = []  # no extra annotations
    scene = CurveScene(p)
    scene.render(output_base % time_step)
    return (output_base % time_step)
 
#
# Create the images of the surface area over time.
#
def render_surface_plot(output_base, time_step, time):
    curve_name = pjoin(output_dir, "surface.curve")
    n_curves  = 1
    curves = [PropertyTree() for i in range(n_curves)]
    curves[0].file = curve_name
    curves[0].index = 0
 
    p = PropertyTree()
    p.size = (600, 250)
    p.view = (0., 200., 5000., 12000.)
    p.axes.x_ticks = 5
    p.axes.y_ticks = 0
    p.bg_color = (0, 0, 0, 255)
    p.axis.tick_width = 2
    p.axis.tick_length = .5
    p.right_margin = 8
    p.left_margin = 10
    p.bottom_margin = 35
    p.top_margin = 6
    p.labels.titles_font.name = "Times New Roman"
    p.labels.titles_font.size = 18
    p.labels.titles_font.bold = True
    p.labels.labels_font.name = "Times New Roman"
    p.labels.labels_font.size = 15
    p.labels.labels_font.bold = True
    p.labels.x_title = "Time  (\\0x03bcs)"
    p.labels.y_title = "Surface Area"
    p.labels.x_title_offset = 27
    p.labels.y_title_offset = 5
    p.labels.x_labels_offset = 0
    p.labels.y_labels_offset = 2
    p.labels.x_labels = p.axes.x_ticks
    p.labels.y_labels = p.axes.y_ticks
    p.plots = [PropertyTree() for i in range(n_curves)]
    p.plots[0].type = "line"
    p.plots[0].curve = curves[0]
    p.plots[0].color = (180, 0, 50, 255)
    # check for time step in the tracer range
    if time >= p.view[0] and time <= p.view[1]:
        tracer_dot = PropertyTree()
        tracer_dot.curve = curves[0]
        tracer_dot.type = "tracer_dot"
        tracer_dot.color = (180, 0, 50, 255)
        tracer_dot.point_size = 12
        tracer_dot.tracer_x = time
        tracer_line = PropertyTree()
        tracer_line.curve = curves[0]
        tracer_line.type = "tracer_line"
        tracer_line.color = (180, 0, 50, 255)
        tracer_line.tracer_x = time
        p.plots.append(tracer_dot)
        p.plots.append(tracer_line)
 
    p.annotations = []  # no extra annotations
    scene = CurveScene(p)
    scene.render(output_base % time_step)
    return (output_base % time_step)
 
blobs_times_file = pjoin(output_dir, "blobs_times.json")
times = json.load(open(blobs_times_file))
 
#
# Create the volume curve images.
#
volume_dir = pjoin(output_dir, "volume_curve")
volume_base = pjoin(volume_dir, "volume%04d.png")
if not os.path.isdir(volume_dir):
    os.mkdir(volume_dir)
 
for time_step in range (0, len(times)):
    time = times[time_step]
    render_volume_plot(volume_base, time_step, time)
 
#
# Create the surface curve images.
#
surface_dir = pjoin(output_dir, "surface_curve")
surface_base = pjoin(surface_dir, "surface%04d.png")
if not os.path.isdir(surface_dir):
    os.mkdir(surface_dir)
 
for time_step in range (0, len(times)):
    time = times[time_step]
    render_surface_plot(surface_base, time_step, time)
 
quit()

To run the script, type the following command.

visit -cli -nowin -s scripts/render_curves.py

This will create the curve images of volume over time and surface area over time. They will look like the following images.

Image:Visit_tutorial_volume_curve.png

Image:Visit_tutorial_surface_curve.png

Compositing the raw images, the curve images and annotations

The compositing is done with a data flow network that composites the images and adds the annotations. A data flow network consists of filters, each with zero or more inputs and zero or more outputs. All filter inputs and outputs are file names: a filter reads the files named by its inputs, performs some operation on the data read from those files, writes new files, and places the names of the new files on its outputs. Filters, filter inputs, and filter outputs all have names, which are used to connect the outputs of one filter to the inputs of another. There are three types of filters:

  • Source filters that have an output but no inputs.
  • Regular filters that have one or more inputs and a single output.
  • Sink filters that have inputs but no outputs.

Here is a diagram of the inputs and outputs for the over filter, which is used extensively in the script.

Image:Visit_tutorial_over_filter.png

The over filter reads the image named by its over input, overlays it on the image named by its under input, and writes the result to a file whose name is placed on its output, which is named output.
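
To see the over filter in isolation, here is a stripped-down sketch that places the first raw time animation image over a black background and does nothing else. It uses the same workspace, filter, and connection calls as the full compositing script below; the over_only output directory name is made up for this example.

import os
from os.path import join as pjoin

from visit_flow.core import *
from visit_flow.filters import file_ops, imagick

# Create the directories used by this sketch.
over_dir = pjoin("output", "over_only")
tmp_dir = pjoin("output", "_tmp")
for d in (over_dir, tmp_dir):
    if not os.path.isdir(d):
        os.mkdir(d)

# Set up the workspace and register the filter libraries.
w = Workspace()
w.register_filters(file_ops)
w.register_filters(imagick)
ctx = w.add_context("imagick", "root")
ctx.set_working_dir(tmp_dir)

blobs_base = pjoin(os.path.abspath("."), "output", "time_animation", "blobs%04d.png")
output_base = pjoin(os.path.abspath("."), over_dir, "blobs_over%04d.png")

# background -> blobs_over:under, blobs_file -> blobs_over:over, then rename.
ctx.add_filter("fill", "background", {"width": 1280, "height": 720, "color": "black"})
ctx.add_filter("file_name", "blobs_file", {"pattern": blobs_base})
ctx.add_filter("over", "blobs_over")
ctx.add_filter("file_rename", "rename", {"pattern": output_base})

w.connect("background", "blobs_over:under")
w.connect("blobs_file", "blobs_over:over")
w.connect("blobs_over", "rename:in")

# Process just the first time state; as in the full script, the state
# vector variable must be named svec.
svec = StateVectorGenerator(StateSpace({"index": 1}))
w.execute(svec)

quit()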

The compositing and annotating are done with two scripts. The first one creates a data flow network that composites the images together and the second script is used by one of the filters in the data flow network to create the annotations for the movie sequence.

Here is a diagram of the data flow network that does the compositing and annotating.

Image:Visit_tutorial_composite_data_flow.png

The filters perform the following functions.

  • background - Create a black background.
  • blobs_file - Output the name of the blobs time animation image file.
  • blobs_over - Overlay the blobs image on the background.
  • black_bar - Create a black bar.
  • black_bar_over - Overlay the black bar on the composite image.
  • volume_file - Output the name of the volume curve image file.
  • volume_over - Overlay the volume curve image on the composite image.
  • surface_file - Output the name of the surface curve image file.
  • surface_over - Overlay the surface curve image on the composite image.
  • annotate - Add annotations to the composite image.
  • rename - Rename the composite image.

Now we will take a look at the script that creates the data flow network. The top third of the script defines some constants and sets up a directory hierarchy for storing intermediate and final results. The middle third defines each of the filters in the data flow network. The last third connects the filters into a network and executes it. One item to note is that the execute function is called with a state vector that ranges over the 200 time states of the simulation (indices 0 through 199).

Here is the script.

from visit_flow.core import *
from visit_flow.filters import file_ops, imagick, cmd
 
import sys
import os
import subprocess
 
from os.path import join as pjoin
 
from visit_utils import *
 
def composite():
    w = Workspace()
    w.register_filters(file_ops)
    w.register_filters(imagick)
    w.register_filters(cmd)
    ctx = w.add_context("imagick", "root")
 
    output_width = 1280
    output_height = 720
    plots_width = 600
    plots_height = 250
    n_time_states = 200
 
    #
    # Set up some path information.
    #
    output_dir = "output"
 
    blobs_base    = pjoin(os.path.abspath("."), output_dir, "time_animation", "blobs%04d.png")
    volume_base  = pjoin(os.path.abspath("."), output_dir, "volume_curve", "volume%04d.png")
    surface_base = pjoin(os.path.abspath("."), output_dir, "surface_curve", "surface%04d.png")
 
    #
    # Create the output directory structure.
    #
    comp_dir    = pjoin(output_dir, "composite_animation")
    comp_tmp_dir = pjoin(output_dir, "_tmp")
 
    if not os.path.isdir(comp_dir):
        os.mkdir(comp_dir)
    if not os.path.isdir(comp_tmp_dir):
        os.mkdir(comp_tmp_dir)
    ctx.set_working_dir(comp_tmp_dir)
 
    output_base = pjoin(comp_dir, "blobs.%dx%d.%s.png" % (output_width, output_height, "%04d"))
 
    #
    # Create the data flow network. Note that we must use the name "svec"
    # below for things to work properly.
    #
    svec = StateVectorGenerator(StateSpace({"index": n_time_states}))
 
    ctx.add_filter("fill", "background", {"width": output_width, "height": output_height, "color": "black"})
    ctx.add_filter("fill", "black_bar", {"width": output_width, "height": plots_height, "color": "black"})
 
    ctx.add_filter("over", "blobs_over")
 
    ctx.add_filter("over", "black_bar_over", {"x": 0, "y": 460})
    ctx.add_filter("over", "volume_over", {"x": 20, "y": 460})
    ctx.add_filter("over", "surface_over", {"x": 660, "y": 460})
 
    ctx.add_filter("file_name", "blobs_file", {"pattern": blobs_base})
    ctx.add_filter("file_name", "volume_file", {"pattern": volume_base})
    ctx.add_filter("file_name", "surface_file", {"pattern": surface_base})
 
    python_command = "visit -ni -nowin -cli -s"
    annotate_script = pjoin(os.path.abspath("."), "scripts", "annotate.py")
    ctx.add_filter("cmd", "annotate",
        {"cmd": "%s %s {index}" % (python_command, annotate_script),
         "obase": pjoin(comp_tmp_dir, "annotate_comp_%s.png")})
 
    ctx.add_filter("file_rename", "rename", {"pattern": output_base})
 
    w.connect("background", "blobs_over:under")
    w.connect("blobs_file", "blobs_over:over")
 
    w.connect("blobs_over", "black_bar_over:under")
    w.connect("black_bar", "black_bar_over:over")
 
    w.connect("black_bar_over", "volume_over:under")
    w.connect("volume_file", "volume_over:over")
 
    w.connect("volume_over", "surface_over:under")
    w.connect("surface_file", "surface_over:over")
 
    w.connect("surface_over", "annotate:in")
 
    w.connect("annotate", "rename:in")
 
    #
    # Execute the data flow network.
    #
    w.execute(svec)
 
composite()
 
quit()

Now we will take a look at the script that creates the annotations. There are three functions that create annotations. The time_items function creates the time annotation, which displays the simulation time in the upper right hand corner of the image. The legend_items function creates the legend in the left half of the image, which consists of a color bar with a title and labels indicating that the top of the color bar represents high values and the bottom represents low values. The color bar is an image that was created by cropping an image of a Pseudocolor plot from VisIt down to just the color bar. The progress_bar_items function creates a progress bar along the top of the image that provides a visual cue of the progress of the simulation. The progress bar can be broken down into multiple sections, where each section has a label and a color; in our movie it has only a single section.

Here is the script.

import sys
 
from visit_utils import *
from visit_utils.qannote import *
 
import json
import os
from os.path import join as pjoin
 
image_dir = "images"
output_dir = "output"
 
blobs_times = pjoin(output_dir, "blobs_times.json")
hot_cold_color_bar = pjoin(image_dir, "Hot_cold_color_bar_49x257.png")
 
width = 1280
height = 720
 
def fetch_proper_time(time_state):
    times = json.load(open(blobs_times))
    time = times[time_state]
    return time
 
def legend_items(foreground, background, time_state):
    x_offset = 80
    y_offset = 120
 
    items = [
            Image({ "image": hot_cold_color_bar,
                    "x": x_offset, "y": y_offset,
                    "vert_align": "bottom",
                    "horz_align": "center"}),
            Text({  "text": "Density",
                    "color": foreground,
                    "x": x_offset, "y": y_offset - 14,
                    "horz_align": "center",
                    "vert_align": "bottom",
                    "font/size": 18,
                    "font/bold": True}),
            Text({  "text": "High",
                    "color": foreground,
                    "x": x_offset + 30, "y": y_offset - 2,
                    "horz_align": "left",
                    "vert_align": "top",
                    "font/size": 14,
                    "font/bold": True}),
            Text({  "text": "Low",
                    "color": foreground,
                    "x": x_offset + 30, "y": y_offset + 256,
                    "horz_align": "left",
                    "vert_align": "bottom",
                    "font/size": 14,
                    "font/bold": True})
            ]
    return items
 
def time_items(foreground, background, time_state):
    time = fetch_proper_time(time_state)
    x_offset = width * 0.86
    y_offset = 30
    value_offset = 160
 
    items = [
            Text({ "text": "Time:",
                   "x": x_offset + 5, "y": y_offset,
                   "color": foreground,
                   "horz_align": "left",
                   "vert_align": "bottom",
                   "font/name": "Arial",
                   "font/size": 18,
                   "font/bold": True}),
            Text({ "text": "%3.0f\\0x03bcs" % time,
                   "x": x_offset + value_offset, "y": y_offset,
                   "color": foreground,
                   "horz_align": "right",
                   "vert_align": "bottom",
                   "font/name": "Arial",
                   "font/size": 16,
                   "font/bold": True})
            ]
    return items
 
def progress_bar_items(foreground, background, time_state):
    time = fetch_proper_time(time_state)
    total_time = 199.0
    bar_width = 1200
 
    s1 = 1.0
    bar_position = (float(time) / total_time)
 
    items = [
            MultiProgressBar({ "x": 40, "y": 37,
                               "width": bar_width, "height": 15,
                               "bg_color": (0, 0, 0, 0),
                               "force_labels": True,
                               "segment/ranges": [s1],
                               "segment/labels": [""],
                               "segment/colors": [(56, 216, 233, 255)],
                               "position": bar_position})
            ]
    return items
 
def render_overlay(foreground, background, time_state, input_file, output_file):
    background_image = Image({ "image": input_file})
    items = [background_image]
    items.extend(legend_items(foreground, background, time_state))
    items.extend(time_items(foreground, background, time_state))
    items.extend(progress_bar_items(foreground, background, time_state))
    Canvas.render(items, background_image.size(), output_file)
 
def render_black(time_state, input_file, output_file):
    render_overlay((255, 255, 255, 255), (0, 0, 0, 255),
        time_state, input_file, output_file)
 
time_state = int(sys.argv[1])
input_file = sys.argv[2]
output_file = sys.argv[3]
 
render_black(time_state, input_file, output_file)

quit()

To run the compositing, type the following command. The composite script invokes the annotation script once for each frame.

visit -cli -nowin -s scripts/composite.py

This will create the composited images over time. They will look like the following image.

Image:Visit_tutorial_composited_image.png

Creating the title slide

The title slide is created using a script that is similar to the one used to create the annotations for the simulation animation frames. The title_items function creates all the items in the title slide. First, it creates the three logos that appear in the lower left hand corner. The images for the logos can typically be found on the internet with a search. Next, it adds a document identifier in the lower right hand corner, followed by the main title in the center of the image. Finally, it adds a text box with an auspices statement that indicates funding information.

Here is the script.

#!/usr/bin/env python
 
from visit_utils import *
from visit_utils.qannote import *
from visit_utils.qannote.items import Rect
 
from os.path import join as pjoin
 
AUSPICES_TEXT  = "This work was performed under the auspices of the U.S. "
AUSPICES_TEXT += "Department of Energy by Lawrence Livermore National "
AUSPICES_TEXT += "Laboratory under contract DE-AC52-07NA27344. Lawrence "
AUSPICES_TEXT += "Livermore National Security, LLC"
 
images_dir = "images"
output_dir = "output"
 
width = 1280
height = 720
 
def create_text_box(text, x_offset, y_offset, width, height, font_size,
    foreground, background):
    items = [
            TextBox( {"x": x_offset, "y": y_offset,
                      "width": width, "height": height,
                      "fg_color": foreground,
                      "text": text,
                      "font/bold": False,
                      "font/size": font_size})
            ]
    return items
 
def title_items(foreground, background):
    doe_logo = pjoin(images_dir, "DOE_logo_white_174x44.png")
    nnsa_logo = pjoin(images_dir, "NNSA_logo_white_161x44.png")
    llnl_logo = pjoin(images_dir, "LLNL_logo_white_258x44.png")
    items = [
            Rect(  { "x": 0, "y": 0,
                     "width": width, "height": height,
                     "color": background}),
            Image( { "image": doe_logo,
                     "x": 15, "y": height - 55,
                     "horz_align": "left",
                     "vert_align": "bottom"}),
            Image( { "image": nnsa_logo,
                     "x": 225, "y": height - 55,
                     "horz_align": "left",
                     "vert_align": "bottom"}),
            Image( { "image": llnl_logo,
                     "x": 420, "y": height - 55,
                     "horz_align": "left",
                     "vert_align": "bottom"}),
            Text(  { "text": "LLNL-VIDEO-694066",
                     "color": foreground,
                     "x": width - 10, "y": height - 5,
                     "horz_align": "right",
                     "vert_align": "bottom",
                     "font/size": 17}),
            Text(  { "text": "Blobs Flowing Through Space",
                     "color": foreground,
                     "x": width / 2, "y": height * .3,
                     "horz_align": "center",
                     "vert_align": "center",
                     "font/size": 35}),
            Text(  { "text": "VisIt Advanced Movie Making Tutorial",
                     "color": foreground,
                     "x": width / 2, "y": height * .4,
                     "horz_align": "center",
                     "vert_align": "center",
                     "font/size": 35})
            ]
    items.extend(create_text_box(AUSPICES_TEXT,
                                 200, height * .7,
                                 880, 54, 15, foreground, background))
    return items
 
def render_title(foreground, background):
    items = title_items(foreground, background)
    output_file = pjoin(output_dir, "title.png")
    Canvas.render(items, (width, height), output_file)
 
render_title((255, 255, 255, 255), (0, 0, 0, 255))
 
quit()

To run the script, type the following command.

visit -cli -nowin -s scripts/title_slide.py

This will create the following image.

Image:Visit_tutorial_title_slide.png

Creating the final movie

The final movie is created using a script that combines all the images into a linear sequence that is then encoded into an MPEG movie. Two types of operations are used to create the final sequence of images. The first is the hold function, which copies an image to the next entry in the sequence. The second is the blend function, which blends two images by a specified percentage and writes the result to the next entry in the sequence. The script performs the following steps.

  • Hold the title image for 100 frames.
  • Blend the title image into the first image of the time animation in 50 frames.
  • Hold the first image of the time animation for 100 frames.
  • Go through the time animation, holding each image for two frames.
  • Hold the last image of the time animation for 100 frames.

Once the final sequence of images is created, the frames are encoded into an mpeg movie.

This will generate 750 frames (100 + 50 + 100 + 2 x 200 + 100), which at 30 frames per second will last 25 seconds.

Here is the script.

import os
import shutil
 
from visit_utils import *
 
from os.path import join as pjoin
 
def hold(a, output_base, index):
    command = "cp %s %s" % (a, output_base % index)
    common.sexe(command, echo=True)
 
def blend(a, b, percent, output_base, index):
    command = "composite %s -blend %f %s %s" % (b, percent, a, output_base % index)
    common.sexe(command, echo=True)
 
def resize(a, output_base, percent, index):
    command = "convert %s -resize %d%% %s" % (a, percent, output_base % index)
    common.sexe(command, echo=True)
 
#
# Create the output directory.
#
output_dir = "output"
movie_dir = pjoin(output_dir, "final_movie")
low_res_movie_dir = pjoin(output_dir, "final_low_res_movie")
 
if not os.path.isdir(output_dir):
    os.mkdir(output_dir)
if not os.path.isdir(movie_dir):
    os.mkdir(movie_dir)
if not os.path.isdir(low_res_movie_dir):
    os.mkdir(low_res_movie_dir)
 
title_name = pjoin(output_dir, "title.png")
blobs_base = pjoin(output_dir, "composite_animation", "blobs.1280x720.%04d.png")
movie_name = pjoin(output_dir, "blobs_final.1280x720.mpg")
low_res_movie_name = pjoin(output_dir, "blobs_final.640x360.mpg")
 
output_base = pjoin(movie_dir, "comp.final.%04d.png")
low_res_output_base = pjoin(low_res_movie_dir, "comp.final.%04d.png")
 
#
# Hold the title.
#
index = 0
for i in range(0, 100):
    hold(title_name, output_base, index)
    index += 1
 
#
# Blend the title to the first movie frame.
#
for i in range(0, 50):
    blend(title_name, blobs_base % 0, i * 2, output_base, index)
    index += 1
 
#
# Hold the first frame of the movie.
#
for i in range(0, 100):
    hold(blobs_base % 0, output_base, index)
    index += 1
 
#
# Do the movie, duplicating each image.
#
for i in range(0, 200):
    hold(blobs_base % i, output_base, index)
    index += 1
    hold(blobs_base % i, output_base, index)
    index += 1
 
#
# Hold the last frame of the movie.
#
for i in range(0, 100):
    hold(blobs_base % 199, output_base, index)
    index += 1
 
#
# Encode the movie.
#
encoding.encode(output_base, movie_name, fdup=1)
 
#
# Encode the low resolution version of the movie.
#
for i in range(0, index):
    resize(output_base % i, low_res_output_base, 50, i)
 
encoding.encode(low_res_output_base, low_res_movie_name, fdup=1)
 
quit()

To run the script, type the following command.

visit -cli -nowin -s scripts/create_final_movie.py

This creates the final movie in the output directory, along with a half resolution version. Typically, you will want to create the movie at the highest resolution you will need and then create lower resolution versions from it using the technique at the bottom of the script.
