

 
Introduction to bane, Using gkms

This tutorial does not work with any recent versions of Teem. Sorry.

What follows is a demonstration of how the bane library can be used, via the gkms command-line program, to generate opacity functions for direct volume rendering of scalar fields. gkms is made from bin/gkms.c in the src/bane directory of teem. It is compiled and copied to teem's architecture-specific bin directory as part of doing a "make install" on teem.

This also ends up being an introduction to the ideas behind my MS thesis and VolVis98 paper. While familiarity with those would be helpful here, it is not essential.

The bane library is not especially optimized for speed, because having a correct implementation was more important to me than a fast one. Not to worry: the slowest part of the analysis takes at most about a minute. All the subsequent steps, which might be part of an interactive loop, are much, much faster.

STEP 0: Preparing the data
Let's use a standard but non-trivial dataset: the engine-block CT scan. We will create a volume dataset, in nrrd format, called engine-crop.nrrd.
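
The details of obtaining and cropping the engine data are not repeated here, but for completeness, here is a minimal sketch of producing a file that gkms can read. It uses the pynrrd Python package (which is not part of Teem), and a synthetic blob stands in for the real CT data; only the nrrd.write() call matters.

  # Minimal sketch: write a scalar volume in NRRD format with pynrrd.
  # The synthetic sphere below is only a stand-in for the cropped engine data.
  import numpy as np
  import nrrd

  z, y, x = np.mgrid[-1:1:128j, -1:1:128j, -1:1:128j]
  vol = (255 * np.exp(-4 * (x**2 + y**2 + z**2))).astype(np.uint8)
  nrrd.write('engine-crop.nrrd', vol)    # the file used in the following steps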

All the subsequent steps in opacity function generation use the executable called gkms, which is built as part of installing teem. Running gkms with no arguments produces a usage message which lists its capabilities:

          --- Semi-Automatic Generation of Transfer Functions ---
gkms hvol ... Make histogram volume
gkms scat ... Make V-G and V-H scatterplots
gkms info ... Project histogram volume for opacity function generation
 gkms pvg ... Create color-mapped pictures of p(v,g)
gkms opac ... Generate opacity functions
gkms mite ... Modify opacity function to work with "mite"
 gkms txf ... Create Levoy-style triangular 2D opacity functions
The following steps will use each of these capabilities in sequence.

STEP 1: Creating the histogram volume (with gkms hvol ...)
The data structure which is used as the basis for all later analysis steps is called the "histogram volume". It is a record, in the form of a histogram, of the relationship between the three quantities that matter for describing the boundaries present in a dataset: "data value" (same as gray value, or intensity, or scalar value), gradient magnitude, and the second directional derivative along the gradient direction.
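
To make the idea concrete, here is a small numpy sketch of what a histogram volume records; it is not bane's implementation, and the bin count and central-difference derivatives are my own simplifications. For every voxel it measures the value v, the gradient magnitude g, and the second directional derivative h along the gradient (g.Hg/|g|^2, with H the Hessian), then bins the (v, g, h) triples into a three-dimensional histogram.

  import numpy as np
  import nrrd

  vol, _ = nrrd.read('engine-crop.nrrd')
  f = vol.astype(float)

  gz, gy, gx = np.gradient(f)                      # first derivatives (axis order z, y, x)
  gmag = np.sqrt(gx**2 + gy**2 + gz**2)

  fzz, fzy, fzx = np.gradient(gz)                  # Hessian rows by repeated differencing
  fyz, fyy, fyx = np.gradient(gy)
  fxz, fxy, fxx = np.gradient(gx)

  eps = 1e-10                                      # avoid division by zero in flat regions
  h = (gx*(fxx*gx + fxy*gy + fxz*gz) +
       gy*(fyx*gx + fyy*gy + fyz*gz) +
       gz*(fzx*gx + fzy*gy + fzz*gz)) / (gmag**2 + eps)

  # bin every voxel's (value, gradient magnitude, 2nd derivative) triple
  samples = np.stack([f.ravel(), gmag.ravel(), h.ravel()], axis=1)
  hvol, edges = np.histogramdd(samples, bins=(256, 256, 256))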

STEP 2: Inspecting the histogram volume with scatterplots (with gkms scat ...)
Once you've made the histogram volume, it's nice to be able to look at it. There are at least two reasons for this. First, you want to be sure that the inclusion ranges set in the previous step were appropriate: important structures at high derivative values may have been clipped out, or the inclusion may have been too generous, compressing the interesting variation in derivative value into a small number of bins. Second, looking at these scatterplots may be all you need to start setting opacity functions, since they can tell you about the boundaries present in the data. Also, looking at the scatterplots for known datasets can build your intuition for the kinds of patterns measured by the histogram volume, and how they relate to the data itself.
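
Conceptually, the scatterplots are nothing more than projections of the histogram volume: sums along one derivative axis, shown on a log scale so that sparsely populated bins (the boundary arcs) remain visible. A sketch, continuing from the hvol array in the previous sketch and using matplotlib (not part of Teem) to save the images:

  import numpy as np
  import matplotlib.pyplot as plt

  vg = hvol.sum(axis=2)      # collapse the 2nd-derivative axis: value vs. gradient magnitude
  vh = hvol.sum(axis=1)      # collapse the gradient axis: value vs. 2nd derivative

  for name, scat in (('vg', vg), ('vh', vh)):
      # log scale; transpose/flip so data value runs left-right, derivative bottom-up
      plt.imsave(name + '-scatterplot.png', np.log1p(scat).T[::-1], cmap='gray')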

STEP 3: Distilling the Histogram Volume (with gkms info ... )
My method for analyzing the histogram volume in order to produce an opacity function does not actually need to look at the entire histogram volume itself; it only needs to look at certain projections, or distillations, of its contents. It is these projections which are analyzed in later stages to produce opacity functions. For lack of a better term, I call these projections "info" files.
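
My reading of the 1-D projections, in numpy terms (a sketch against the hvol and edges arrays from the STEP 1 sketch, not bane's code): for each data value v, take the average gradient magnitude g(v) and the average second derivative h(v) over all voxels with that value.

  import numpy as np

  g_centers = 0.5 * (edges[1][:-1] + edges[1][1:])   # bin centers, gradient axis
  h_centers = 0.5 * (edges[2][:-1] + edges[2][1:])   # bin centers, 2nd-derivative axis

  counts_v = hvol.sum(axis=(1, 2))                   # voxels per data-value bin
  eps = 1e-10
  g_of_v = (hvol.sum(axis=2) * g_centers).sum(axis=1) / (counts_v + eps)
  h_of_v = (hvol.sum(axis=1) * h_centers).sum(axis=1) / (counts_v + eps)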

STEP 4: Making pictures of p(v,g) (with gkms pvg ... )
The colormapped graph of the two-dimensional position function p(v,g) is effectively a portrait of the boundary information that was captured by the histogram volume. Although most relevant to the generation of two-dimensional opacity functions, it is also worth inspecting when making one-dimensional opacity functions, just to learn how appropriate (if at all) 1-D opacity functions are to the structures that exist in the data. This also includes a small digression on what I view as the most problematic (weak) aspect of my Master's thesis research.
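
As a sketch of what such a picture shows (building on the arrays from the previous sketches): the position function is, as best I can restate it here, p = -sigma^2 * h / max(g - gthresh, 0), so a diverging colormap puts the zero-crossing of the second derivative (the middle of a boundary) at the middle of the map. The sigma and gthresh settings below are hypothetical; see the thesis and paper for the exact formula and its derivation.

  import numpy as np
  import matplotlib.pyplot as plt

  eps = 1e-10
  h_vg = (hvol * h_centers).sum(axis=2) / (hvol.sum(axis=2) + eps)   # mean 2nd deriv per (v,g) bin
  g_grid = np.broadcast_to(g_centers, h_vg.shape)                    # gradient value of each bin

  sigma, gthresh = 5.0, 4.0                                          # hypothetical knob settings
  p_vg = -sigma**2 * h_vg / np.maximum(g_grid - gthresh, eps)

  plt.imsave('pvg.png', p_vg.T[::-1], cmap='coolwarm', vmin=-sigma, vmax=sigma)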

STEP 5: Making opacity functions (with gkms opac ... )
Finally, opacity functions. The reason their generation is "semi-automatic" is that the user has to supply a final ingredient: a mapping from boundary distance to opacity, the so-called "boundary emphasis function". Plus, the "sigma" and "gthresh" knobs may need some adjustment ...
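
In code terms, the composition is simple. Here is a sketch reusing g_of_v and h_of_v from the STEP 3 sketch; the tent-shaped boundary emphasis function and its width are hypothetical stand-ins for whatever the user actually supplies.

  import numpy as np

  def boundary_emphasis(p, width=2.0, peak=1.0):
      # Tent: fully opaque at zero boundary distance, transparent beyond +/- width.
      return peak * np.clip(1.0 - np.abs(p) / width, 0.0, 1.0)

  sigma, gthresh = 5.0, 4.0                                  # same hypothetical knobs as in STEP 4
  p_v = -sigma**2 * h_of_v / np.maximum(g_of_v - gthresh, 1e-10)
  alpha_v = boundary_emphasis(p_v)                           # one opacity per data-value bin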

SUMMARY: All the steps, the remaining parameters
A review of all the steps outlined in the previous sections, and of the various parameters that are left to play with in this method.