Scene Description Language



The mental images scene description language allows reading a scene from an ASCII or binary file, called the .mi file. It contains a list of commands and scene entities. Commands are instructions that set options, such as the verbosity or external shaders to be linked; scene entities describe geometric objects, shaders, and other components of the scene.
Animations are rendered by setting up the first frame and rendering it, followed by scene modifications and another render command for every successive frame.

This document describes version 2 of the .mi format, abbreviated as .mi2. Although mental ray 2.x still supports the frame-based scene definition method of version 1.x of the format, this is not recommended for future designs and is not described in this manual. The two versions of the format are easily distinguished: .mi1 files contain frame statements, while .mi2 files contain instance statements.
The recommended file extension is .mi regardless of the version.

This section discusses the parts that make up an .mi file. In this section, a less formal syntax is used for the syntax description: a bar ``|'' denotes alternatives, items enclosed in tall square brackets ``[ ]'' are optional, and the ellipsis ``...'' denotes omission, as in ``zero or more repetitions of the preceding construct.'' Literal text is set in teletype, while variable metasymbols are set in italics. All other punctuation characters are literals. Strings are quoted with double quotes; this includes all names. Names, such as material, light, or object names, need not be quoted, but quoting is highly recommended to avoid conflicts with reserved words (future versions may reserve more words than described in this manual) and to allow non-alphabetic characters. Without quotes, only lowercase and uppercase letters, underscores, and digits may be used; a digit may not be the first character of an unquoted name. No such restrictions apply to quoted names.

Integers are distinguished from floating-point numbers by appending the suffix int, as in degreeint. Integers are an optional ``+'' or ``-'' sign followed by a sequence of digits ``0'' through ``9''. Floating-point numbers follow the same rules, but may optionally contain a decimal point ``.'' and an optional exponent. If the number begins with a decimal point, a leading zero is assumed. Exponential notation has the form nem, which is interpreted as n * 10^m. Strings can be distinguished from numbers because the grammar always forces them to be enclosed in double quotes.

The ``#'' character introduces comments, unless quoted, except between $code and $end code. Comments extend to the end of the line. Whitespace is ignored.

By convention, the first line of any .mi file should begin with the three characters #mi, followed by a blank (not a tab), followed by the partial or full version number of the earliest mental ray version required. For the syntax described in this manual this is 2.0.17. This comment serves as a ``magic number'' that helps interactive programs or utilities like file to decide whether a given file is a .mi file or something else. It is not parsed by mental ray itself.
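
For example, a file written against the syntax described in this manual could begin with the following line (a sketch; use the earliest version your scene actually requires):

    #mi 2.0.17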


Shader Declarations

All shading functions linked with a code or link statement, and all shading functions built into mental ray, must be declared. Shading functions accept a pointer to an arbitrary parameter structure as their third argument, and mental ray must know the structure declaration in order to put together the parameter block according to C structure layout conventions. Usually, declarations are included from a separate file using the $include statement. For a detailed description of shader declarations, see the chapter on writing shaders.

A declaration is a top-level statement that informs mental ray of the shader function name, the return type, and the types and names of all the parameters. Certain options can also be specified.

    declare shader 
        [type] "shader_name" ( 
            type "parameter_name", 
            type "parameter_name", 
            ... 
            type "parameter_name" 
        ) 
        [ version versionint ] 
        [ apply shader_type_list ] 
        [ options ] 
    end declare 
 

It is recommended that shader_name and parameter_name be enclosed in double quotes to disambiguate them from reserved keywords and to allow special characters such as punctuation marks. Note that old-style declarations of the form

    declare [type] "shader_name" (...) 
 

are also still supported for backwards compatibility, but they should not be used because they do not support versioning. The optional (but highly recommended) version is an arbitrary integer that identifies the shader version. The default is 0. See the discussion of shader versioning in the Writing Shaders chapter.

Parameter Types

The declaration gives the type and name of the shader to declare, which is both the name of the C function and the name used when the shader is called, followed by a list of parameters with types. Names are normally quoted to disambiguate them from keywords reserved by mental ray. Commas separate parameter declarations. The following types are supported:

boolean
A boolean is either true or false. Possible values are on, off, true, and false.

integer
Integers are in the range -2^31 ... 2^31-1.

scalar
Scalars are floating-point numbers, defined as an optional minus sign, a sequence of digits containing an optional decimal point at any place, followed by an optional decimal exponent. A decimal exponent is the letter e followed by a positive or negative integer. Examples are 1.0, .5, 2., 1.e4, or -2.3e-6 .

vector
A vector is a sequence of three scalars as defined above, describing the x, y, and z components of the vector.

transform
A transformation is a 4x4 matrix of scalars, with the translation in the last row. The data structure consists of an array of 16 miScalars in row-major order.

color
A color is a sequence of three or four scalars as defined above, describing the red, green, blue, and alpha components of the color, in this order. If alpha is omitted, it defaults to 0.0.

shader
Shader names defined with the shader statement (not to be confused with the shader declaration statement) can be passed to other shaders that have parameters of this type. The shader receives a tag that it can call using mi_call_shader.

color texture
Color textures name a texture as defined by a color texture statement in the .mi file. That color texture statement names either a texture file, or a texture shader followed by a user parameter list. Note that a color texture does not name the texture shader directly. When a color texture is evaluated, it returns an RGBA color.

scalar texture
Scalar textures are equivalent to color textures, except that they name a scalar texture statement in the .mi file. When a scalar texture is evaluated, it returns a scalar (a single floating-point number). This is most often used to apply a texture map to a scalar material parameter such as transparency.

vector texture
Vector textures are another variation; they name a vector texture statement in the .mi file, which returns a vector when evaluated. Bump maps on materials are typical applications for vector textures. (Since mental ray regards all types of textures as generic shading functions, the distinction between different texture types may disappear in future versions.)

light
Lights specify a light as defined by a light statement in the .mi file, which names a light shader. Like textures, light parameters do not name light shaders directly.

geometry
Geometry references objects or instances. They are useful only for geometry shaders, which can introduce or modify geometric scene entities. Geometry shaders have a different shader API than other shaders. They are described in a separate part of this manual. Shaders and phenomena whose return type is geometry can be used in instance definition statements (see the instance top-level statement above).

material
Material references. Their purpose is to operate on materials in phenomenon definitions, which may contain materials in addition to shaders. Shaders and phenomena whose return type is material can be used in material definition statements (see the material top-level statement above).

struct { ... }
Structures define a sub-list of parameters. This is normally used to build arrays of structures, for example to declare an array of textures, each of which comes with a blending factor. The ellipsis ... stands for another comma-separated sequence of type/parameter_name pairs.

array type
Arrays are different from all other types in that they are not named. The array keyword is a prefix to any of the above types that turns a single value into a one-dimensional array of values. For example, array scalar "terms" declares a parameter named terms that is an array of scalars. The array size is unlimited. Arrays of arrays are not supported; but arrays of structs containing arrays can be used.

When choosing names, avoid double colons and periods, which have a special meaning when accessing interface parameters in phenomenon subshaders.

The return type of the shader must either be a simple type (any type except struct or array), or an unnamed struct containing only simple types. Unnamed means that there is no name between the struct keyword and the opening brace.
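
As an illustration, here is a hypothetical declaration that exercises several of the parameter types described above; the shader name, parameter names, and apply list are invented for this sketch and do not correspond to any real shader library:

    declare shader
        color "layered_blend" (
            color "base",
            array struct {
                color texture "map",
                scalar "weight"
            } "layers",
            boolean "additive",
            vector "offset"
        )
        version 1
        apply material, texture
    end declare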

Shader Apply Flags

The apply statement allows specification of possible uses for the shader. The shader_type_list consists of a comma-separated list of one or more of the following keywords:

    lens                lens shader in a camera 
    material            material shader in a material 
    light               light shader 
    shadow              shadow shader in a material 
    environment         environment shader in a material or camera 
    volume              volume shader in a material or camera 
    texture             texture shader 
    photon              photon shader in a material 
    geometry            geometry shader 
    displace            displacement shader in a material 
    emitter             photon emitter shader in a light 
    output              output shader in a camera 
 

If the apply statement is missing, the applicability of a shader is unknown. This will commonly be the case for base shaders, for example, which can be used for any purpose. Apply lists help user interfaces to categorize shaders. At this time there are no checks to make sure shaders are used only in legal contexts. Future versions may use a material shader as shadow or photon shader if its applicability list allows it and there is no other shadow or photon shader listed in the material.

Declaration Options

Declarations of shaders (and phenomena, see below) allow a number of options to be specified in the declaration block. These options state requirements of the shader or phenomenon, that is, conditions that must be met before the shader or phenomenon can run correctly, or information that tells mental ray how to call it. Before rendering, mental ray collects the requirements of all shaders defined in the scene, checks for conflicts, and adjusts global options and camera parameters to suit the shaders.

For example, if a shader specifies that it can operate only if ray tracing is enabled, it should specify the trace on option to tell mental ray to enable ray tracing even if no ray tracing was specified in the global options statement.

Here is the complete list of available options. If an option is not present, the default is ``don't care'' unless otherwise noted. These options are equivalent to the corresponding options given in the options top-level statement; refer to the description of option blocks for more details on the operation of these options.

scanline on|off
The scanline rendering algorithm must be turned on or off, respectively.

trace on|off
Ray tracing must be turned on or off. For example, lens shaders that modify the ray direction should set this to on.

shadow off
Shadows must be turned off for this shader to function.

shadow on
Shadowing must be enabled, either regular, sorted, or segmented.

shadow sort
Shadowing must be enabled, either sorted or segmented. Regular is not sufficient.

shadow segment
Segmented shadows must be enabled. Regular or sorted shadows are not sufficient.

face front|back|both
Intersection testing should consider at least front-facing, back-facing, or both front-facing and back-facing geometry, respectively. ``At least'' means that a request for either front-facing or back-facing geometry is met if ``both'' is enabled.

derivative [1] [2]
The object that this shader or phenomenon is attached to (by being named in its material, for example) must have first or second derivatives, respectively, or both. This option has an effect only on free-form surface geometry because mental ray cannot compute derivatives for polygons.

object space
The shader functions only if all geometry is defined in object coordinates.

camera space
The shader functions only if all geometry is defined in camera coordinates.

mixed space
The shader functions only if all geometry is defined in camera coordinates, but it is acceptable if the camera is defined in object coordinates. This is done for certain kinds of walkthrough scenes and is not recommended for general shaders.

smart volume
This flag indicates to mental ray that the declared shader, when used as a volume shader, behaves differently than normal volume shaders: instead of being called after the material shader returns, it is called in place of the material shader, and it is responsible for calling the material shader with mi_call_material. This improves efficiency for volume shaders that create procedural opaque effects, because they have the option of not calling the material shader at all when its contribution would not be visible, avoiding secondary rays.
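
For instance, a hypothetical volume shader that casts rays and requires segmented shadows could declare its requirements as follows (the names are invented; the options match the list above):

    declare shader
        color "fog_volume" (
            color "density",
            scalar "falloff"
        )
        version 2
        apply volume
        trace on
        shadow segment
    end declare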


Shader Definitions

In this document, shaders will be used in many places, denoted by the shader metasymbol. A shader is defined as a shading function name followed by parameters:

    "shader_name" (parameters) 
 

This sequence can be inserted for every instance of shader in the rest of this chapter.

The shader name must have been previously declared with a declare command; see above. Normally, shader libraries containing compiled C shaders come with a $include file that contains all declarations for the shaders in the library. The library itself is typically linked into mental ray with a link command. There are usually many shader references for every declaration, each with a unique set of parameters. The syntax of shader calls is described in the chapter on shaders; they are basically a comma-separated sequence of quoted parameter names, each followed by a parameter value.

The parenthesized parameters list is a comma-separated list of shader parameter assignments that take one of the following three forms:

    "parameter_name" value 
    "parameter_name" = "shader_name" 
    "parameter_name" = interface "ifname" 
 

The first form assigns a constant value to the parameter. The format of constant values depends on the parameter type:

  
    type            value

    boolean         true | false
    integer         valueint
    scalar          value
    vector          x y z
    transform       a00 a01 a02 a03 a10 a11 a12 a13 a20 a21 a22 a23 a30 a31 a32 a33
    color           red green blue | red green blue alpha
    shader          "shader_name"
    color texture   "texture_name"
    scalar texture  "texture_name"
    vector texture  "texture_name"
    light           "light_instance_name"
    struct          {parameters}
    array           [comma-separated value list]

Integer values must be signed 32-bit values. All other numerical values are signed floating-point numbers that may contain a decimal point and/or a decimal exponent introduced by the letter e, as in 1.6e-27. The shader_name must be the name of a named shader from a preceding shader statement; the texture_name must be the name of a previously defined color texture, scalar texture, or vector texture statement, respectively.

The special value keyword null can be used to replace any number, symbol, string, true, or false. It stores the numerical value 0 in the parameter. Its main purpose is to create ``holes'' in arrays by listing nulls between the square brackets.
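
Putting these forms together, a hypothetical anonymous shader definition might pass constants of several types, including an array with a hole created by null (all names are invented for this sketch):

    "blend" (
        "additive"  on,
        "offset"    0.2 0.0 1.5,
        "base"      1.0 0.5 0.5 1.0,
        "weights"   [0.5, null, 1.0]
    )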

The two non-constant forms of parameter assignment are explained later.

The above shader definition form is also called an anonymous shader because the shader name/parameter pair is formed on the fly and used in place. Sometimes it is useful to give a name to a shader/parameters pair using a shader statement and use the pair more than once:

    shader "named_shader" 
        "shader_name" (parameters) 
 

Such pairs are called named shaders. After the pair has been set up with a shader statement, it can be used in any place where a shader can be used, as an alternative to the anonymous shader definition listed above:

    = "named_shader" 
 

This is especially useful if the same shader/parameters pair is used in many different places in the scene, and it changes for every frame. Since shader statements allow incremental changes, an incremental change to a named shader affects all places that reference it. Without named shaders it would be necessary to incrementally change every scene entity containing the shader.
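
A minimal sketch of this pattern, with invented names: a named shader is defined once, referenced wherever needed with = "wood", and later changed incrementally, which updates every place that references it:

    shader "wood" "my_texture" ("scale" 1.0)

    # later, e.g. for the next frame:
    incremental shader "wood" "my_texture" ("scale" 2.0)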

Shader Lists

In most constructs accepting a shader, shader lists are also accepted. A shader list consists of one or more shader items, like the two above, in sequence. For example, suppose that a named shader has been defined with the following command:

    shader "named_shader2" "shader2" (parameters) 
 

then the following shader list can be written:

    "shader1" (parameters) 
    = "named_shader2" 
    "shader3" (parameters) 
 

This shader list will call three shaders in sequence, shader1, shader2, and shader3, in this order, each with its parameters. All shaders get the same result pointer, so each operates on the results of the previous. A shader list like this can be substituted for all instances of the metasymbol shader_list in this chapter.

Shader lists are maintained by storing a link to the next shader in the previous shader. In the above example, the anonymous shader shader1 contains a link to named_shader2, which contains a link to shader3. This means that once this list is set up, any reference to named_shader2 will implicitly also call shader3, because the link in named_shader2 remains in the shader until changed by another shader list. This can have surprising results. It is not a problem with anonymous shaders because, not having a name, they cannot be referenced in more than one place. In general it is a good idea to avoid putting named shaders in shader lists.

Shade Trees

Instead of assigning a constant value to a parameter in a shader definition, it is possible to assign a shader:

    "parameter_name" = "shader_name" 
 

For parameters assigned in this way, no value is stored in the shader definition; the value is obtained by calling shader_name at runtime. For example, if the ambient parameter of a material shader has the constant value 1 0 0, it is always red; but if another shader is assigned to it, that other shader is called when the material shader asks for the value using the mi_eval function or one of its derivatives. The other shader could be a texture shader, for example, resulting in a textured ambient value.

The return type of the assigned shader must agree with the parameter type. If the return type of the assigned shader is struct, it is possible to select a structure member by appending a period and the name of the struct member to the shader name. Consider the following assignment:

     declare shader
        color "phong" (color "ambient",
                       color "diffuse",
                       color "specular")
        version 1
     end declare

     declare shader
        struct {color "a", color "b"}
              "texture" (color texture "picture")
        version 1
     end declare

     color texture "fluffy" "/tmp/image.pic"

     shader "map" "texture" (
        "picture"   "fluffy")

     shader "mtlsh" "phong" (
        "ambient"   0.3 0.3 0.3,
        "diffuse"   = "map.a",
        "specular"  = "map.b")

This defines a material shader that does not support texturing in any way, because it has no parameters of type shader or color texture. Yet, shader assignments allow its diffuse and specular components to be textured without the phong shader being aware of it. Whenever phong accesses its ambient parameter value by calling mi_eval, it gets the constant color 0.3 0.3 0.3, but when it accesses its diffuse or specular colors, a call to the shader map results, whose result is then returned to the phong shader.

In this example, the map shader returns two colors a and b, which are selected in the shader assignment by appending .a and .b to map. (For this reason, periods should be avoided in parameter names.) If the texture shader had returned only a single color, only "map" would have been assigned, without appending a period and a structure member name.

In the example, map is assigned twice. Obviously it is not desirable to actually call it twice, because the first call will already have set both its a and b return values. After the first call from a shader, mental ray caches the return value to avoid further calls. As soon as the shader phong returns, the cache is discarded to ensure that the next call to phong, most likely with a different state, calls map instead of using a stale cache.

Note that shaders that support shade trees must use the mi_eval function to access their parameters. This was done to ensure that only those assigned shaders whose values are actually used are evaluated. For example, a material shader that has two color parameters, one for the front and one for the back side of the surface, will access only one of its parameters.

To see how the phong shader is implemented as a C shader, see the section ``Parameter Assignments and mi_eval'' in the chapter ``Using and Writing Shaders''.

The advantage of shader assignment is that it is not necessary to write shaders to accept procedural values. Without shader assignments, a simple Phong material shader would need parameters of type shader or color texture in addition to the standard ambient, diffuse, and specular parameters. Shader assignment allows writing small, reusable ``base shaders'' that can be easily combined into powerful shade trees, instead of writing large monolithic shaders that are hard to modify and inflexible to use.

The third form of parameter assignment, using the interface keyword, is available only inside phenomena, which are discussed next.

Phenomena

This section describes only the representation of phenomena in the .mi language. The declaration of a phenomenon is very similar to the declaration of a shader, except that the keyword shader is replaced with phenomenon, and a number of new optional statements may appear in the declaration block:

    declare phenomenon 
        [type] "phenomenon_name" ( 
            type "parameter_name", 
            type "parameter_name", 
            ... 
            type "parameter_name" 
        ) 
        [ version versionint ] 
        [ shader "name" ... 
        [ material "name" ... end material 
        [ light "name" ... end light 
        [ instance "name" ... end instance 
        [ roots ] 
        [ options ] 
    end declare 
 

For a description of version, shader, material, light, and instance definitions, see the corresponding section above; the syntax is identical to the one described there. The options are identical to the options described in the shader declaration section above. The roots are described below.

The phenomenon phenomenon_name declared with this statement is available for the definition of shaders just like a shader declared with a declare shader statement. Named and anonymous shader definitions can be derived from either type of declaration. Phenomena were designed to extend the concept of shaders, not replace it.

Phenomenon Interface Parameters

Phenomena, like shaders, have parameters. In the phenomenon case they are called ``interface parameters'' because they form the gateway between the rest of the scene and the internal implementation of the phenomenon. Interface parameters are what makes the phenomenon look like a simple shader to the named and anonymous shader definitions. Phenomena are implemented in terms of subshader nodes, each with their own parameters. Subshader parameters can be assigned from the interface using an assignment of the following form:

    "parameter_name" = interface "ifname" 
 

This looks similar to the shader assignments described above, but when the shader calls mi_eval on a parameter assigned to the interface of the phenomenon it is defined in, no shader is called but the value is obtained from the phenomenon interface. For example:

     declare shader 
        color "phong" (color "ambient",
                       color "diffuse",
                       color "specular")
        version 1
     end declare

     declare phenomenon
        color "phong_phen" (color "col")
        version 1

        shader "sub" "phong" (
            "ambient"   0.3 0.3 0.3,
            "diffuse"   = interface "col",
            "specular"  1.0 1.0 1.0)

        root = "sub"
     end declare

     shader "mtlsh" "phong_phen" (
        "col"   1.0 0.5 0.0)

To the shader definition, the phenomenon phong_phen looks like a shader with a single color parameter col. Internally, it contains the definition of a simple shader sub with three parameters, two of which have constant values and one of which takes its value from the interface. When the shader definition mtlsh is called from a material or elsewhere in the scene, it calls the phenomenon phong_phen with the value 1.0 0.5 0.0 for the interface parameter col. This value is propagated to the diffuse parameter of the shader sub during evaluation of the phenomenon.

It is important to distinguish parameter values, such as 1.0 1.0 1.0 for specular, from shader assignments, which begin with an ``='' sign. In particular, consider the shader parameter type shader: if a shader name is given as the value without an ``='' sign, the named shader will be returned but not called by mi_eval. With an ``='' sign, mi_eval will call the shader and expect it to return another shader (so its return value must have type shader), which is then returned by mi_eval. The latter involves an indirection and is not often used for parameters of type shader. This is a common source of mistakes, and return type mismatches will result in mental ray warning messages.

When calling a phenomenon, all its parameters must pass through the interface. The shader sub and everything else defined inside the phenomenon block is visible only inside the phenomenon, and no names defined outside the phenomenon are visible to definitions inside it. The interface is the only connection point between the inner and outer world. This encapsulation ensures the integrity and completeness of phenomena independently of the scene they are used in.

Phenomena may also contain material and light definitions in addition to shader definitions.

By convention, anonymous shader definitions should not be used in phenomenon declarations. There is no functional disadvantage in using anonymous shader definitions but it makes life difficult for graphical phenomenon editing tools like mental images' Phenomenon Creator, which uses shader names to label the icons and boxes that represent subshaders in its graph and browser views.

The return type of a phenomenon may be any type that is a valid return type for a shader.


Phenomenon Roots

The above example also illustrates an option allowed in phenomenon definitions but not in shader declarations: the phenomenon root. There are several types of root statements:

    root                    material "material_name" 
    root                    function 
    geometry                function 
    volume                  function 
    environment             function 
    lens                    function 
    output                  function 
    output                  ["type"] "format" "filename" 
    contour store           function 
    contour contrast        function 
    
    volume priority         priorityint 
    environment priority    priorityint 
    lens priority           priorityint 
    output priority         priorityint 
 

All of these are optional. The root statement specifies the primary root shader of a phenomenon that is called when the phenomenon is called. In the above example, the mtlsh shader, when called, calls the phenomenon phong_phen, which immediately calls its subshader sub because sub is specified in its root statement.

The root material variant creates a material phenomenon. This type of phenomenon must be declared with the return type material. It is instanced normally with a shader statement, which provides the interface parameter values. The resulting shader is different from regular shaders; its name can be used everywhere a material name is valid. A regular phenomenon that has neither type material nor a root material statement becomes a shader, not a material, when instanced with a shader statement. Material phenomena should be used instead of regular phenomena if the phenomenon depends not only on a single assigned shader, such as a material shader, but also on other material components such as the shadow shader, photon shader, volume shader, and so on. See the description of material statements in the Scene Description Language chapter for an example of a material phenomenon.

In addition to the main root statement, other roots may be defined that reference shaders of other types:

geometry
The geometry shader is evaluated before rendering starts. This allows the phenomenon to introduce procedural geometry into the scene. For example, a light beam phenomenon might install a transparent cone in the scene that bounds the volume effect.

volume
The volume shader or shader list is added to the camera before rendering starts. This allows the phenomenon to specify a global atmosphere that should be installed if the phenomenon is defined.

environment
The environment shader or shader list is added to the camera before rendering starts. This allows the phenomenon to specify a global environment shader that should be installed if the phenomenon is defined. Environment shaders are called when a ray does not intersect any object.

lens
This root does the same for lens shaders: the lens shader or shader list is added to the camera before rendering starts.

output
The output shader or shader list, or the file output statement is added to the camera before rendering starts. This allows the phenomenon to specify output shaders that should be installed if the phenomenon is defined. For example, the phenomenon might write a label frame buffer whose values are picked up by the output shader after rendering completes.

contour store
The contour store function used for contour rendering.

contour contrast
The contour contrast function used for contour rendering.

Note that during rendering, only root has any significance, because the others have been removed and added to the camera or the scene before rendering started. After rendering and output shading complete, all these changes are undone. Note that if the phenomenon is defined several times (using multiple shader statements or anonymous shader definitions that reference it), the roots other than the main root are added to the scene more than once. Shaders attached directly or indirectly to the output, contour store, and contour contrast roots may not use interface or shader parameter assignments.

The priority statements provide control over the placement of the specified shaders or shader lists in the corresponding shader list in the camera. Shaders with greater priority numbers are appended after shaders with smaller priority numbers, and hence execute later. Shaders with no priority number have priority 0, so they execute before shaders with positive priority numbers.
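
As a sketch, a hypothetical phenomenon could name its main subshader as the primary root and install a global atmosphere with a volume root, ordered after other volume shaders by a priority (all names here are invented):

    declare phenomenon
        color "beam_phen" (color "tint")
        version 1

        shader "main_sub" "my_beam" ("tint" = interface "tint")
        shader "atmo_sub" "my_fog" ("density" 0.3)

        root = "main_sub"
        volume = "atmo_sub"
        volume priority 5
    end declare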

Commands

    $include "filename" 
    $include <filename> 
 

The $ sign must appear in column 1 of the line. The named file is included in (pasted into) the .mi file, replacing the $include statement. Includes can be nested. The main purpose is to include declarations (see below), but materials, light sources, and even objects can be included. The only place where $include cannot be used is between $code and $end code; use the standard C #include statement there. The included file is read on the client host only. If the filename is enclosed in angle brackets, the standard include path is prepended, by default /usr/include. The standard path can be changed with the -I command-line option.

    set "name" ["value"] 
 

Assigns the value value to the variable name. Variables are not used by mental ray itself but provide a general syntax for passing parameters from translators to interactive programs that read scene files without actually parsing any geometric data. For example, translators can store the translator version and name, source scene name, frame range, and other useful information in variables.

    [min] version "string" 
    max version "string" 
 

This command informs mental ray that this .mi file requires the given version. min means ``at least,'' max means ``at most.'' Version strings consist of numbers separated by dots, such as "1.2.3.4". The string can underspecify the version, as in "2.1". Missing numbers are implicitly assumed to be 0, so "2.1" becomes "2.1.0.0". Each number, beginning with the first, is checked in turn. If the number in the string is greater (min) or less (max) than the corresponding version number built into mental ray, an error message is printed and mental ray aborts; otherwise the next number is considered. If all given numbers pass the test, mental ray continues. This command is recommended for declaration files included with $include. Note: some versions of mental ray require that included files contain version limits, because incorrect declarations can have severe consequences.
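
A declaration file might therefore begin with lines such as the following, pinning it to the 2.x syntax (the exact numbers are illustrative):

    min version "2.0.17"
    max version "2.1"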

    verbose on|off|levelint 
 

This command controls verbose messages. There are seven levels: fatal errors (0), errors (1), warnings (2), progress reports (3), informational messages (4), debugging messages (5), and verbose debugging messages (6). All message categories numerically less than level are printed. verbose off is equivalent to level 2 (fatal errors and errors); verbose on is equivalent to level 5 (everything except debugging messages). Verbose messages can slow down mental ray while parsing. The slowdown is significant on Windows NT because of slow scrolling. The verbose command can be overridden with the -verbose command-line option.

    echo "message" 
 

The named message, which must be enclosed in double quotes, is printed to stdout. The echo command is executed synchronously during parsing the .mi file. Echoing requires verbosity level 4 or higher.

    call shader_list [, "camera_inst" "options"] 
 

The given shader is called immediately, and parsing stops until the shader completes. Since the shader is called during parsing and not during tessellation or rendering, the entire state passed to the shader is filled with nulls. If the name of a camera instance entity and the name of an options entity are given, state->options and state->camera are set up for the shader. The return value is ignored. The call statement is intended for early initialization of shader packages. Shader init and exit functions are not called.

    system "shell_command" 
 

This command starts a shell, which executes the named shell_command. The shell command must be enclosed in double quotes. mental ray waits until the shell command has completed; this can be defeated by ending the command with the shell background operator ``&''. The system command, like echo, is executed while parsing, not during rendering. Its main purpose is writing finished pictures to an output device.

    code "filename" 
 

The named filename is interpreted as a C source file ending with the extension ``.c''; it is compiled and linked into mental ray. From this point on, the shaders it defines are available in mental ray as shading functions. For example, if the source defines a C function myshader with the usual three parameters result, state, and paras (see the section on writing shaders for details), the name myshader may be used in materials, lights, textures and so on as the quoted shader name. The command-line option -code provides an alternative way of compiling and linking C source. Multiple code commands are possible. Note that every shading function must also be declared; see below.

    $code 
    C source text 
    $end code 
 

The $ signs must appear in column 1 of the line. This command also compiles and links C source code, but the code is read directly from the .mi file rather than from a separate source file. The C source text follows standard C syntax. In fact, it is written out to a temporary file, which is then compiled as if a code command had been used. Multiple $code commands are allowed. Note that every shading function must also be declared; see below.

    link "filename" 
 

Like the code command, the link command attaches external shaders to mental ray, which can then be used as shading functions. While the code command accepts ``.c'' files as filename, the link command expects either object files ending in ``.o'', or dynamic shared object (DSO) files ending in ``.so''. Object files are linked, while DSOs are just attached without any preprocessing. DSOs are the fastest way of attaching an external shader, and require no compilers or development options, which are sometimes sold separately by system vendors. (For system and development software requirements, see the Release and Installation Notes.) However, not all systems support DSOs. The -link command-line option provides an alternative way of linking object files and DSOs. Any number of files can be linked. Note that every shading function must also be declared; see the description of the declare statement in the Shader Declaration section below. If DSOs are to be linked on SGI machines, the LD_LIBRARY_PATH environment variable must include the path of the .so file to be linked; see ``Dynamic Linking of Shaders''.
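
A typical pattern, sketched here with hypothetical file names, links a compiled shader library and then includes the file containing its declarations:

    link "myshaders.so"
    $include "myshaders.mi"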

    delete "entity_name" 
 

Delete a named scene entity, such as objects, materials, lights, textures, instances, and instance groups. Declarations and shaders cannot be deleted. It is possible to delete an entity and recreate it with the same name, but this breaks all links. For example, if a light is created and then an instance is created for it, naming the light, the link between instance and light is broken when the light is deleted and recreated. The instance retains a dangling link that will cause havoc during later processing. The delete command should be used only for entities that disappear permanently. All instances and instance groups that contained the name must be updated before the name is deleted.

Instead of deleting and recreating an entity, an incremental change should be used by prefixing the entity definition with the incremental modifier. This has the additional advantage that the entity retains all contents that are not modified during the new incremental definition. For example, an incremental change to a camera containing nothing but a new frame number specification will leave the camera unchanged except for the frame number. As an exception, objects and instgroups are cleared first because merging is not generally useful in these cases.

    render "root_instgroup_name" "  camera_inst_name" "options_name" 
 


This statement renders the scene. The names of the root instance group, a camera instance entity (which must also have been attached to the root instance group), and an options entity must be given.
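
Combined with incremental changes, this supports the frame-by-frame animation loop described at the beginning of this chapter. A minimal sketch with invented entity names:

    render "rootgrp" "cam_inst" "opt"      # frame 1

    incremental camera "cam"
        frame 2
    end camera

    render "rootgrp" "cam_inst" "opt"      # frame 2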

Scene Entities

A .mi file contains commands and scene entities in any order, with the restriction that an entity must be defined before it can be referenced. All entities are named; all references are done by name. The following entities can be defined:


     options         options 
     camera          camera: output files, aperture, resolution, etc.
     texture         procedural texture or texture image 
     material        shading, shadows, volumes, environments, contour etc. 
     light           light 
     object          polygonal or free-form surface geometry 
     instance        places objects, lights, cameras, and groups in 3D space
     instgroup       groups instances; the nodes of the scene DAG 
     shader          optional named shaders 
 


All of these can be defined in any place, as long as they are not nested (the definition of an entity must be completed before the next entity can be defined). All these entities can also be incrementally changed by introducing the definition with the incremental keyword, which tells mental ray to re-define an existing entity instead of starting a new one. The contents of the existing entity become the defaults for the new one.


options
This entity contains rendering options such as the shadow mode, tracing depth, sampling algorithm and its parameters, acceleration algorithm and its parameters, dithering and other modes.

camera
The camera is a description of the camera characteristics such as focal length and aperture that determine the field of view.

texture
Textures apply 2D or 3D color, bump, transparency, displacement, and other maps to objects. Textures are referenced by materials only; their interpretation is up to the material shader. Textures may appear verbatim in the file, may be read from a texture file, may be fully procedural, or may be a combination of textures.

material
Materials describe the color, transparency, reflection, and other properties of objects, in addition to listing light instances and textures. Materials name several shading functions for material properties, shadow casting, environment mapping, volume shading, and displacement mapping, as well as a number of property flags.

light
Lights illuminate objects. Lights are referenced in materials only (through the light instance); they are not applied directly to objects. All lights name a shading function; there are predefined shaders for point, spot, and directional lights, optionally in combination with various types of area light sources.

object
Objects describe the actual geometry. Objects reference materials to determine the visual appearance of the surface. Various types of geometry descriptions are supported, such as polygonal objects and free-form surfaces. By default, all objects are defined in a local coordinate space (object space), usually with the origin at the center of the object. Options exist to switch to camera space instead.

instance
When an object, light, camera, or instance group is defined, it does not become part of the scene. It must be placed in 3D space with an instance, which contains a transformation matrix, inherited parameters, and several flags. It is possible to have more than one instance reference the same object, light, camera, or instance group; this is called ``multiple instancing''. The instanced entity then appears several times in the final processed scene. It is also possible to specify a ``geometry shader'' as the instanced item instead of an object; the geometry shader is expected to create a scene element defined in the local coordinate space of the instance.

instgroup
Instance groups are used to group instances so they can be instanced as a unit. Instance groups form the nodes of the scene DAG (directed acyclic graph), which contains all entities used during processing. The instance group at the root of the DAG is called the ``root instance group.'' Since instance groups can themselves be instanced, the scene DAG can be built with any number of levels and sub-DAGs, each with its own local coordinate space.

shader
Shaders are user-provided functions used for a variety of purposes. They are not normally named; they are defined where they are needed, such as in a material. An unnamed shader is also called an ``anonymous shader.'' The shader statement allows giving a name to a shader; a named shader can be used in any place where an anonymous shader is expected by giving its name, introduced by an equals sign.

All scene entities are described in more detail in the following subsections.


Options

    options "name" 
        option_statements 
    end options 
 


Options contain rendering modes. An options entity must be specified to render a scene. There is a variety of option_statements that can be listed in the options block. Most of them can be overridden with an appropriate command-line option; see the section Command Line Syntax. The following option_statements are supported:


Sampling Quality

contrast r g b [ a ]
The contrast controls supersampling. If neighboring samples differ by more than the color r, g, b, a, supersampling is done as specified by the sampling parameters (see below). The default for a is the average of r, g, and b. The recursive supersampling algorithm controlled by samples modifies the contrast based on the recursion level: at sample level 0, the contrast is used directly; at sample level 1, the contrast is doubled (effectively requiring a higher contrast to force another subdivision), and so on. Negative levels divide the contrast, i.e. use a fraction 1/2, 1/4, and so on. In general, the contrast is multiplied by 2^level at supersampling level level, which is bounded by samples. The default is 0.1 0.1 0.1 0.1.

This is the primary means of image quality control. Typical values are 0.1 for r, g, b, and a. Values such as 0.2 or 0.3 reduce quality; lower values increase quality. Values less than 0.05 do not further increase quality in most cases. r, g, b, a can be specified separately to allow physiologically correct contrast values; the human eye is much more sensitive to different shades of green than to blue and red, and can only poorly distinguish shades of blue. The a value should be set to 1.0 if the matte (alpha) channel is not needed; it is also possible to set a lower than r, g, b to generate matte channels with a higher quality than the color image. If the a value is missing, it is set to the average of r, g, and b. Note that for high-quality rendering, the samples parameters must also be adjusted.

time contrast r g b [ a ]
The time contrast controls temporal supersampling for motion-blurred scenes. It works similarly to the spatial contrast parameter explained above: if neighboring samples in time differ by more than the color r, g, b, a, supersampling is done. The default for a is the average of r, g, and b. Using values for time contrast that are higher than contrast can speed up motion blur rendering at the price of grainier images, without degrading the quality of spatial antialiasing. The default is 0.2 0.2 0.2 0.2, much higher than the spatial contrast.

samples [minint] maxint
This statement determines the minimum and maximum sample rate. Each pixel is sampled at least 2^min times and at most 2^max times in each direction. If min is 0, each pixel is sampled at least once. Positive values increase the sample rate; negative numbers reduce the sample rate to less than one initial sample per pixel (infrasampling). min must be at least -2, which means that at least one sample per 4x4 pixels is taken; -2 is the default. If min is chosen too small, small features may be lost if all samples happen to miss them (if a feature is found just once in any pixel of a task, mental ray will analyze the feature and render it correctly).

If a filter camera statement is used to set a filter other than box 1 1, min samples must be set to -1 or greater.

If no min value is given, max - 2 is used by default. The defaults for min and max are -2 and 0, respectively. It is recommended to use max values larger than or equal to min + 2; the difference should not be higher than 3. Typical values for min and max are -2 0 for low-quality preview rendering, -1 1 for medium-quality rendering, and 0 2 or 1 3 for high-quality renders. Note that while this option offers simple control of rendering quality, it is recommended to control quality with the contrast option, which allows much finer control and deals more gracefully with high-contrast cases where the samples option can leave aliasing due to the hard cutoff. The samples statement should be used only as a hard sampling limit.

filter box|triangle|gauss width [height]
The filter statement specifies how multiple samples in recursive sampling mode are to be combined. The filter defaults to a box filter of width and height 1. The box filter can be replaced with a triangle filter or a gaussian filter centered on the pixel. The filter size in pixel units can also be specified; if no height is given, the width is used. Larger filter sizes result in softer images. Typical values are 1.0 for box filters, 2.0 for triangle filters, and 3.0 for gaussian filters. The box filter sums all samples in the filter area with equal weight. The triangle filter has the shape of a pyramid centered on the pixel. The gauss filter weights the samples using a gauss curve that is cut off at an ellipse centered on the pixel; it has a spherical shape. Large sizes and filters other than box reduce rendering speed. The filter sizes should not be smaller than 1.0. In order to use filters, min samples must be set to -1 or greater and max samples must be set to 1 or greater, or a warning will be printed and the filter statement is ignored. Note that the sample defaults do not satisfy these conditions. The filter shapes of the three filters are:


[Figures: filter shapes of the box, triangle, and gauss filters.]


jitter jitter
The jittering factor introduces systematic variations into sample locations. Without jittering, samples are taken at the corners of pixels or subpixels. Jittering displaces the samples by an amount calculated by lighting analysis, limited to jitter pixels. This is used to reduce artifacts. Jittering is turned off by default, or by specifying a jitter of 0.0. Jittering works best in ray tracing mode. In Quasi-Monte Carlo mode (the default of the -smethod command-line option), the jitter value is always set to 1.0 if jittering is turned on.
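
Taken together, a medium-quality options block using these sampling controls might look like the following sketch (the options name is arbitrary; the values follow the recommendations above):

    options "medium_quality"
        contrast 0.1 0.1 0.1 1.0
        samples -1 1
        filter triangle 2.0
        jitter 1.0
    end options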

Tessellation Quality

approximate technique [ minint maxint ] all
This statement overrides all approximations for base surfaces (i.e. the surface before applying displacement), free-form surfaces or polygons, in geometric objects. See Approximations in the Objects section, which describes the available approximation techniques. Here is a brief summary of technique, which is a list of one or more of the following:

             view 
             tree 
             grid 
             [ regular ] parametric u_subdiv [ v_subdiv ]
             length edge 
             distance dist 
             angle angle 
             spatial [ view ] edge 
             curvature [ view ] dist angle 
 

As in object approximation statements, the subdivision limits min and max specify how often a triangle may be subdivided. The defaults for min and max are 0 and 5, respectively; 5 is a very high value. In objects, the approximation technique is followed by the surface or curve it applies to; in the options, the keyword all indicates that an options approximation overrides all object approximations.

approximate displace technique [ minint maxint ] all
This statement overrides all approximations for displacement maps in geometric objects. Both kinds of approximation overrides are useful for temporarily reducing tessellation quality for previews, which reduces tessellation and rendering time without redefining all objects, for example by specifying:

             approximate regular parametric 1.0 1.0 0 2 all 
             approximate displace regular parametric 1.0 1.0 0 2 all 
 

Motion Blur

shutter shutter
This statement specifies the shutter open time. A shutter value of 0.0 turns motion blurring off; values greater than 0.0 turn it on. The shutter value scales the motion vectors attached to object vertices; if shutter is 1.0, each vertex moves the distance given by its motion vector and is blurred in the image over this distance. Values less than 1.0 shorten this path.

Trace Depth

trace depth reflectint [ refractint [ sumint ] ]
reflect limits the number of recursive reflection rays. If it is set to 0, no reflection rays will be cast; if it is set to 1, one level is allowed but a reflection ray cannot be reflected again, and so on. Similarly, refract controls the maximum depth of refraction and transparency rays (which implement transparency with and without index of refraction). Additionally, it is possible to limit the sum of reflection and refraction rays with sum. For example, if 3 3 4 is given, an eye ray may be reflected 3 times, or refracted 3 times, or reflected twice and then refracted twice, or any other combination that sums to at most 4. The defaults are 2 2 4. Note that custom shaders may override these values.

Shadows

shadow off
This statement disables all shadows.

shadow on
Simple shadows are enabled. This is the most efficient and least flexible of the three shadow modes. If shadows overlap because multiple objects obscure the light source, the order in which these objects are considered (and their shadow shaders are called) is undefined. If one object is found to completely obscure the light, no other obscuring objects are considered. This statement turns off shadow sorting and shadow segments. Also see shadowmap motion below.

shadow sort
This shadow mode enables shadow sorting. It is similar to the preceding shadow mode, but ensures that the shadow shaders of obscuring objects are called in the correct order, object closest to the illuminated point first. This mode is slightly slower but allows specialized shaders to record information about obscuring objects. If no such special shader is used, this mode offers no advantage over simple shadow on.

shadow segments
Like shadow sort, the shadow shaders are called in order. Additionally, shadow rays are traced much like regular rays, passing from one obscuring object to the next, from the light source to the illuminated point. Each such ray is called a shadow segment. This slows down rendering, but is required if volume effects should cast shadows (such as certain complex shaders like fur and smoke volume shaders).

shadowmap on|off
This flag turns shadow maps on or off for the entire rendering. Shadow map parameters are specified for each light source. The default is off because shadow maps, while often significantly faster, always assume opaque objects.

shadowmap rebuild on|off
Determines whether all shadow maps should be recomputed. If this option is off (the default), shadow maps are loaded from files or reused from previously rendered frames if possible. If this option is on, no shadow map is reused; everything is recomputed.

shadowmap motion on|off
Determines whether shadow maps should be motion blurred, such that moving objects cast shadows along the path of motion. Turning this option off can make rendering of shadow maps slightly faster. The default is on.
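
For instance, an options fragment that enables segmented shadows for volume effects, bounds the recursion as in the trace depth example above, and leaves shadow maps off could read as follows (a sketch; the options name is arbitrary):

    options "shadow_setup"
        shadow segments
        trace depth 3 3 4
        shadowmap off
    end options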

Rendering Algorithms

trace on|off
Normally, mental ray uses a combination of a scanline algorithm and ray tracing to calculate samples of the scene. If trace off is specified, ray tracing is disabled, and mental ray relies exclusively on the scanline algorithm. Without ray tracing, reflection rays and refraction rays cannot be cast, and lens shaders cannot modify the ray origin and direction. However, scanline transparency (which, unlike refraction, does not support changing the ray direction based on the index of refraction of the material) and environment maps will work when ray tracing is turned off. Motion blurring and shadows are also affected if ray tracing is turned off. Ray tracing is turned on by default.

scanline on|off
This statement allows turning the scanline algorithm off to force mental ray to rely exclusively on ray tracing. This will slow down rendering in most cases. By default, scanline is turned on.

acceleration bsp
Selects the binary space partitioning (BSP) ray tracing algorithm. This algorithm is often, but by no means always, faster. It is controlled by the bsp size and bsp depth statements. The BSP algorithm is the default.

acceleration grid
Selects the grid rendering algorithm. It provides faster preprocessing, especially on multiprocessor systems. Memory usage is more conservative than with the BSP algorithm. Speed is comparable to BSP but more scene-dependent. It is controlled by the grid size statement.

acceleration raycl
Selects the ray classification ray tracing algorithm. This algorithm is recommended for very large scenes. It operates with a constant acceleration memory size, controlled by the raycl memory and raycl subdivision statements.

bsp size sizeint
The maximum number of primitives in a leaf of the BSP tree. This statement is used only if binary space partitioning is enabled; it has no effect on the ray classification algorithm. Larger leaf sizes reduce memory consumption but increase rendering time. The default is 4.

bsp depth depthint
The maximum number of levels in the BSP tree. This statement is used only if binary space partitioning is enabled. Larger tree depths reduce rendering time but increase memory consumption, and also slightly increase preprocessing time. The default is 24. If there are too many triangles in the scene to fit into the BSP tree with the size specified by bsp size and bsp depth, the bsp size value is disregarded and larger leaves are created. This slows down rendering significantly. Larger bsp depth values of 30, 40, or even 50 often massively improve rendering speed in BSP mode for larger scenes.

bsp memory memoryint
The maximum memory in megabytes used in the BSP preprocessing. A value of zero indicates that there is no limit on the memory consumption; this is the default. This flag is useful only on multiprocessor machines, since the memory consumption increases with the number of rendering threads. When the specified amount of allocated memory is reached, mental ray will prevent threads from being scheduled for preprocessing, thus reducing the memory requirements.
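
As a sketch, a BSP setup for a large scene might raise the tree depth as suggested above; the numbers are illustrative only:

     acceleration bsp 
     bsp size   10        # larger leaves, less memory 
     bsp depth  40        # deeper tree for large scenes 
     bsp memory 0         # no preprocessing memory limit 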

raycl subdivision subdivint subdiv_2dint
If ray classification is enabled, mental ray uses a ray tracing algorithm that subdivides the space of all rays. The optimal subdivision is determined automatically by a built-in scene analysis. The raycl subdivision statement can be used to modify the result of this analysis; arguments of 0 leave the calculated subdivision unchanged, positive numbers increase and negative numbers reduce the subdivision. subdiv controls general subdivision; subdiv_2d controls primary (eye) and shadow rays. This option has no effect if the BSP algorithm is used.

raycl memory memoryint
In ray classification mode, this statement sets the amount of memory to be used for acceleration data structures to memory megabytes on each CPU. It has no effect if the BSP algorithm is used. This statement allows presetting the amount of rendering acceleration memory independently of scene complexity without sacrificing speed. The default is 6 megabytes, which is sufficient for most scenes. Even for extremely large scenes, little can be gained from memory sizes greater than 12 megabytes. Note that this statement does not affect the amount of memory used for the scene description, which depends on the complexity of the geometry in the current frame.

grid size sizeint
Adjusts the scene-dependent default grid voxel size. The default is 1.0. Larger values increase the number of voxels and shrink each voxel accordingly.

Feature Disabling

lens on|off
Ignore all lens shaders if set to off. The default is on.

volume on|off
Ignore all volume shaders if set to off. The default is on.

geometry on|off
Ignore all geometry shaders if set to off. The default is on.

displace on|off
Ignore all displacement shaders if set to off. The default is on.

output on|off
Ignore all output shaders if set to off. The default is on. File output statements are not affected. Note that all five disable options also affect shaders installed by phenomena (see phenomenon). This means that a phenomenon can fail if it installs cooperating shaders that rely on each other's existence, and one of them is disabled with these options. Phenomenon writers must allow for this case. The purpose of these options is to allow fast preview rendering.

merge on|off
Ignore all merge epsilons and all connections in the scene.

Caustics

caustic on|off
Caustics are turned on or off; they are off by default. Caustics are lighting effects caused by focused light rays, such as the irregular light patterns on the floor of a swimming pool. Caustics are also described in the Caustics chapter. Note that caustics are only computed for light sources that specify caustic photons explicitly. The material shader that receives the caustics must also cooperate, and the object to receive caustics must have the caustic flag set.

caustic accuracy N
Controls how many photons should be used to compute the intensity of the caustic. The default is 64. Increasing this number makes the caustics more blurry but also less noisy. A larger number of photons does, however, also increase the rendering time. Decreasing the number of photons has the opposite effect. For fast previewing of caustics it can be useful to use N=20.

caustic filter box|cone [filter_const]
Apply filtering to the caustics to make them sharper. Using a cone filter with the default filter_const of 1.1 generally makes the caustics in the model look sharper. Increasing filter_const makes the caustics more blurry; decreasing it makes them even sharper but also slightly more noisy. filter_const must be larger than 1.0.

photon trace depth reflectint [refractint [sumint]]
photon trace depth is similar to trace depth except that it applies to photons (see photon trace depth). reflect thus limits the number of recursive reflection photons. If it is set to 0, no photons will be reflected; if it is set to 1, one level is allowed but a photon cannot be reflected again, and so on. Similarly, refract controls the maximum depth of refracted photons. Additionally, it is possible to limit the sum of reflected and refracted photon levels with sum. Note that custom shaders may override these values.

photonmap file "filename"
Use filename for the photon map. If the photon map file does not exist, it is created and saved. If it exists, it is loaded and used.

photonmap rebuild on|off
If a filename is specified for the photon map (using the photonmap file option), it is normally loaded and used if the file exists. If the photonmap rebuild option is on, any existing file is ignored, the photon map is recomputed, and the file is overwritten.
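
Putting these statements together, a hypothetical options fragment for a caustics rendering might read as follows; the file name and all values are placeholders:

     caustic on 
     caustic accuracy 100 
     caustic filter cone 1.1 
     photon trace depth 3 3 4 
     photonmap file "caustic.pmap" 
     photonmap rebuild off 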


Frame Buffer Control

desaturate on|off
If a first-generation material shader returns a color whose RGB components are outside the range [0, 1], mental ray clips the color to a legal range. Negative component values are clipped to 0. If any of R, G, and B exceed 1, they are either set to 1 (if desaturation is turned off), or R, G, and B are faded towards white (if desaturation is turned on). Alpha is always set to 1 if it exceeds 1, or to the maximum of R, G, and B if any of them exceed alpha. Desaturation is turned off by default.

premultiply on|off
Normally, mental ray uses premultiplied colors, which means that red, green, and blue are stored after being multiplied by alpha. For example, opaque white is (1, 1, 1, 1) and 80% transparent white is (0.2, 0.2, 0.2, 0.2). If premultiplication is turned off, this is not done, and 80% transparent white is stored in the frame buffer as (1, 1, 1, 0.2) and written like this to output image files. However, this has no effect on the shader interface and the scene definition, which always use premultiplied colors regardless of this option. Turning premultiplication off preserves more precision for highly transparent colors, especially if only 8 bits per component are generated; since the shader interface and the scene definition operate on floating-point numbers, there is no need for this there. The default is on.

dither on|off
mental ray supports 8, 16, or 32 bits per color component. In some cases, 8 bits per component, as supported by all popular picture file formats, can cause visible banding when the floating-point color values calculated by the material shader are quantized to the 8-bit values used in the picture file. Dithering mitigates the problem by introducing noise into the pixel such that the round-off errors are evened out. Note that this can cause run-length encoded picture files to be larger than without dithering. Dithering is turned off by default.

gamma gamma_factor
Gamma correction can be applied to rendered color pixels to compensate for output devices with a nonlinear color response. All R, G, B, and alpha component values are raised to 1 over gamma_factor. The default gamma factor is 1.0, which turns gamma correction off.

field even|odd|off
Field rendering is a technique that allows smooth animations on interlaced video displays. To reduce flicker, video displays first display only every other scanline of the picture, and then the remaining scanlines in a second sweep. Each sweep is called a field; two fields make one frame. Since sweeps occur at one half of the frame rate, animated objects may have moved between sweeps; not taking this into account results in rough animations. By default, mental ray renders full frames, resulting in a non-interlaced output picture (field off). If field rendering is set to even, every consecutive pair of rendered frames is combined such that the first frame contributes the even scanlines (counted from the top), and the second frame contributes the odd scanlines. This is reversed if field rendering is set to odd. The decision of which frame is the first or second is based on the frame number defined in the camera; the first field has an odd frame number (usually 1). If field rendering is enabled, there must be an even number of frames in the input file.
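
For example, a frame buffer setup for interlaced video output with gamma correction might look like this; all values are illustrative:

     desaturate  on 
     premultiply on 
     dither      on 
     gamma       2.2 
     field       even 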

Scene Geometry

camera space
All geometry is expected to be defined in camera space. Camera space assumes that the camera is at the coordinate origin (0, 0, 0) and looks down the negative Z axis. This means that geometry will typically have negative Z coordinates. This is the default. In camera space mode, instance transformations have no effect.

object space
All geometry is expected to be defined in object space. Each object, light, and camera has its own coordinate space, typically but not necessarily with the coordinate origin (0, 0, 0) in the center of the object. The object coordinate space is positioned and oriented in world space with the instance transformation matrix (every object, light, and camera requires an instance). Object space allows multiple instancing, where the object is placed in the scene more than once using multiple instance entities.


Contours

contour store shader
If the camera contains a contour output statement (see the Contour Rendering section), contour rendering is enabled and a contour store function must be defined. This function stores information about the future contour edge, such as color, depth, normal, and other local information that is later used by the contour contrast function to decide where the contour lines should be drawn, and by contour shaders to decide which colors and thicknesses the contours should have.

contour contrast shader
If contour rendering is enabled, a contour contrast function must also be defined. It decides where the contour lines should be drawn based on values stored by contour store functions. The contrast function compares two such value sets at a time. See the Contour chapter in this manual for details.
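
A minimal sketch of enabling contour rendering in the options block follows; the shader names and the parameter are hypothetical placeholders for actual contour store and contrast shaders:

     contour store    "my_contour_store" () 
     contour contrast "my_contour_contrast" ("zdelta" 1.0) 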


Miscellaneous

face front|back|both
The front side of a geometric object in the scene is defined to be the side its normal vector points away from. By specifying that only front-facing triangles are to be rendered, speed can be improved because fewer triangles need to be tested against a ray. This works well unless there are objects whose back side is seen by refracted or reflected rays; with face front, the back side would not be visible. The default is face both, which works best if volume effects (see volume shader) are used, because they usually depend on closed volumes.

task size sizeint
This option specifies the size of the image tasks during rendering. Smaller task sizes are convenient for previewing, but also increase the overall rendering time. This option can also be used to optimize load balancing for parallel rendering. If the task size is not specified, an appropriate default value is used.

inheritance "function_name"
To use inheritance, a user-provided inheritance function must be specified. The function_name is the name of a C function linked to mental ray with a link command. No user-defined parameters are passed. The inheritance function is called for every pair of instances of which one is the parent (one level higher up in the scene DAG) of the other. The inheritance function must compute a set of inherited parameters from the parameters stored in these two instances. It is called even for instances that contain no parameters and for top-level instances; in this case the corresponding parameter argument pointer is zero. Inheritance functions are not regular shaders; they are usually written by translator writers who need to emulate the inheritance methods of the language being translated.


Cameras

    camera "name" 
        camera_statements 
    end camera 
 

A camera describes a view of the scene,
including the list of files to write, the lens shaders to use, volume shaders to be used as the global atmosphere or fog, global environment shaders that control what happens to rays that leave the scene, and other parameters.
Cameras are scene entities that need to be placed in the scene with an instance entity. In object space mode (see options entity above), the location of the camera in world space is determined by the camera instance transformation. Note that the camera instance must be attached to the root instance group of the scene. See below for information on instance groups.


Cameras contain output statements that specify output shaders and output files to write to disk, and control which frame buffers mental ray creates and maintains during rendering. More than one output file can be created, and output shaders such as filters can be listed that operate on the final rendered image before it is written to a picture file. outputs is one or more output statements. Output statements are very similar to shader lists, like lens shader statements, but the syntax is different to allow type specifications and output file names:

     output ["datatype"] "filetype" "filename" 
     output "datatype" "shader_name" (parameters) 
 

The first kind writes a picture to a file named filename, using file format filetype. Normally, file formats imply a data type, but the defaults can be overridden by naming an explicit datatype. For example, the file type "pic", which stands for a SOFTIMAGE picture file, implies the data type "rgba".

The second kind of output statement calls an output shader, such as a filter, that may operate on all available frame buffers. Here, the datatype may be a comma-separated list of types if the shader requires multiple frame buffers. Each type can be prefixed with a ``+'' or ``-'' to turn padding on or off. (Padding is interpolation for color, depth, and normal images, and taking the maximum for label images. Padding is on by default for color images and off by default for depth, normal, and label images.) For example, a shader that filters the RGBA image with a filter whose size depends on the distance of objects needs both the interpolated RGBA buffer and the interpolated depth buffer, and would have a data type "rgba,+z". mental ray creates all types of frame buffers requested by at least one output statement of either kind.

A special data type "contour" can be specified that enables contour rendering. Special contour output shaders must be specified that pick up the contour information from the contour cell frame buffer and compute a color image, which they can either put into the regular color frame buffer or composite on top of it. In the latter case, a single rendering phase creates a color image with contours. The color frame buffer can then be written to an image file using a regular image output statement. There is also a built-in contour output shader that creates a PostScript file instead of a color image. See the Contour chapter in this manual for details and examples.
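
As an illustration, the following pair of output statements first runs a depth-dependent filter shader on the color frame buffer and then writes a SOFTIMAGE picture file; the shader name "depth_filter" and its parameter are hypothetical:

     output "rgba,+z" "depth_filter" ("radius" 2.0) 
     output "rgba" "pic" "frame1.pic" 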

There is a variety of camera_statements that can be listed in the camera. Some of them can be overridden by specifying an appropriate command-line option; see the section Command Line Syntax.

There are four camera statements that accept shaders: output, lens, volume, and environment. As with all types of shaders, more than one shader can be listed, or more than one such statement can be given, to attach multiple shaders (or output files, in the case of the output statement) to each type. In an incremental change (the incremental keyword is used before the camera keyword), the first statement of each of the four types resets the list from the previous incremental change instead of adding to it, as multiple statements inside the same camera ... end camera block would.

The following camera_statements are supported:

focal distance | infinity
The focal distance is set to distance. The focal distance is the distance from the camera to the viewing plane. The viewing plane is the plane in front of the camera that the rendered scene is projected onto; its edges correspond to the edges of the rendered image. However, objects between the camera and the viewing plane will still be rendered; a common approach is to place the viewing plane in the middle of the interesting objects in the scene and then set the aperture such that it is a bit larger than the horizontal extent of the objects in camera space. If infinity is used in place of the distance, an orthographic view is rendered. An orthographic view turns off perspective; all camera rays are parallel. View-dependent surface tessellation is not possible in orthographic mode.

aperture aperture
The aperture is the width of the viewing plane. The height of the viewing plane is aperture divided by aspect. Together with the focal and aspect viewdefs, aperture defines the lens of the camera.

aspect aspect
This is the aspect ratio of the camera. The default is 1.33. In camera space, aperture is the width of the viewing plane and aperture divided by aspect is the height. The viewing plane is divided into pixels as specified by the resolution viewdef, so the aspect will result in nonsquare pixels if it is not equal to the X resolution divided by the Y resolution. For example, to render a PAL image at a resolution of 720 by 576 pixels, at the 4:3 image ratio defined by the PAL standard, pixels are slightly wider than tall, by a factor of 576/720 * 4/3 = 1.0667. If the aspect ratio is corrected by this number, objects will appear undistorted on a PAL video display (but not on a computer display with square pixels).


resolution xint yint
Specifies the width and height of the output image in pixels.

window x_lowint y_lowint x_highint y_highint
Only the sub-rectangle of the image specified by the four bounds will be rendered. All pixels that fall outside the rectangle will be left black.

clip hither yon
The hither (near) and yon (far) planes are planes parallel to the viewing plane that delimit the rendered scene. Points outside the space between the hither and yon planes will not be rendered (this does not apply to infinite-radius environment maps because they are not geometric objects). The clip statement specifies the distance of the hither and yon planes from the camera. The defaults are 0.0001 for the hither distance and 1000000.0 for the yon distance.

volume [shader_list]
This statement specifies volume (atmosphere) shaders (see volume shader) . The atmosphere affects all rays passing through the space outside objects by attenuating the color of the ray. It is possible to specify a volume shader for the inside of objects too, by naming a volume shader in the material statement (see above). If no shader_list is specified, the existing volume shader list is deleted; this is useful in incremental changes. If a list is given, it replaces the current list if this is the first volume statement in the camera block, or it is appended to the current list otherwise.

environment "environment_shader_name"
(parameters) This statement specifies environment shaders. Environment shaders control the color returned by primary rays that, after leaving the camera, never strike any object in the scene. They are similar to environment shaders named in materials, which control reflection rays cast by the material shader that leave the scene without striking another object (or exceeding the trace depth). If no shader_list is specified, the existing volume shader list is deleted; this is useful in incremental changes. If a list is given, it replaces the current list if this is the first volume statement in the camera block, or it is appended to the current list otherwise.

lens "lens_shader_name"
(parameters) Lens shaders (see lens shader) simulate lenses by changing the camera. If no lens shader is present, the camera is a pinhole camera that casts rays from the origin in all directions that pass through the viewing plane, or an orthographic camera that casts parallel rays if focal infinity is specified. A lens shader accepts the origin and direction of the camera ray, modifies them, and casts a new camera ray. Examples for lens shaders includes fish-eye lenses that exaggerate the direction vector in a nonlinear way (there is a code example in the Shader section). Multiple lenses can be specified in the camera; the n-th lens shader receives the origin and direction computed by the n-1st lens shader. If no shader_list is specified, the existing volume shader list is deleted; this is useful in incremental changes. If a list is given, it replaces the current list if this is the first volume statement in the camera block, or it is appended to the current list otherwise.

frame frameint [time] [ field fieldint ]
Every camera should contain the current frame number frame. The time in seconds can optionally be specified as time. Optionally, a field number field can be specified; by convention, field should be 0 when rendering frames, 1 when rendering the first (odd) field, and 2 when rendering the second (even) field (see field rendering). If the field modifier is missing, the field number defaults to 0. mental ray currently does not use any of these values but makes them available to shaders.
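
The following camera entity combines several of the statements above into one example; the instance placement is omitted, and all names and values are illustrative placeholders:

     camera "cam" 
         output "rgba" "pic" "frame1.pic" 
         focal      50.0 
         aperture   44.72 
         aspect     1.333 
         resolution 640 480 
         clip       0.1 10000.0 
         frame      1 
     end camera 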


Textures

    scalar texture "texture_name" [widthint heightint [depthint] ] bytes ... 
    [ local ] [ filter [scale_const]] scalar texture "texture_name" "filename" 
    scalar texture "texture_name" shader_list 
 

    color texture "texture_name" [widthint heightint [depthint] ] bytes ... 
    [ local ] [ filter [scale_const]] color texture "texture_name" "filename" 
    color texture "texture_name" shader_list 
 

    vector texture "texture_name" [widthint heightint ] bytes ... 
    [ local ] vector texture "texture_name" "filename" 
    vector texture "texture_name" shader_list 
 

Textures are lookup functions. They come in two flavors: lookups of two-dimensional texture or picture files or literal bytes, and procedural lookups. File textures require a file name parameter or a byte list; procedural textures require a shading function parameter. There are three types of texture functions: textures computing scalars, colors, and vectors. Which one is chosen depends on what the texture is used for. Textures are used as parameters to other shaders, typically
material shaders. A material shader could, for example, use a color texture to wrap a picture around an object, or a scalar texture as a transparency or displacement map, or a vector texture as a bump map. The actual use of the texture result is entirely up to the shader that uses the texture. The built-in SOFTIMAGE material shader soft_material, for example, uses arrays of color textures only.

All of the above syntax variations define a texture texture_name. The texture_name should be quoted to avoid reserved words and to allow non-alphabetic characters. This is the name that the texture will later be referenced as.

Non-procedural textures can be defined by specifying the width and height of the texture and an optional depth (bytes per component, 1 or 2, default 1), followed by a list of width * height * depth hexadecimal two-digit bytes, most significant byte first if depth is 2, in RGBA order for colors and UV order for vectors. Note that the brackets around the sizes are literally part of the .mi file, while the skinny brackets around depth denote that the depth is optional and are not part of the .mi file.

Non-procedural textures can also be defined by naming a texture or picture
file; for a list of allowed file formats, see the section on Available Output File Formats. In this case, the sizes (width, height, and depth) are read from the file. If the local keyword is not present, the file is read once on the master host and then transmitted over the network to all slave hosts that participate in the rendering. With the local keyword, only the file name is transmitted to the slave hosts; this requires the filename to be valid on all slave hosts but reduces network transfer times drastically if many texture files or very large texture files are used. Maximum speed improvements are achieved if filename is not on an NFS-mounted file system (NFS stands for Network File System, distinguishable by the nfs type in the output of the Unix df command).

The filter keyword, if present, enables texture filtering based on texture pyramids, a technique comparable to mip-map textures. Filtered textures are preprocessed before rendering begins and use approximately 30% more memory. Filtering should be used when the texture is large and seen at a distance, such that every sample covers many texture pixels. Without filtering, widely spaced samples ``overlook'' the areas between the samples; filtered textures perform a filter operation to take the skipped areas into account. The compression of the texture on the viewing plane can be scaled by the optional scale value if necessary.

When loading a texture image, mental ray checks whether the texture is memory-mappable (see memory-mapped textures). This is the case if the texture file has the special uncompressed .map format. If so, the texture is not loaded into memory but mapped into virtual memory. Memory-mapped textures use no physical RAM and no swap space, but they use virtual memory. Memory mapping should be used for large textures that are not used often (i.e., many or most of their pixels are not sampled, or the textured object is small or far away from the camera).

Procedural textures are defined by naming a shading function with parameters; the shading function can either be one of the built-in functions or an external function from a code or link command.
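
To summarize the three variants, a hedged example of each follows; the file names, the use of local and filter, and the procedural shader "my_checker" with its parameter are placeholders:

     # non-procedural, read on the master host and transmitted 
     color texture "wood" "/maps/wood.rgb" 

     # non-procedural, read locally on each slave host, with filtering 
     local filter color texture "marble" "/maps/marble.map" 

     # procedural, evaluated by calling the named shader 
     color texture "checker" "my_checker" ("scale" 4.0) 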

When the material shader (or any other shader) evaluates a texture by calling a texture evaluation function, the program either looks up non-procedural textures in the range [0, 1) in each dimension, or calls the named shader in the procedural case. The shader is free to interpret the point for which it evaluates the texture in any way it wants, two-dimensional or three-dimensional.


Materials


     material "material_name" 
         [opaque] 
         shader_list 
         [displace [shader_list]] 
         [shadow [shader_list]] 
         [volume [shader_list]] 
         [environment [shader_list]] 
         [contour [shader_list]] 
         [photon [shader_list]] 
         [photonvol [shader_list]] 
     end material 
 


Materials determine the look of geometric objects. They are referenced by material_name in the geometry definition in object statements (see below). Lights and textures cannot be referenced by objects; they are referenced by the material which uses them to compute the color of a point on the object's surface. All built-in material shaders accept textures and light instances as shader parameters.

When a primary ray cast from the camera hits an object, that object's material shader (the first, mandatory, shader_list) is called. The material shader then calculates a color (and certain other optional values such as labels, depths, and normals that can be written to special output files). This color may then be modified by the optional volume shader if present. The resulting color is stored in the output frame buffer, which is written to the output picture file when rendering has finished. In order to calculate the color, the material shader may cast secondary reflection, refraction, or transparency rays (see secondary ray), which in turn may hit objects and cause other (or the same; multiple objects may share a material) material shaders to be called. The material shader bases the decision whether to cast secondary rays on its parameters, which are part of the scene description and may contain parameters such as the material's diffuse color or its reflectivity and transparency, light instances, and textures. The parameters depend entirely on the material shader. In this sense, material shaders are ``primary'' shaders that get help from ``secondary'' texture and light shaders.

It is possible to specify a shader type such as shadow without following it with a shader_list. This is useful if an incremental change is done to the material. The incremental change leaves the contents of the material undisturbed, so the shadow shader list remains intact. It can be replaced by specifying a new one, but it can only be deleted with a shadow keyword not followed by any shaders. In an incremental change, the first statement (say, volume) first resets the old volume list; every subsequent volume statement in the same material block adds to the list.

The material_name should be quoted to avoid reserved names or if it contains non-alphabetic characters (see shader parameters). The opaque flag, if present, informs mental ray that this material is not transparent (i.e., it does not cast refraction or transparency rays and always sets its alpha result value to 1); this allows certain optimizations that improve rendering speed. The material shader and its parameters are mandatory.

There are several optional functions that can be listed in a material. The displacement shader is a function returning a scalar that displaces the object's surface in the direction of its normal. In version 1.9, displacements are possible only on free-form surfaces, which must have a sufficiently fine approximation to reveal details of the displacement map. Polygons and polygon meshes, as well as free-form surfaces, can be adaptively displaced in version 2.0 of mental ray.

The shadow shader is called when a shadow calculation is done and the shadow ray from the light source towards the point in shadow intersects with this material. The shadow shader then changes the color of the ray, which is initially the (possibly attenuated) color of the light, to another color, typically a darker or tinted color if the material is colored glass. It returns black if the material is totally opaque, which is also the default if no shadow shader is present. Shadow shaders are usually reduced versions of the material shaders; they evaluate transparencies and colors but cast no secondary rays.

It is possible to use the material shader as a shadow shader; material shaders can find out whether they are called as material or shadow shaders and do only the required subset in the latter case. The built-in soft_material shader is written this way. This is done by naming the material shader after the shadow keyword and giving no parameters (i.e., giving ()). mental ray will notice the absence of parameters and pass the material parameters instead. If the shadow shader has no parameters of its own, it is not defined whether it receives a pointer to the material shader parameters or a pointer to a copy of the material shader parameters.
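
A sketch of this technique follows; the shader "my_glass" and its parameters are hypothetical. The empty parameter list after the shadow keyword causes the material parameters to be passed to the shadow call:

     material "glass_mtl" 
         "my_glass" ("transparency" 0.8, "ior" 1.5) 
         shadow "my_glass" ()      # reuses the material parameters 
     end material 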

A volume shader affects rays traveling inside an object. Volume shaders are conceptually similar to fog or atmosphere shaders of other rendering programs. When a ray (either from the eye or from a light source) hits this material, the volume shader, if present, is called to change the color returned by the ray based on the distance the ray has traveled, and atmospheric or material parameters. A volume shader can also be named in the camera (see above); that shader is used for rays traveling outside objects. It is the material shader's responsibility to determine the inside and outside of objects.

The environment shader is called when a reflection or refraction ray cast by the material shader leaves the scene entirely without striking another object. There is a built-in environment shader soft_env_sphere, for example, that maps a texture on a sphere with an infinite radius surrounding the scene. (This is another example of an application of a texture; a texture name must be used as a parameter of the soft_env_sphere shader for this to work.) The camera statement also offers an environment shader; that shader is used when the ray leaves the scene without ever striking any object (or exceeding the trace depth).

If a contour shader is given, it is called when contours are enabled with an appropriate output statement in the camera entity, and certain contour store and contour contrast shaders are specified in the options entity. For more information on contour rendering see the contour chapter in this manual.

If caustics computation is enabled (see photon tracing, caustics), the photon shader is called during a preprocessing stage (before rendering) to determine the light distribution in the scene. Like shadow shaders, photon shaders without parameter lists are called with the material shader parameter lists. See the chapter on caustics in this manual for more details.

A volume photon shader affects photons traveling inside an object. When a photon hits this material, the volume photon shader, if present, is called to trace the photon through the volume. Note that materials can be replaced with phenomena (see material phenomenon, phenomenon). In all places where the name of a material may be given, the name of a shader that references a phenomenon declaration of type material is legal. Given the following scene fragment:

     declare phenomenon
          material "phen_mtl" (color "param")
          material "mtl" opaque
               "shader" ("diffuse" = interface "param")
          end material
          root material "mtl"
     end declare

     shader "mtl_sh" "phen_mtl" ("param" 1.0 0.7 0.3)

the name mtl_sh can be used like a material_name, for example in polygon or free-form surface definitions in objects. For more information on material phenomena, see the Phenomena section of the Writing Shaders chapter.

Note that there are three ways to use material shaders in a scene:

  • Every polygon and surface in an object specifies a material. The material can be omitted, in which case it defaults to that of the previous polygon or surface in the object, but this is only a syntactical simplification. Effectively, every polygon and surface brings its own material, and no material inheritance takes place (see inheritance).

  • If the object is marked tagged (see tagged flag), polygons and surfaces do not specify materials but integer labels. In this case the inherited material in the instance is used. That material can use mi_query to obtain the label and modify its behavior based on the label.

  • As a variation of the previous method, the instance may specify not a single material but a list of materials (see material list). In this case, mental ray will use the n-th material in the list if the label of the intersected primitive is n (or the first material if the label is greater than or equal to the number of materials in the list).

See the description of instances for more details on material lists and material inheritance.


Lights


Lights have a large number of optional parameters that are used if caustics or shadow maps are enabled (see shadow map, photon tracing, caustics). These techniques use a preprocessing step that analyzes how light travels through the scene. Lights that participate in this preprocessing stage must specify a number of extra parameters. For clarity, regular lights and more specialized lights are shown separately:

     light "light_name" 
         shader_list 
         [ area_light_primitive ] 
         [ origin x y z ] 
         [ direction dx dy dz ] 
         [ spread spread ] 
         [ visible ] 
     end light 
 

     light "light_name" 
         shader_list 
         [ area_light_primitive ] 
         [ origin x y z ] 
         [ direction dx dy dz ] 
         [ spread spread ] 
         [ visible ] 
         [ tag labelint ] 
 
         [ energy r g b ] 
         [ exponent exp ] 
         [ caustic photons photonsint ] 
 
         [ shadowmap [ on|off ] ] 
         [ shadowmap resolution resint ] 
         [ shadowmap samples numint ] 
         [ shadowmap softness size ] 
         [ shadowmap file "filename" ] 
     end light 


This statement defines a light source (see light). All light sources need a light shader, such as the built-in soft_light or mi_wave_light shaders, or a shader linked with a code or link command (see above). shader_list above stands for the quoted name of the shader and its parameters. Like any other shader, a parameter list (see shader parameters) enclosed in parentheses must be given. The parameters depend on the particular shader; they include the light color, attenuations, and spot light directions. The declaration of the shader determines which parameters are available in the parameters list; see the section ``User Parameter Declarations'' for details on shader parameters. mental ray distinguishes three kinds of light shaders: point lights, giving off light in all directions; directional (infinite) lights, whose light rays are all parallel in a particular direction; and spot lights, which emit light from a point along a certain direction. Point lights must define an origin but no direction, while directional lights must define a direction but no origin. Spot lights must define an origin, a direction, and a spread. The spread defines the maximum angle of the cone, along the direction, in which the spot produces illumination. The value of spread is the cosine of this maximum angle; it must be between 0 and 1. Spot lights often use a directional attenuation, but this is purely a function of the shader and is independent of the spread and direction keywords in the light definition. Point and spot lights can be turned into area light sources.

After the definition, the light source can be instanced with an instance statement that references light_name. The instance can then be referenced in parameter lists of shading functions (such as a material shading function) by listing the light instance name. Material shaders normally have an array parameter accepting one or more light instances, which they loop over to accumulate the contribution by each light (unless they rely solely on the global light list). Light instances are one of the standard data types that are available for shading function parameters. The light_name may be quoted to avoid clashes with predefined words, and to allow non-alphabetic characters.
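
For example, a spot light built on the soft_light shader mentioned above might be defined as follows; the parameter list is a placeholder, since the available parameters depend on the shader declaration:

     light "spot1" 
         "soft_light" ("color" 1.0 1.0 1.0) 
         origin    0.0 10.0 0.0 
         direction 0.0 -1.0 0.0 
         spread    0.9 
     end light 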


Any point or spot light may be turned into an area light source by naming an area_light_primitive. Area light sources generate soft shadows because shadow-casting objects may partially obscure the light source. Four types of area light primitives are supported:

     rectangle x0 y0 z0 x1 y1 z1 sampling 
     disc x y z radius sampling 
     sphere radius sampling 
     cylinder axis radius sampling 
 

The common sampling substatement is optional:

     [ u_samples v_samples [ level [ low_u_samples low_v_samples ]]] 
 

All four area light primitives are centered at the origin position given in the light definition. A rectangular area light is specified by two vectors from the center to two edges; a disc area light is specified by its normal vector and a radius; a sphere area light is specified only by its radius; and a cylinder area light is specified by its axis and radius. Note that the orientation of the rectangle, disc, or cylinder is independent of the direction and any directional attenuation the shader applies. Also note that the ends of the cylinder are not sampled.

The u_samples and v_samples parameters subdivide the area light source primitive. For discs and spheres, u_samples subdivides the radius and v_samples subdivides the angle. For a cylinder, u_samples subdivides the height and v_samples subdivides the angle. When sampling the area light source, mental ray samples one point in each subdivision, at a location precisely determined by the sample parameters and a predefined lighting distribution, and then combines the results. The default is 3 for each sample parameter, so an area light source without explicitly given sample parameters is sampled 9 times.

If the optional level exists and is greater than 0, then mental ray will use low_u_samples and low_v_samples instead of u_samples and v_samples, respectively, if the sum of the reflection and refraction trace level exceeds level. The defaults for the low levels are 2. The effect is that reflections and refractions of soft shadows are sampled at lower precision, which can improve performance significantly. Since shaders have control over the trace level in the state, they can influence the switching depth, which can be used to sample soft volume shadows less precisely, for example.
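
As a sketch, the following point light becomes a rectangular area light sampled 4 by 4 times, dropping to 2 by 2 samples beyond trace level 2; the shader parameters are placeholders:

     light "area1" 
         "soft_light" ("color" 1.0 1.0 1.0) 
         rectangle 1.0 0.0 0.0  0.0 0.0 1.0 
             4 4 2 2 2 
         origin 0.0 5.0 0.0 
     end light 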

Light sources are by default invisible. However, area lights can be made visible by adding a visible flag to the light. Any visible flags on point lights are ignored since points have no area. Light visibility cannot be inherited from the instance.

A label integer can be attached to a light using the tag statement. Labels are not used by mental ray in any way, but a shader can use mi_query to obtain the label of a light and perform light-specific operations.


The second light form is for caustics (see caustics). It requires specification of the light energy. The light energy is given as an RGB triple to allow colors, but the RGB values are typically much higher than the usual 0...1 range for colors. The number of photons emitted from this light source in the preprocessing step is determined by photons. Physical correctness demands a 1/r^2 power law for energy falloff, causing the energy received from a light source to fall off with the square of the distance r to the light source. However, the exponent parameter allows modification of the power law to 1/r^exp. For any exp other than 2, physical correctness is lost, but for achieving certain looks it is often useful to use exp values between 1 and 2 to reduce the falloff and better approximate classical, non-physically correct local illumination lights.

Caustics require specification of a caustic photons value that controls the number of samples taken during caustics preprocessing. Typical values range from 10,000 to 100,000; larger values improve accuracy and reduce blurriness.
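
A hedged example of such a caustic-emitting light follows; the energy, exponent, and photon values are illustrative only:

     light "caustic1" 
         "soft_light" ("color" 1.0 1.0 1.0) 
         origin 0.0 10.0 0.0 
         energy 800.0 800.0 800.0 
         exponent 2.0 
         caustic photons 20000 
     end light 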

Shadow maps are controlled per light source, using the information about the light source type and the information provided by the shadow map keywords. Shadow maps are supported only for spot lights with a cone angle less than 90 degrees (i.e., spread > 0) and for directional lights. A shadow map is activated for a light source by specifying the shadowmap keyword. The resolution of the shadow map, which controls the quality and the amount of memory used, is specified with the shadowmap resolution keyword, which gives the width and height of the shadow map depth buffer in pixels. The shadowmap softness and shadowmap samples keywords determine the type of shadow produced with the shadow map; if the softness is zero, a sharp shadow is generated. If the softness is larger than zero, it specifies the size, in pixels, of the region in the shadow map in which the shadow map samples are placed. This can be used to generate soft shadows. The number of samples determines the quality of the soft shadow; in general, the number of samples should be increased when the softness is increased. The shadowmap file keyword can be used to specify a shadow map file in which the shadow map is saved the first time it is computed, and from which it is loaded in subsequent uses. If the shadows in the scene change, the old shadow map files should be deleted to prevent loading and re-use of outdated shadow maps.

For spot light sources, the extent of the shadow map is determined by the spread parameter. For directional light sources, the extent of the shadow map is determined by the extent of the parts of the scene that cast shadows. For example, in a scene with small objects on a large background polygon, the small objects casting shadows should have a shadow flag, while the background polygon should not. Then the extent of the shadow map will only cover the small objects that cast shadows. If the large background polygon also has the shadow flag, the extent of the shadow map will be larger, and the shadow map will lack detail at the small objects where detailed shadows are needed.
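
A spot light with a soft shadow map, saved to a file for re-use, might be written as follows; the resolution, softness, sample count, and file name are illustrative placeholders:

     light "spot_sm" 
         "soft_light" ("color" 1.0 1.0 1.0) 
         origin    0.0 10.0 0.0 
         direction 0.0 -1.0 0.0 
         spread    0.7 
         shadowmap on 
         shadowmap resolution 512 
         shadowmap samples    16 
         shadowmap softness   2.0 
         shadowmap file "spot_sm.sm" 
     end light 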


Objects

All geometry is specified in either camera space or object space, depending on the corresponding statement in the options statement (see above). In camera space mode, the camera is assumed to sit at the coordinate origin and point down the negative Z axis, and objects are defined using camera space coordinates. In object space mode, the camera location is determined by its instance, and objects are defined in local object coordinates that are positioned in the scene with the object instance. Every object requires an instance.

The appearance of the object, such as color and transparency, is determined by naming materials in the object definition. Before a material can be used in an object, it must be defined; see above for details. Naming the material determines all aspects of the object's appearance. No further parameters, textures, or lights need to be specified; they are all part of the material definition.

There are two common approaches to ordering materials and objects: all materials may be named first and then all objects, which may simplify the implementation of material editors because all materials can be put in a separate file and included in the .mi file using a $include command (see include command); or materials and objects may be interspersed. Either way, each material definition must precede its first use.

All polygonal and free-form surface objects have the same common format in the .mi file:


     object "object_name" 
         [ visible ] 
         [ shadow ] 
         [ trace ] 
         [ tagged ] 
         [ caustic [mode]] 
         [ tag label_numberint ] 
         [ basis list ] 
         group 
             [ merge epsilon ] 
             vector list 
             vertex list 
             geometry list 
             approximation list 
         end group 
         ...             # more groups 
     end object 
 


The individual parameters are:

  • The object name object_name serves to uniquely identify the object. The name is not used by mental ray in any form except to give meaningful progress reports and error messages. Object names should be enclosed in double quotes to disambiguate them from reserved words.


  • The visible flag causes the object to be visible. Most objects will have this flag set. Not setting it will make the object invisible to primary rays (those originating from the camera), which means it will disappear from the image.

  • The shadow flag causes the object to cast shadows. The standard case is specifying both the visible and shadow flags. If an object is very complex, it may be desirable to set only the visible flag but not the shadow flag, and create a second object that resembles the first one but is much simpler and set the shadow but not the visible flag on it. The effect is that the object appears unchanged, but shadow calculations see a much simpler shadow object that casts about the same shadow as the primary visible object would.

  • The trace flag serves a similar purpose as the shadow flag. Normally, it is set along with the visible and shadow flags. It controls whether the object is visible to secondary (reflected or refracted) rays (see secondary ray). If the reflecting or refracting objects are fuzzy or only slightly reflective or refractive, it may result in a considerable speedup to make reflection and refraction rays see a much simpler object than primary rays would. As with the shadow flag, this is achieved by not setting the trace flag in the primary, high-definition object, and creating a second object that roughly resembles the primary one and has the trace flag (and perhaps the shadow flag) but not the visible flag set.

  • The tagged flag changes the way geometry is stored. Normally, every polygon, surface, and triangle comes with its own optional material. If an object specifies no materials, the material is inherited from the instance (see material inheritance). Objects marked tagged do not store materials at all and always rely on the inherited instance material; instead, they permit specifying a non-optional label integer in place of the material in polygon and surface definitions. (Non-optional means that a tagged object must contain one label integer for every polygon and surface.) This label can be accessed by shaders during rendering (i.e., not in displacement or output shaders) using the miQ_GEO_LABEL mode of the mi_query function. The idea is that a shared material distinguishes parts of the object by label integer, instead of attaching a different material to each polygon or surface.

  • The caustic flag lets the object cast caustics (see caustics). A caustic is an illumination effect caused by light that undergoes specular reflections or refractions before it hits a diffuse object. For example, a water surface casts irregular light patterns on the floor of a swimming pool. To simulate this effect, the water surface and floor objects must have the caustic flag set, the material shader of the floor object must be written to pick up caustic light, and the light source must specify appropriate energy and caustic photons settings. Refer to the Caustics chapter in this manual for details.

The mode argument controls the caustic operation: 1 enables caustic casting, 2 enables caustic receiving, 3 enables both, and 0 enables neither. In the pool example, the water surface should have mode 1 and the floor should have mode 2. If the caustic keyword is given without a mode argument, the mode defaults to 3. If no caustic keyword is given, caustics default to mode 0.

  • The tag specifies an arbitrary 32-bit number that identifies the object. mental ray normally uses the term label; the keyword tag is retained for backwards compatibility. (The term tag is used by mental ray to identify entries in the scene database; this is of concern only to shader writers.) By specifying an appropriate output statement in the camera (see above), it is possible to cause mental ray to write a label file that, for each pixel, contains the label code of the object that the camera ray hits first. Exactly which label is stored is under the control of the material shader; using the label of the foremost object regardless of reflections and refractions is merely the default behavior.

  • The basis list is a list of bases to be used in free-form surface descriptions. Only curves and surfaces use bases. For a list of supported bases, see below. The defined bases can be used in all groups that follow, until the end of the object.

  • Finally, a list of object groups contains the actual geometric representation. The decision whether to put all geometry into a single object group or to use different groups for different geometric entities is of importance only for free-form surfaces. If two surfaces appear in the same group, they may be connected if the appropriate connect statements are used (see below) or if adjacency detection (edge merging) takes place. Surfaces in different groups cannot be connected. Connecting surfaces means that they will be approximated such that there is no crack between them (see approximation); they will form one continuous combined surface in the range of the connection. The merge epsilon that can be specified in the group determines the maximum gap between two surfaces that still leads to the automatic connection of both surfaces. The smaller the epsilon, the closer any two surfaces must be to become merged. The results of the automatic edge merging may strongly depend on a judicious choice of the merge epsilon. By default, the merge epsilon is 0.0 and no edge merging is computed. Generally, it is recommended to use only a single object group in any object and create multiple objects instead.

At the end of each object group, approximation statements can be given that control the tessellation method. They are valid for both polygonal and free-form surface object groups. In polygonal object groups, the approximation is used only for polygons whose material contains a displacement shader. Free-form surfaces are always controlled by their approximations; see below for details.

The visible, shadow, trace, and caustic flags (see visible flag, shadow flag, trace flag, caustics) can be overridden by the instance using the standard inheritance mechanism. Instances can specify that a flag in the instanced object is turned on or off, or that it is left unchanged. The object flags are used only if all the instances from the root of the scene DAG down to the object leave the flag unchanged.

Object groups contain the actual geometry. All geometry needs vector lists and vertex lists. The vector list contains 3D vectors that can describe points in space, normals, texture vertices, basis vectors, or motion vectors. Vectors are anonymous; they are triples of floating-point numbers separated by whitespace, without inherent meaning. They are numbered beginning with 0. Numbering restarts at 0 whenever a new object group starts.

mental ray also accepts a compressed binary format for vectors. Instead of three floating-point numbers, a sequence of 12 bytes enclosed in backquotes is accepted. These 12 bytes are the memory image of three floats in IEEE 754 format, using big-endian byte order. This format is intended to increase translation and parsing speed when mental ray is connected to a native translator; it should not be used in files modified with text filters. Many filters and editors refuse to accept files containing binary data, or corrupt them.

Vertices build on vectors. In the .mi format, there is no syntactical difference between polygon vertices and control points vertices for free-form surfaces; both are collectively referred to as ``vertices'' in this discussion. All vertices define a point in space and optional vertex normals, motion vectors, and zero or more textures and basis vectors:

    v   indexint 
    [ n indexint ] 
    [ d indexint indexint [ indexint [ indexint indexint ] ] ]
    [ t indexint [ indexint indexint ] ]
    [ m indexint ] 
    [ u indexint ] 
    ... 
 

v
specifies the point in space,

n
specifies the vertex normal (ignored when the vertex is used as a curve or surface control point),

m
specifies the motion vector (the distance the point moves during the shutter open time specified in the options statement), and

t
specifies a texture vertex with an optional X/Y basis vector pair for bump map calculation. There can be up to 64 t lines for any given v vertex. (The texture and basis vectors are ignored when the vertex is used as a curve or surface control point. Texture and basis information for surfaces is defined using a ``texture surface'', described below.)

d
specifies a first and/or second surface derivative. First derivatives describe the UV parametric gradient of a surface; second derivatives describe the curvature. They are normally defined on surfaces but can be given for polygons with the d keyword. If d is followed by two indices, they are taken to reference the first derivatives dP/du and dP/dv (with P being the point in space); if three indices follow, they are taken to reference the second derivatives d2P/du2, d2P/dv2, and d2P/du dv; and if five indices follow, the first two describe the first derivative and the next three the second derivative. Derivatives are not used by mental ray; they are made available to shaders only.

u
specifies a user vector. No constraints are imposed on user vectors. mental ray does not operate on them in any way; they are passed through with the vertex and can be picked up by the shader.

Every vertex begins with a v statement and ends with the next v statement or with the start of the geometry description. If the vertex is to be used as a control point, it is not meaningful to specify a vertex normal or any other optional vector except motion vectors. All occurrences of index above index into the vector list; 0 is the first vector in this group. References of different types (for example, v and n) may not reference the same vector. As stated before, all vectors are 3D. If the third coordinate is not used (as is the case for 2D texture vertices, for 2D curve control points, and for 2D surface special points) it should be set to 0.0. If both the second and third coordinates are unused (as is the case for 1D curve special points), they should be set to 0.0.

Vertices themselves are numbered, independently of vectors. The first vertex in every group is numbered 0. The geometry description is referencing vertices by vertex index, just like vertices are referencing vectors by vector index. This results in a three-stage definition of geometry:

1.
List of vectors

2.
List of vertices

3.
List of geometry

The reason for this three-stage process is that it allows both sharing of vectors and sharing of vertices. This is best illustrated with an example. Consider two triangles ABC and ABD sharing an edge AB. (This example will use the simplest form of polygon syntax that will be described later in this section.) The simplest definition of this two-triangle object is:

    object "twotri" 
        visible 
        group 
            0.0 0.0 0.0 
            1.0 0.0 0.0 
            0.0 1.0 0.0 
            1.0 0.0 0.0 
            1.0 1.0 0.0 
            0.0 1.0 0.0 
 

            v 0 
            v 1 
            v 2 
            v 3 
            v 4 
            v 5 
 

            p "material_name" 0 1 2 
            p 3 4 5 
        end group 
    end object 
 

The first three vectors are used to build the first three vertices, which are used in the first triangle. The remaining three vectors build the next three vertices, which are used for the second triangle. Two vectors are listed twice and can be shared:

    object "twotri" 
        visible 
        group 
            0.0 0.0 0.0 
            1.0 0.0 0.0 
            0.0 1.0 0.0 
            1.0 1.0 0.0 
 

            v 0 
            v 1 
            v 2 
            v 1 
            v 3 
            v 2 
 

            p "material_name" 0 1 2 
            p 3 4 5 
        end group 
    end object 
 

The order of vector references is noncontiguous to ensure that the second triangle is in counter-clockwise order. Two vertices are redundant and can also be removed by sharing:

    object "twotri" 
        visible 
        group 
            0.0 0.0 0.0 
            1.0 0.0 0.0 
            0.0 1.0 0.0 
            1.0 1.0 0.0 
 

            v 0 
            v 1 
            v 2 
            v 3 
 

            p "material_name" 0 1 2 
            p 1 3 2 
        end group 
    end object 
 

The need for sharing both vectors and vertices can be shown by specifying vertex normals:

    object "twotri" 
        visible 
        group 
            0.0 0.0 0.0 
            1.0 0.0 0.0 
            0.0 1.0 0.0 
            1.0 1.0 0.0 
            0.0 0.0 1.0 
 

            v 0 n 4 
            v 1 n 4 
            v 2 n 4 
            v 3 n 4 
 

            p "material_name" 0 1 2 
            p 1 3 2 
        end group 
    end object 
 

In this last example, both vector sharing and vertex sharing take place. The normal is actually redundant: if no normal is specified, mental ray uses the polygon normal. Defaulting to the polygon normal is slightly more efficient, because vertex normals are interpolated only if they are explicitly specified.

Two types of geometry can be contained in the geometry list: polygonal geometry and free-form surfaces. The next sections describe the syntax of polygonal geometry and free-form surface definitions, illustrated by examples.

An object group permits only one type of geometry, either polygons or surfaces but not both. It is recommended that objects contain only a single object group, so normally objects are either polygonal or surface objects but not both at the same time. Also, vector sharing is supported only for vectors of similar types (point in space, normal, motion, texture, basis vector, derivative, or user vector). A vector may not be referenced by vertices once as a point in space and once as a normal, for example.




Polygonal Geometry

Polygonal geometry consists of polygons. For efficiency reasons, mental ray distinguishes simple convex polygons from general concave polygons or polygons with holes. The two are distinguished by keyword:

    c ["material_name"] vertex_ref_list 
    cp ["material_name"] vertex_ref_list 
    p ["material_name"] vertex_ref_list 
    p ["material_name"] vertex_ref_list hole vertex_ref_list ... 
 

If the enclosing object has the tagged flag (see tagged flag) set, label integers must be given instead of the optional materials:

    c label_numberint vertex_ref_list 
    cp label_numberint vertex_ref_list 
    p label_numberint vertex_ref_list 
    p label_numberint vertex_ref_list hole vertex_ref_list ... 
 

The c keyword selects convex polygons without holes. The results are unpredictable if the polygon is not convex. The cp keyword is a synonym for c retained for backwards compatibility; c should be used in new translators. The p keyword also renders concave polygons correctly, and allows specification of holes, using one or more hole keywords, each followed by a vertex_ref_list. If all polygons within the same object group are simple convex polygons with three sides (i.e. triangles), mental ray will pre-process them in a more efficient manner than non-triangular polygons.

A vertex_ref_list is a list of non-negative integer indices that reference vertices in the vertex list of the group, as described in the previous section. The first vertex in the vertex list is numbered 0.

Any vertex index can be used in both polygon and hole vertex_ref_lists. A polygon with n vertices is defined by n index values in the vertex list following the material name. The order of the polygon vertices is important: a counter-clockwise ordering of the vertices yields a front-facing polygon (see back culling). The vertex list of a hole may be given in any order.
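For example, assuming that vertices 0 through 3 form an outer square in counter-clockwise order and vertices 4 through 7 a smaller square inside it (hypothetical indices and material name), a polygon with a hole and a following polygon that reuses the same material could be written as:

    p "material_name" 0 1 2 3 hole 4 5 6 7 
    c 0 4 7                   # convex polygon; inherits "material_name" 

The second polygon omits the material name and therefore uses the material of the preceding polygon, as described below.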

The material name must have been defined before the object definition that contains the polygon definition, in a statement like

    material "material_name" 
        ... 
    end material 
 

In both the definition and the reference, it is recommended to quote the material name to avoid conflicts with reserved words, and to allow arbitrary characters in the name. For a detailed description of material definitions, see the section on materials above. Once a material name has been specified for a polygon, it becomes the default material. All following polygons may omit the material name; polygons without an explicit material use the same material as the last polygon that does have an explicit material. Not specifying materials improves parsing speed.

If no material is specified at all, polygons remain without material; in this case the material from the closest instance up the scene DAG is used instead. This is called material inheritance. Tagged objects always inherit their material from the instance. Shaders can distinguish polygons by label using the miQ_GEO_LABEL mode of the mi_query function during rendering (but not in displacement shaders).

The tessellation of polygons assumes that polygons are ``reasonably'' planar. This means that every polygon will be tessellated, but the exact subdivision into triangles does not attempt to minimize curvature. If the curvature is low, different tessellations cannot be distinguished, but consider the extreme case where the four corners of a tetrahedron are given as polygon vertices: the resulting polygon will consist of two triangles, but it cannot be predicted which of the four possible triangles will be chosen.

The behavior is different for convex polygons without holes (c keyword) and polygons which contain holes or are concave (p keyword). Convex polygons without holes are triangulated by picking a vertex on the outer loop and connecting it with every other vertex except its direct neighbors. If polygons are not flagged with the c keyword but do not have any holes, an automatic convexity test is performed, and if they are indeed convex they are triangulated as described. Convex polygons with holes and concave polygons are triangulated by a different algorithm. In any case a projection plane is chosen such that the extents of the projection of the bounding box of the (outer) loop have maximal size. If the projection of the polygon onto that plane is not one-to-one, the results of the triangulation will be erroneous.

If a textured polygon's material contains a displacement map, the vertices are shifted along the normals accordingly. If an approximation statement is given, triangles are subdivided until the specified criteria are fulfilled; see the section on approximations for details.


Free-Form Surface Geometry

Free-form surfaces are polynomial patches of any degree up to twenty-one. (The algorithms used impose no inherent limit; the limit may be increased in future versions.) Supported basis types include Bézier, Taylor, B-spline, cardinal, and basis-matrix form. Any type can be rational or non-rational. Patches can be explicitly or automatically connected to one another, or may be defined to contain explicitly defined points or curves in their approximation. Various approximation types including (regular) parametric, spatial, curvature-dependent, view-dependent, and combinations are available. Surfaces may be bounded by a trimming curve, and may contain holes.

Surface geometry, like polygonal geometry, is defined by a series of sections. An object containing only surface geometry follows this broad outline:


     object "object_name" 
         [ visible ] 
         [ shadow ] 
         [ trace ] 
         [ caustic [mode]] 
         [ tag label_numberint ] 
         [ basis list ] 
         group 
             [ merge epsilon ] 
             vector list 
             vertex list 
             [ list of curves ] 
             surface 
             [ list of surface derivative requests ] 
             [ list of texture or vector surfaces ] 
             ...                         # more surfaces 
             [ list of approximation statements ] 
             [ list of connection statements ] 
         end group 
         ...                             # more groups 
     end object 
 


Curves, surfaces, approximations, and connections may be interspersed as long as names are defined before they are used. For example, a curve must come before the surface it is trimming, and an approximation must come after the surface to be approximated. Texture and vector texture surfaces must always directly follow the surface they apply to. The individual sections are:

  • The basis list must be specified at the beginning of the object definition, just before the group begins. Bases defined in this section are referenced by name in the curve and surface definitions to specify their degrees and types (Bézier, B-spline, etc.).

  • The vector list in the group is a list of (x, y, z) vectors used to build control points later. This section is the same as the vector section used to build vertices for polygonal geometry.

  • The vertex list that follows the vector list builds control points out of the vectors. This also works like the vertex list for polygonal geometry, except that no normals, texture vertices, or derivatives can be defined here (i.e., no n, t, or d statements may appear). Normals are defined implicitly by the surface, and textures are defined by texture surfaces instead, as described below. Surface derivatives (see surface derivative) are generated if derivative keywords are present. Rational curves and surfaces specify additional weights at each vertex reference (see below).

  • Curves may be defined and used as trimming curves, hole curves, and special curves. This section is optional; if no trimming curve is defined surfaces are untrimmed and end at the boundaries specified by the ranges of the bases used. Trimming a surface means to cut away portions that fall outside an outer boundary curve; holes cut away portions inside the hole curve. Special curves are curves that are always included in the tessellation; they can be used to define features like sharp creases that need to be tessellated consistently. Surfaces may also be connected along trimming curves.

  • The surface geometry list consists of surface statements, much like polygonal geometry that consists of p and c statements. A surface is defined by a surface statement, optionally followed by surface derivative request statements and one or more texture surface or vector surface statements.

  • Approximation (see approximation) statements give additional information about how a surface and its trimming, hole, and special curves are to be approximated. Various modes such as parametric, regular parametric, curvature-dependent, and view-dependent approximation can be selected, along with the precision. If there are approximation statements in the options statement (see Options, Tessellation Quality above), they override any approximation statements in the objects.

For a description of vector lists and vertex lists, refer to the Object section above. Note that no vector or vertex should be shared between both polygonal and surface geometry statements. That is, no vertex, and no vector used for building it, may be used as a polygon vertex and a surface control point at the same time. It is recommended but not required that an object contains only polygonal or surface geometry but not both, because there is a slight speed reduction if mental ray has to sort out which vertex and vector is used for which geometry.

Bases

When surfaces and curves are present within an object group, it is mandatory that at least one basis has been defined within the object. Bases define the degree and type of the polynomials (denoted by N_i,n below) to be used in the description of curves or surfaces. Curves and surfaces reference bases by name. Every surface needs two bases, one for the U and one for the V parameter direction. Both can have a different degree, but must have the same type (for example, rational Bézier in U and cardinal in V is not allowed). There are five basis types:

    basis "basis_name" [ rational ] taylor degreeint 
    basis "basis_name" [ rational ] bezier degreeint 
    basis "basis_name" [ rational ] cardinal 
    basis "basis_name" [ rational ] bspline degreeint 
    basis "basis_name" [ rational ] matrix degreeint stepsizeint basis_matrix 
 

A parametric representation may be either non-rational or rational as indicated by the rational flag. Rational curves and surfaces specify additional weights at each control point. This flag is optional; it can also be specified in the curves and surfaces that reference the basis.

The degree specifies the degree of the polynomials used in the description of curves or surfaces; recall that the degree of a polynomial is the highest power of the parameter occurring in its definition. When bases of degree 1 are used, control points are connected with straight lines. Cardinal bases always have degree 3. The degree and the type combined determine the length of the parameter vector and the number of control points needed for the surface. The meaning of the parameter vector differs for the different basis types; this is described in detail below.

The supported polynomial types for curves and surfaces are bezier, bspline, taylor, cardinal and matrix.

taylor specifies the basis functions

    N_i,n(t) = t^i

bezier specifies the basis functions

    N_i,n(t) = C(n,i) t^i (1-t)^(n-i)

where C(n,i) denotes the binomial coefficient ``n over i''.

cardinal specifies third-degree curves and surfaces. The cardinal splines, also known as Catmull-Rom splines, are most easily formulated as a conversion from Bézier form. If we let B_i,3(t) be the cubic Bézier basis functions (i.e., the above basis functions N_i,n(t) with n=3), then we may write the cardinal basis functions as

    N_0,3(t) = -1/6 B_1,3(t)
    N_1,3(t) = B_0,3(t) + B_1,3(t) + 1/6 B_2,3(t)
    N_2,3(t) = 1/6 B_1,3(t) + B_2,3(t) + B_3,3(t)
    N_3,3(t) = -1/6 B_2,3(t)

bspline specifies a non-uniform B-spline representation whose basis functions are given by the following recursive definition:

    N_i,0(t) = 1 if x_i <= t < x_(i+1), and 0 otherwise

    N_i,n(t) = (t - x_i) / (x_(i+n) - x_i) * N_i,n-1(t)
             + (x_(i+n+1) - t) / (x_(i+n+1) - x_(i+1)) * N_(i+1),n-1(t)

where, by convention, 0/0 = 0. (x_0, ..., x_q) is known as the knot vector. It must be specified through the parameter lists when using B-spline bases in curves and surfaces (see below).

matrix specifies the basis functions through a basis matrix (b_i,j), 0 <= i <= n, 0 <= j <= n:

    N_i,n(t) = Sum_(j=0..n) b_i,j t^j

When a curve or surface is being evaluated and a transition from one segment or patch to the next occurs, the set of control points used (the ``evaluation window'') is incremented by the stepsize. The appropriate stepsize depends on the representation type expressed through the basis matrix and on the degree.

Suppose we are given a curve with k control points {v_1, ..., v_k}. If the curve is of degree n, then n+1 control points are needed for each polynomial segment. If the stepsize is given as s, then the (i+1)-th polynomial segment will use the control points {v_(is+1), ..., v_(is+n+1)}. For example, for Bézier curves s=n, whereas for cardinal curves s=1. For surfaces, the above description applies independently to each parametric dimension. The basis_matrix specifies the basis functions used to evaluate a parametric representation. For a basis of degree n the matrix must be of size (n+1)*(n+1). The matrix is laid out in the order b_0,0, b_0,1, ..., b_0,n, ..., b_n,n.
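As an illustration of the matrix form (a hypothetical sketch, not one of the built-in types): a cubic Hermite basis, whose control points alternate between points and tangents (P0, T0, P1, T1, ...), can be written with stepsize 2, because each new segment advances past one point/tangent pair. Each row holds the coefficients b_i,0 ... b_i,3 of one basis function in powers of t:

    basis "hermite" matrix 3 2 
        1.0  0.0 -3.0  2.0      # N_0(t) = 1 - 3t^2 + 2t^3  (weight of P0) 
        0.0  1.0 -2.0  1.0      # N_1(t) = t - 2t^2 + t^3   (weight of T0) 
        0.0  0.0  3.0 -2.0      # N_2(t) = 3t^2 - 2t^3      (weight of P1) 
        0.0  0.0 -1.0  1.0      # N_3(t) = -t^2 + t^3       (weight of T1) 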

Note that the generalization to the rational case is admitted for all representations.



As an example, an object containing a nonrational Bézier surface of degree 3 in one parameter direction and degree 1 in the other parameter direction needs two bases defined at the beginning of the object like this:

    object "mysurface" 
        visible 
        basis "bez1" bezier 1 
        basis "bez3" bezier 3 
        group 
            ... 
 

The surface definition will reference the two bases by their names, bez1 and bez3.

Surfaces

A surface specifies a name and a list of control points. For both parametric dimensions it specifies a basis, a global parameter range, and a parameter list. Optionally, it specifies surface derivative requests, texture surfaces, trimming curves, hole curves, special curves and special points. Special curves and points are included as edges and vertices in the approximation (triangulation) of the surface.

        surface "surface_name" "material_name" 
            "u_basis_name" range u_param_list 
            "v_basis_name" range v_param_list 
            hom_vertex_ref_list 
            [ derivative_request ] 
            [ texture_surface_list ] 
            [ surface_specials_list ] 
 

If the enclosing object has the tagged flag (see tagged flag) set, a label integer must be given instead of a material name. See the discussion for the polygon case above. This changes the first line of the preceding syntax block to:

        surface "surface_name" label_numberint 
 

The bases used in the definition of the surface must have been defined in the basis list of the object. They are referenced by their basis_names. Their ranges consist of two floating-point numbers specifying the minimum and maximum parameter values used in the respective direction.

The parameter_lists in the basis specifications define the number of patches of the surface and the number of control points. For bases of the types taylor, bezier, cardinal and matrix, such a parameter_list consists of a strictly increasing list of at least two floating-point numbers. For bspline bases the parameter_lists specify the knot vector. If the B-spline basis to be used is of degree n, the knot vector (x_0, ..., x_q) must have at least q+1 = 2(n+1) elements. Knot values form a monotone sequence of floating-point numbers but are not necessarily strictly increasing, i.e. x_i <= x_(i+1). Moreover, they must satisfy the following conditions:

 (1)    x_0 < x_(n+1)  
 (2)    x_(q-n-1) < x_q  
 (3)    x_i < x_(i+n) for 0 < i < q-n-1  
 (4)    x_n <= t_min < t_max <= x_(q-n)  
 

where [t_min, t_max] is the range over which the B-spline is to be evaluated. Equation (1) demands that no more than n+1 parameters at the beginning of the parameter list may have the same value. Equation (2) is the same restriction for the end of the parameter list. Equation (3) says that in the middle of the parameter list, at most n consecutive parameters may have the same value. To generate closed B-spline curves, it is often necessary to write a parameter list where the first n and last n parameters in the list produce initial and final curve segments that should not become part of the curve; in this case equation (4) allows choosing a start and end parameter in the range bounded by the first and last parameter of the parameter list.
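For example, here is a sketch (with hypothetical names) of a single Bézier-like cubic patch expressed as a degree-3 B-spline. The knot vector has the minimal 2(n+1) = 8 values and satisfies conditions (1) through (4) for the range [0.0, 1.0]:

    basis "bsp3" bspline 3 
    ... 
    surface "bpatch" "mtl" 
        "bsp3" 0.0 1.0    0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0 
        "bsp3" 0.0 1.0    0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0 
        0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 

Per the table below, a B-spline basis with p = 8 parameters yields p - n - 1 = 4 control points per direction, hence the 16 vertex references.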

The number of control points per direction can be derived from the number of parameters p, the degree of the basis n, and the step size s. Their total number can be calculated by multiplying the numbers taken from the following table for each of the U and V directions.

  type            min # of parameters     # of control points 
 
  Taylor          2                       (p - 1) * (n + 1) 
  Bézier          2                       (p - 1) * n + 1 
  cardinal        2                       p + 2 
  basis matrix    2                       (p - 2) * s + n + 1 
  B-spline        2(n + 1)                p - n - 1 

Note that only certain numbers of control points are possible; for example, if the U basis is a degree-3 Bézier, the number of control points in the U direction can be 4, 7, 10, 13, and so on, but not 3 or 5. For B-spline bases of degree 3 the minimum number of parameters is 8 corresponding to 4 control points.
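Applying the table to a mixed-degree case (a hypothetical fragment): a surface with a degree-3 Bézier basis in U listing three parameters (two patches) and a degree-1 Bézier basis in V listing two parameters (one patch) needs 7 * 2 = 14 control points:

    surface "strip" "mtl" 
        "bez3" 0.0 1.0    0.0 0.5 1.0    # p=3: (p-1)*n + 1 = 7 control points in U 
        "bez1" 0.0 1.0    0.0 1.0        # p=2: (p-1)*n + 1 = 2 control points in V 
        0 1 2 3 4 5 6 7 8 9 10 11 12 13  # 7 * 2 = 14 vertex references 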

Each vertex reference in the hom_vertex_ref_list is an integer index into the vertex list of the current group in the object (index 0 is the first vertex). When the surface is rational, homogeneous coordinates must be given with the control points, by appending a floating-point weight to every vertex reference integer in the hom_vertex_ref_list. There are two methods for specifying weights: either a simple floating-point number that must contain a decimal point to distinguish it from an integer index, or the keyword w followed by a weight value that need not contain a decimal point. Weights are used only if the surface is rational but ignored otherwise. If a weight in a rational surface is missing, it defaults to 1.0. The surface specials list is used to define trimming curves, hole curves, special curves, and special points (vertex references). A surface may be further modified by approximation and connection statements, as described below.
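For instance (a hypothetical fragment, assuming a rational basis), the two weight syntaxes can express the same hom_vertex_ref_list; here vertices 0 and 2 keep the weight 1 and vertex 1 is given a weight of 0.5:

    0 1.0   1 0.5    2 1.0      # weights as floats containing a decimal point 
    0 w 1   1 w 0.5  2 w 1      # weights with the w keyword 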

For example, an object with a simple degree-3 Bézier surface can be written as:

     object "mysurface"
          visible
          basis "bez3" bezier 3
          group
               0.314772   -3.204608  -7.744229   # vector 0
               0.314772   -2.146943  -6.932366
               0.314772   -1.089277  -6.120503
               0.314772   -0.031611  -5.308641
               -0.660089  -2.650739  -8.465791   # vector 4
               -0.660089  -1.593073  -7.653928
               -0.660089  -0.535407  -6.842065
               -0.660089  0.522259   -6.030203
               -1.634951  -2.096869  -9.187352   # vector 8
               -1.634951  -1.039203  -8.375489
               -1.634951  0.018462   -7.563627
               -1.634951  1.076128   -6.751764
               -2.609813  -1.543000  -9.908914   # vector 12
               -2.609813  -0.485334  -9.097052
               -2.609813  0.572332   -8.285189
               -2.609813  1.629998   -7.473326

               v 0     v 1     v 2     v 3       # vertices
               v 4     v 5     v 6     v 7
               v 8     v 9     v 10    v 11
               v 12    v 13    v 14    v 15

               surface "surf1" "material"
                       "bez3"  0.0 1.0   0.0 1.0
                       "bez3"  0.0 1.0   0.0 1.0
                       0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
          end group
     end object

First, 16 vectors are defined, each of which is used to build one vertex (control point). Next, a surface is defined that uses basis bez3 for both the U and V parameter directions. Since the surface is built from only one 4x4 Bézier patch, the parameter vector after the basis range has only length 2. If there had been two patches in the U direction and three in the V direction, the bases would have been referenced as

                "bez3"  0.0 1.0 0.0 0.5 1.0 
                "bez3"  0.0 1.0 0.0 0.33333 0.66667 1.0 
 

Alternatively, the parameter vector may be given as

                "bez3"  0.0 2.0 0.0 1.0 2.0 
                "bez3"  0.0 3.0 0.0 1.0 2.0 3.0 
 

by changing the parameter range of the basis. This has no influence on the geometry of the surface, but generates UV texture coordinates in a different range (here, [0.0, 2.0] x [0.0, 3.0]). However, a different parametrization does affect the texture surface range (see below), and the range of trimming, hole, and special curves (which do not define their own ranges but borrow the range from the surface they apply to).

The optional surface_specials_list that completes the surface definition is a sequence of trimming curves, hole curves, special curves, and special points as described in the next section.

For a complete example including approximations and connections, refer to the end of this chapter.

Surface Derivatives

mental ray can automatically generate surface derivative vectors if requested. First derivatives describe the UV parametric gradient of a surface; second derivatives describe the curvature. They are computed and stored only if requested by derivative_request statements in the surface definition:

        derivative numberint [ numberint ] 
 

There can be one or more derivative statements that request first and/or second derivatives. Valid values for number are 1 and 2, for first and second derivatives, respectively.
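For example, a surface requesting both first and second derivatives can use a single statement or two separate ones:

    derivative 1 2          # request first and second derivatives 

    derivative 1            # equivalent: two separate statements 
    derivative 2 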

mental ray does not use derivative vectors itself but makes them available to shaders. First derivatives are presented as two vectors (dP/du and dP/dv, with P being the point in space); second derivatives are presented as three vectors (d2P/du2, d2P/dv2, and d2P/du dv). This is the same format that can be explicitly given for polygonal data using the d keyword in vertices. Surfaces always compute the vertex derivatives analytically; explicit vertex derivatives given by d keywords are ignored.

Texture Surfaces

A plain surface statement defines the geometry of the surface. If a texture is to be mapped on the surface, it is necessary to include texture surfaces. A texture surface defines a mapping from raw UV coordinates to texture coordinates as used by shaders. A vector texture is a variation of a texture surface that additionally defines a pair of basis vectors; it is used for bump mapping.

The texture or vector texture directly following a surface defines texture space number 0, the next defines texture space number 1, and so on, exactly like the first t statement after the v statement in a vertex used for building polygonal geometry defines texture space number 0, the next t defines texture space number 1, and so on. Basically, texture and vector texture surfaces replace the t statements used by polygonal geometry, because attaching textures to control points that usually are not part of the surface is not useful.

Texture spaces end up in the state->tex_list array, where texture shaders can access them to decide which texture is mapped which way. Texture space 0 is the first entry in that array; it is used by the shader for the first texture listed in the texture list in the material definition. In general, there is one texture space per texture on a material, although shaders making nonstandard use of texture spaces could be written.

The syntax for texture surfaces is a simplified version of geometric surfaces. The texture_surface_list in the grammar summary at the beginning of the ``Surfaces'' section above expands to zero or more copies of the following block:

        [ volume ] [ vector ] texture 
            "u_basis_name" u_param_list 
            "v_basis_name" v_param_list 
            vertex_ref_list 
 

Unlike geometric surfaces, no surface name or material name is given. Bases are given as for geometric surfaces. Texture surfaces use the ranges of the geometric surface they are attached to; the ranges are not repeated in the texture surface basis statements. The vertex_ref_list follows the same rules as the geometric surface's vertex_ref_list. Texture surfaces have no specials such as trimming curves or holes.

The optional volume keyword in the texture surface definition disables seam compensation. It should be used for 3D textures where each texture vector should be used verbatim. If the volume flag is missing, the tessellator detects textures that span the geometric seam on closed surfaces, and prevents rewinding. Consider a sphere with a 2D texture that is shifted slightly in the U parameter direction: a triangle might have u0 = 0.0 on one side and u1 = 0.1 on the other side. If the texture is shifted towards higher u coordinates by 0.05, u0 and u1 will map to texture coordinates t0 = 0.95 and t1 = 0.05, assuming an otherwise normal UV mapping. Even though u0 < u1, t0 >> t1, causing a fast ``rewind'' of the texture. Seam compensation corrects t1 to 1.05. This is undesirable for 3D textures, which should have the volume keyword set. Most problems with strangely shifted textures are caused by inappropriately used or missing volume keywords.

The optional vector keyword in the texture surface definition is a flag that causes bump basis vectors to be calculated during tessellation. This flag must be used if the texture surface is used for a bump map; all built-in shaders supporting bump maps expect such a pair of bump basis vectors.
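For instance, a bilinear texture surface like the one in the example below becomes a bump-mapping vector texture simply by prepending the vector keyword; a 3D texture would use volume instead (fragment only; the bases and vertex indices are those of that example):

    vector texture              # also compute bump basis vectors 
        "bez1"  0.0 1.0 
        "bez1"  0.0 1.0 
        16 17 18 19 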

For a geometric surface S that maps parameters (u,v) into an object's coordinates (x,y,z) and a texture surface T that maps the same parameters into texture coordinates (s,t), the bump map basis vectors are the derivatives

    d(S o T^-1)/ds    and    d(S o T^-1)/dt

of the composite map S o T^-1 from the texture coordinates into object coordinates. They are not normalized and not necessarily orthogonal to each other.

The normal perturbation as suggested by Blinn (cf. [Watt 92] sec. 6.4, pp. 199--201) is given by

    D = M' - M = (dH/ds) A - (dH/dt) B

with

    A = N x d(S o T^-1)/dt
    B = N x d(S o T^-1)/ds

where

    N = M / |M|

is the normalized surface normal with

    M = d(S o T^-1)/ds  x  d(S o T^-1)/dt

(x denoting the cross product), and H a height field defining the bump map, usually the intensity of the picture stored in the texture.

This is an example for the simplest of all texture surfaces, a bilinear mapping:

    object "mysurface" 
        visible 
        basis "bez1" bezier 1 
        basis "bez3" bezier 3 
        group 
 

            ...                 # 16 vectors used for the surface 
            0.0 0.0 0.0         # vector number 16 
            0.0 1.0 0.0         # vector number 17 
            1.0 0.0 0.0         # vector number 18 
            1.0 1.0 0.0         # vector number 19 
 

            ...                 # 16 vertices used for the surface 
            v 16                # vertex number 16 
            v 17                # vertex number 17 
            v 18                # vertex number 18 
            v 19                # vertex number 19 
 

            surface "surf1" "material" 
                "bez3"  0.0 1.0 0.0 1.0 
                "bez3"  0.0 1.0 0.0 1.0 
                0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 
 

                texture 
                    "bez1"  0.0 1.0 
                    "bez1"  0.0 1.0 
                    16 17 18 19 
        end group 
    end object 
 

This texture surface defines a bilinear mapping from the UV coordinates computed during surface tessellation to the texture coordinates. To define other than bilinear mappings, the texture surface needs to have more control points than just one at every corner of the surface. Whenever the surface tessellator generates a triangle vertex, it uses the UV coordinate of that vertex to look up the texture surface and interpolate the texture coordinate from the nearest four points of the texture surface. The resulting texture coordinate is stored with the vertex and becomes available in state->tex_list when the material shader is called because a ray has hit the surface.

If more than one texture surface is given, one texture coordinate is computed for each texture surface and stored in sequence in the generated triangle vertices. Each texture surface is said to define a ``texture space''. They are available in the state->tex_list array in the same order. The number and order of texture surfaces should agree with the number and order of textures given in the texture list in the material definition. (Note that not all material shaders support multiple textures.)

If the material name of a surface is empty (two consecutive double quotes), the surface uses the material from the closest instance (this is called material inheritance).

Curves

Curves are two-dimensional parametric curves when they are referenced by surfaces. They are used as trimming curves, hole curves, and special curves. They must be defined before the surface which references them. Curves are three-dimensional parametric curves when referenced by spacecurves. A curve is defined as:

        curve "curve_name" "basis_name" 
            parameter_list 
            hom_vertex_ref_list 
            [ special special_point_list ] 
 

The parameter_list of a curve is a list of monotonically increasing floating-point numbers that define the number of segments of the curve and the number of control points. Curve parameter lists work very much the same way as surface parameter lists except that no range needs to be provided, because they are supplied by the surfaces that reference the curve under consideration as explained in the next section. For details on parameter lists, see the sections on bases and surfaces above.

Each vertex reference in the hom_vertex_ref_list is an integer index into the vertex list of the current group in the object (index 0 is the first vertex), optionally followed by a floating-point weight. Weights are used only if the curve is rational, they are ignored otherwise. If a weight in a rational curve is missing, it defaults to 1.0. The vertices indexed by the integers in the hom_vertex_ref_list should have no normals or textures (no n and t statements), and the third component of the vector (v statement) should be 0.0 because curves are defined in UV space, not 3D space.

The optional special_point_list specifies points that are included in the approximation of the curve. After the special keyword, a sequence of integers follows that index into the vertex list, just like the integers in the hom_vertex_ref_list. The first component of the vector is used as the t parameter; it forces the point on the curve at parameter value t to become part of the curve approximation. Of course t must be in the range of parameters allowed by the surface definition.
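For example (hypothetical indices), a linear curve through three control points with two special points; the parameter values at which the special points are forced into the approximation are taken from the first vector component of vertices 5 and 6:

    curve "crease" "bez1" 
        0.0 0.5 1.0 
        0 1 2 
        special 5 6 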



Trimming, Hole, and Special Curves; Special Points

A surface may reference curves to trim the surface, to cut holes into it, and to specify ``special curves'' that become part of the tessellation of the surface. Special points in surfaces work like special points in curves, except that they provide a point in the parameter range of the surface, i.e. a two-dimensional UV coordinate, rather than a one-dimensional curve parameter. They specify single points on the surface that are to be included in the tessellation. As all curves and points are in UV space, the third component of the vectors provided for them is ignored. None of the above types of curves and points may exceed the range of (0.0, 1.0) at any point.

No two curves may intersect each other, and no curve may self-intersect. This is an important point: trimming curves and holes that do not close, or that intersect themselves or other loops, are hazardous for the tessellation routines.

Trimming, hole, and special curves (see trimming curve) and special points are defined at the end of the surface definition. The curves are composed of segments from the list of curves of the surface's group. The surface_specials_list given in the previous section is a list of zero or more of the following four items:

    trim    "curve_name" min max 
            ... 
    hole    "curve_name" min max 
            ... 
    special "curve_name" min max 
            ... 
    special vertexint 
            ... 
 

The dots indicate that each trim, hole, and special statement may be followed by more than one curve segment or vertex, respectively.

The vertex integers specify vertices from the vertex section of the current group in the current object. Such a vertex specifies the UV coordinate of the special point that is to be included in the tessellation.

Each of the three types of curves references a curve that has been defined earlier with a curve statement. If a single trim, hole, or special statement is followed by more than one curve, the resulting trimming, hole, or special curve is pieced together by concatenating the given curves. The min and max parameters allow using only part of the curve referenced. min and max must be in the range of the parameter vector of the curve which in turn must be mapped into the parameter range of the surface. The min and max parameters of two different curve pieces are independent, they only depend on the curve parameter lists. For example, a trimming curve can be built from two curves, using the first three quarters of the first curve and the last three quarters of the second curve:

    curve "trim1"
            "bez1" 0.0 1.0 2.0 3.0 4.0
            0 1 2 3 4

    curve "trim2"
            "bez1" 0.0 1.0 2.0
            3 5 0

    surface "patch1" "mtl"
            "bez3" 0.0 1.0        0.0 1.0
            "bez3" 0.0 1.0        0.0 1.0
            6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
            trim "trim1" 0.0 3.0
                 "trim2" 0.5 2.0

Both trimming curves use the basis bez1, which is assumed to be a degree-1 linear curve. Hence, trim1 connects the UV vertices 0, 1, 2, 3, and 4 with straight lines, and trim2 connects the vertices 3, 5, and 0. If these two curves are put together by the trim statement in the surface definition, all parts of the surface that fall outside the polygon formed by the UV vertices 0, 1, 2, 3, and 5 are trimmed off. The trim2 curve includes vertex 0 to close the trimming curve. Holes (see hole curve) and special curves are constructed exactly the same way. Trimming curves and holes must form closed loops but special curves are not restricted in this way.

Note that trimming and hole curves must be listed in the correct order, outside in. If there is an outer trimming curve, it must be listed first, followed by the holes. If a hole has a hole, the inner hole must be listed after the outer hole. Since curves may never intersect, there is always an unambiguous order: if a curve A encloses a curve B, A must be listed before B. Curves that do not enclose one another can be listed in any order.
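As a sketch with hypothetical curve names, a surface bounded by an outer trimming curve with one hole, listed outside in:

    surface "patch" "mtl" 
        "bez3" 0.0 1.0    0.0 1.0 
        "bez3" 0.0 1.0    0.0 1.0 
        0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 
        trim "outer" 0.0 1.0 
        hole "inner" 0.0 1.0 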

This example omits the vector and vertex parts of the group in the object. Here is an example that defines a complete object containing a surface with a trimming curve that precisely follows the outer boundary. A trimming curve that follows the outer surface boundary does not actually clip off any part of the surface, but it is still useful if surfaces are to be connected, because connections work on trimming curves.

    object "mysurface" 
        visible 
        basis "bez1" bezier 1 
        basis "bez3" bezier 3 
        group 
            ...                 # 16 vectors used for the surface 
            0.0 0.0 0.0         # vector number 16 
            1.0 0.0 0.0         # vector number 17 
            1.0 1.0 0.0         # vector number 18 
            0.0 1.0 0.0         # vector number 19 
 

            ...                 # 16 vertices used for the surface 
            v 16                # vertex number 16 
            v 17                # vertex number 17 
            v 18                # vertex number 18 
            v 19                # vertex number 19 
 

            curve "trim1" 
                "bez1" 0.0 0.25 0.5 0.75 1.0 
                16 17 18 19 16 
            surface "surf1" "material" 
                ... 
                trim "trim1" 0.0 1.0 
        end group 
    end object 
 

The trimming curve in the example is linear, using a degree-1 Bézier basis. This means that the parameter vector has five equally-spaced parameters, one for each corner in counter-clockwise order and back to the first corner to close the trimming curve. Trimming and holes always require a closed curve or sequence of curves (they can be pieced together by multiple curves as long as the pieces form a closed loop together). The results are undefined if trimming or hole loops are not closed, or intersect.

If the trimming curve were a degree-3 Bézier going through four corner points, a parameter vector with 3*5 + 1 = 16 parameters would be required (again, the 5 is the number of corners visited, including the return to the first to close the curve).

For details on the parameter vector following the basis name in the definition of the curve, refer to the section on curves above. The bases and parameter vectors for curves and surfaces follow the same rules, except that curves have no explicit range; they always use the implicit range given by the parameter list.



Approximations

Approximations are defined within an object group, and they specify how previously defined polygons, surfaces, and curves should be tessellated. When the keyword approximate is directly followed by an approximation technique, it refers to a polygon or a list of polygons; it only has an effect on displacement mapped polygons. Within an object group containing free-form surface geometry, the approximation statements are given separately for the surface itself and for curves used by the surface. The surface approximation statement sets the approximation technique for the surface itself. If the surface carries a displacement map (see displacement shader), this statement refers to the underlying geometric base surface and does not take the displacement into account. One may specify the approximation criteria on the displaced surface with an additional displace approximation statement, or even leave out the surface approximation statement altogether.

If the material of the surface does not contain a displacement shader, the displace approximation statement is ignored. A trim approximation statement applies to all trimming, hole and special curves attached to the given surface or surfaces collectively; it is equivalent to separate curve approximations for each individual curve. If the options statement specifies approximation statements for base surfaces and/or displacements, they override the approximation statements in the object. This can be used for quick previews with low tessellation quality, for example.

    approximate  
        technique [ minint maxint ] 
 
    approximate surface 
        technique [ minint maxint ] "surface_name" ...
 
    approximate displace 
        technique [ minint maxint ] "surface_name" ...
 
    approximate trim 
        technique [ minint maxint ] "surface_name" ...
 
    approximate curve 
        technique [ minint maxint ] "curve_name" ...
 

The dots indicate that there may be more than one surface_name or curve_name following the approximation statement. The given approximation is then used for all named surfaces or curves.

technique stands for one or more of the following:

        view 
        tree 
        grid 
        [ regular ] parametric u_subdiv [ v_subdiv ] 
        length edge 
        distance dist 
        angle angle 
        spatial [ view ] edge 
        curvature [ view ] dist angle 
 

tree and grid are mutually exclusive. parametric cannot be combined with any of the others except grid, which is the default for the parametric case anyway. regular can only be used together with parametric. view has no effect unless one of length, distance, spatial, or curvature is also given.

View-dependent approximation is enabled if the view statement is present. It controls whether the edge argument of the length and spatial statements, and the dist argument of the distance and curvature statements, are in camera space or in raster space.

Tree and grid approximation algorithms are available for surface approximation. The grid algorithm tessellates on a regular grid of isolines in parameter space; the tree algorithm tessellates in a hierarchy of successive refinements that produces fewer triangles for the same quality criteria. By definition parametric approximations always use the grid algorithm; all others can use either but tree is the default. tree and grid have no effect on curve approximations.

Parametric approximation subdivides each patch of the surface into u_subdiv * degree equal-sized pieces in the U parameter direction, and v_subdiv * degree equal-sized pieces in the V parameter direction. If regular is specified, the number of subdivisions of the whole surface simply equals the parameter value. v_subdiv must be present for surface approximations and must be omitted for curve and trim approximations. Note that the factor is a floating-point number, although a patch can only be subdivided an integral number of times. For example, if a factor of 2.0 is given and the surface is of degree three, each patch will be subdivided six times in each parametric direction. If a factor of 0.0 is given, each patch is approximated by two triangles.

Curves are subdivided into subdiv * degree equal pieces by the parametric approximation, and into subdiv equal pieces by the regular parametric approximation.

For displacement mapped polygons and displacement mapped surfaces with a displace statement, regular parametric has the same meaning as parametric in the approximation. For displacement mapped polygons the u_subdiv constant specifies that each edge in the triangulation of the original polygon is subdivided 2^u_subdiv times for the displacement. If a displace approximation is given for a displacement mapped surface, the initial tessellation of the underlying geometric surface is subdivided in the same way as for polygons. For example, a value of 2 leads to a fourfold subdivision of each edge. Non-integer values for the subdivision constant are admissible. Nothing is done if 2^u_subdiv is smaller than 2 (i.e. if u_subdiv < 1). The v_subdiv constant is ignored for the parametric approximation of displacement maps.

Length/distance/angle (LDA) approximation specifies curvature-dependent approximation according to the criteria specified by the length, distance, and angle statements. These statements can be given in any combination and order, but cannot be combined with parametric approximation in the same approximate statement.

The length statement subdivides the surface or curve such that no edge length of the tessellation exceeds the edge parameter. edge is given as a distance in camera space, or as a fraction of a pixel diagonal in raster space if the view keyword is present. Small values such as 1.0 are recommended. The min and max parameters, if present, specify the minimum and maximum number of recursion levels of the adaptive subdivision. The min parameter is a means to enforce a minimal triangulation fineness without any tests. Edges are further subdivided until the given criterion is fulfilled or the max subdivision level is reached. The defaults are 0 and 5, respectively; 5 is a very high number. Good results can often be achieved with a maximum of 3 subdivisions.
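For example (hypothetical surface name), the following statement subdivides until no tessellation edge of surf1 is longer than one pixel diagonal, using at most 3 recursion levels:

    approximate surface view length 1.0 0 3 "surf1" 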

For displacement mapped polygons and displacement mapped surfaces with a displace approximation statement, the length criterion in the approximation limits the size of the edges of the displaced triangles and ensures that at least all features of this size are resolved. Subdivision stops as soon as an edge satisfies the criterion or when the maximum subdivision level is reached. It cannot be ruled out, of course, that new details show up at an even finer scale, which would again lead to longer edges. This caveat about potentially missed high-frequency detail also applies to the distance and angle criteria.

The distance statement specifies the maximum distance dist between the tessellation and the actual curve or surface. The value of dist is a distance in camera space, or a fraction of a pixel diagonal in raster space if the view statement is present. As a starting point, a small distance such as 0.1 is recommended. The min and max parameters, if present, specify the minimum and maximum number of recursion levels of the adaptive subdivision.

For displacement mapped polygons and displacement mapped surfaces with a displace approximation statement, the distance criterion cannot be used in the same way because the displaced surface is not known analytically. Instead, the displacements of the vertices of a triangle in the tessellation are compared. The criterion is fulfilled only if they differ by less than the given threshold. Subdivision is finest in areas where the displacement changes. For example, if a black-and-white picture is used for the displacement map, the triangulation will be finest along the borders between black and white areas, but the resolution will be lower away from them in the uniformly colored areas. In such a case one could choose a moderately dense parametric surface approximation that samples the displacement map at sufficient density to catch small features, and use the curvature-dependent displace approximation to resolve the curvature introduced by the displacement map. Even if the base surface is triangulated without adding interior points, i.e. as if its trim curve defined a polygon in parameter space, it is still possible to guarantee a certain resolution by increasing the min subdivision level. Only the subsequent subdivisions are then performed adaptively.

The angle statement specifies the maximum angle angle in degrees between normals of adjacent tiles of a displaced polygon or of the tessellation of a surface or its displacement, or between tangents of adjacent segments of a curve approximation. Large angles such as 45.0 are recommended. The min and max parameters, if present, specify the minimum and maximum number of recursion levels of the adaptive subdivision.

Spatial approximation as specified by a spatial statement is a special case of an LDA approximation that specifies only the length statement. For backwards compatibility, the spatial statement has been retained; it is equivalent to the length statement plus an optional view statement.

Curvature-dependent approximation as specified by the curvature statement is also a special case of LDA approximation, equivalent to a distance statement, an angle statement, and an optional view statement. The spatial and curvature statements can be combined, but future designs should use length, distance, and angle directly.
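For instance (hypothetical surface name), the following two statements request the same curvature-dependent approximation; new designs should prefer the second form:

    approximate surface curvature 0.1 45.0 0 3 "surf1" 
    approximate surface distance 0.1 angle 45.0 0 3 "surf1" 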

If no approximation statement is given, the parametric technique is used by default, with u_subdiv = v_subdiv = 1 for surfaces, u_subdiv = 1 for curves, and u_subdiv = 0 for polygons.

Connections

Connections may be defined within a group to specify the connection between two surfaces along intervals of their respective trimming curves or hole curves. They may be used in place of or in addition to the edge merging performed on the group level. A connection is defined as:

    connect "surface_name1" "curve_name1" min1 max1 
            "surface_name2" "curve_name2" min2 max2 
 

This statement connects two surfaces surface_name1 and surface_name2 by connecting their trimming curves curve_name1 and curve_name2. The curves are connected only in the range (min1 ... max1) and (min2 ... max2), respectively. They share the same points, but normals, textures etc. are evaluated on the individual surfaces. Only surfaces that have trimming curves can be connected by an explicit connect statement; for an example of a simple trimming curve that goes around the edge of a surface, see the section on curves above. Trimming curves used in connections must satisfy three conditions:

  • The trimming curve or sequence of trimming curves must be closed.
  • It does not matter whether the trimming curve is oriented clockwise or counterclockwise, but if a sequence of trimming curves is used all pieces must have the same direction.

  • The trimming curves along the connected range must run in the same direction in 3D space.

The range values min1, max1 and min2, max2 must not exceed the range of the trimming curve segment as referenced by a trim statement of the corresponding surface. The minimum value must be less than the maximum value; it is not possible to satisfy the third condition by inverting the range.

Best results are obtained if the curves to be connected are close to each other in world space and have at least approximately the same length. connect is not meant to be a replacement for proper modeling. For carefully modeled surfaces it will not be necessary most of the time. Its purpose is to close small cracks between adjacent surfaces that are already not too far from each other. Topologically complex situations with several connections meeting in a point are beyond its scope.

Example

Here is an example of two surfaces that meet along one of their edges such that a gap remains. A connection is used to close the gap. The four control points defining the straight trimming curves that are connected are marked as #0, #1, #2, and #3; the control points of the second surface marked (*) have been modified slightly to create the gap.
This is a complete .mi file that can be rendered directly.


    verbose on
    $include <softimage.mi>

    options "opt"
        samples -1 1
        contrast .1 .1 .1 .1
        trace depth 2 2
    end options

    light "point" "soft_point" (
         "color"  1.0 1.0 1.0,
         "factor" 1.0)
         origin   140.189178 83.103180 50.617714
    end light

    instance "light_inst" "point" end instance

    camera "cam"
         output "pic" "x.pic"
         focal 50.000000
         aperture 44.724029
         aspect 1.179245
         resolution 500 424
         frame 1
    end camera

    instance "cam_inst" "cam" end instance

    material "mtl" opaque
        "soft_material" (
            "mode"          2,
            "shiny"         50.000000,
            "ambient"       0.500000 0.500000 0.500000,
            "diffuse"       0.700000 0.700000 0.700000,
            "specular"      1.000000 1.000000 1.000000,
            "ambience"      0.300000 0.300000 0.300000,
            "lights"        ["light_inst"])
    end material

    object "obj"
        visible shadow trace

        basis "bez1" bezier 1
        basis "bez3" bezier 3
        group "example"
            0.314772    -3.204608   -7.744229
            0.314772    -2.146943   -6.932366
            0.314772    -1.089277   -6.120503
            0.314772    -0.031611   -5.308641    #0
            -0.660089   -2.650739   -8.465791
            -0.660089   -1.593073   -7.653928
            -0.660089   -0.535407   -6.842065
            -0.660089   0.522259    -6.030203    #1
            -1.634951   -2.096869   -9.187352
            -1.634951   -1.039203   -8.375489
            -1.634951   0.018462    -7.563627
            -1.634951   1.076128    -6.751764    #2
            -2.609813   -1.543000   -9.908914
            -2.609813   -0.485334   -9.097052
            -2.609813   0.572332    -8.285189
            -2.609813   1.629998    -7.473326    #3

            0.000000    0.000000    -5.000000    #0 (*)
            1.224400    0.561979    -6.081950
            2.134028    1.155570    -6.855258
            3.043655    1.749160    -7.628566
            -0.500000   0.700000    -6.000000    #1 (*)
            0.249538    1.115849    -6.803511
            1.159166    1.709439    -7.576819
            2.068794    2.303029    -8.350128
            -1.200000   1.000000    -7.000000    #2 (*)
            -0.725323   1.669719    -7.525073
            0.184305    2.263309    -8.298381
            1.093932    2.856899    -9.071690
            -2.000000   2.000000    -7.500000    #3 (*)
            -1.700185   2.223588    -8.246634
            -0.790557   2.817178    -9.019943
            0.119071    3.410769    -9.793251

            0.0     0.0     0.0
            1.0     0.0     0.0
            1.0     1.0     0.0
            0.0     1.0     0.0

            v 0  v 1  v 2  v 3  v 4  v 5  v 6  v 7
            v 8  v 9  v 10 v 11 v 12 v 13 v 14 v 15

            v 16 v 17 v 18 v 19 v 20 v 21 v 22 v 23
            v 24 v 25 v 26 v 27 v 28 v 29 v 30 v 31

            v 32 v 33 v 34 v 35

            curve "curve1"
                 "bez1" 0.0 0.25 0.5 0.75 1.0
                 32 33 34 35 32

            curve "curve2"
                 "bez1" 0.0 0.25 0.5 0.75 1.0
                 32 35 34 33 32

            surface "patch1" "mtl"
                 "bez3" 0.0 1.0        0.0 1.0
                 "bez3" 0.0 1.0        0.0 1.0
                 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
                 trim "curve1" 0.0 1.0

            surface "patch2" "mtl"
                 "bez3" 0.0 1.0        0.0 1.0
                 "bez3" 0.0 1.0        0.0 1.0
                 16 17 18 19 20 21 22 23
                 24 25 26 27 28 29 30 31
                 trim "curve2" 0.0 1.0

            approximate surface parametric 1.0 1.0 "patch1"
            approximate surface parametric 1.0 1.0 "patch2"
            approximate trim    parametric 3.0     "patch1"
            approximate trim    parametric 3.0     "patch2"

            connect "patch1" "curve1" 0.25 0.5
                    "patch2" "curve2" 0.0  0.25
        end group
    end object

    instance "obj_inst"   "obj"   end instance

    instgroup "root"
        "light_inst" "cam_inst" "obj_inst"
    end instgroup

    render "root" "cam_inst" "opt"

Note that the trimming curves curve1 and curve2 have different orientations, one clockwise and one counterclockwise, because their control point lists are in a different order. This means that where the two trimming curves run parallel to each other, they run in the same direction in 3D space, which is a required condition for trimming curves to be connected. The trimming curves must also be closed (another condition), and so run around all four edges of the (square) surfaces. Since only one edge of each surface is connected to the other, the connection ranges select only one quarter (0.5 ... 0.25 and 0.25 ... 0.0) of each curve.





The example produces the following images, rendered first without and then with the connect statement:

[Figures: unconn_pic (rendered without connect), conn_pic (rendered with connect)]




Instances

    instance "name" 
        "entity"|geometry function
        [ hide              on|off ] 
        [ visible           on|off ] 
        [ shadow            on|off ] 
        [ trace             on|off ] 
        [ caustic           [ mode ]] 
        [ transform         [ matrix ]] 
        [ motion transform  [ matrix ]] 
        [ motion            off ] 
        [ material          "material_name" ] 
        [ material          [ "material_name" [ , "material_name" ... ] ] ] 
        [ (parameters) ] 
    end instance 
 

Instances place cameras, lights, objects, and instance groups into the scene. Without instances, these entities have no effect; they are not tessellated and are not scheduled for processing. An instance has a name that identifies it when it is placed into an instance group (see below). Every instance references exactly one entity, which must be the name of a camera, a light, an object, or an instance group. If the instanced item is a geometry shader function, the scene entity created by this special shader is used as the instanced item.

The hide flag can be set to on to disable the instance and the entity it references. This is useful for temporarily suspending an instance in order to evaluate a subset of the scene, without deleting and later recreating the suspended parts. hide is off by default.
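
For example, to temporarily hide an already defined instance with an incremental change (a sketch, assuming the incremental keyword applies to instances as it does to the cameras and lights in the scene example below):

    incremental instance "obj_inst" "obj"
        hide on         # suspend this instance and its subtree
    end instance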

The visible, shadow, trace, and caustic modes are inherited down the scene DAG. Flags in instances lower in the DAG (closer to the objects) override flags in instances higher up. The flags from the instance closest to the object are merged with the corresponding object flags; the resulting values become the effective flags for rendering. If no flags are specified in the relevant instances, only the object flags are used. For the exact definition of these flags, refer to the Object section. The caustic mode bitmap contains four bits; the desired behavior is the sum of 1 (enable caustic casting), 2 (enable caustic receiving), 4 (disable caustic casting), and 8 (disable caustic receiving). The bits 1 and 4 are mutually exclusive, as are 2 and 8. If mode is omitted, the default is 3 (enable casting and receiving).
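
For example, to make an instanced object cast caustics but not receive them, the mode is the sum 1 + 8 (a minimal sketch; the instance and object names are placeholders):

    instance "obj_inst" "obj"
        caustic 9       # 1 (enable casting) + 8 (disable receiving)
    end instance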

The transform statement is followed by 16 numbers that define a 4×4 matrix in row-major order. The matrix establishes the transformation from the parent coordinate space to the object space of the instanced entity. If the instance is directly attached to the root instance group (see below), the parent coordinate space is world space. For example, the following matrix translates the instanced entity to the coordinate (x, y, z):

    transform   1   0   0   0 
                0   1   0   0 
                0   0   1   0 
                x   y   z   1 
 

Instance transformations are ignored if the options entity explicitly sets the coordinate space to camera space, using the camera space statement. This is not recommended.

The motion transform matrix specifies a transformation from parent space to local space for motion-blurred geometry. If it is not specified, the instance transformation is used for the motion blur transformation; in this case the parent instance determines whether motion blur is active. Motion blur is activated by specifying a motion transformation anywhere in the scene DAG; this transformation is propagated through the scene DAG in the same way as the instance transformations. The motion off statement turns off all motion information inherited up to this point, as if the camera and all instances above did not have motion transforms. This can be used to disable motion transformations for a scene subtree.

If a motion transformation is specified in an object instance, the triangle vertex points of the tessellated geometry are transformed by the matrix product of the accumulated instance matrix and the inverse accumulated motion transformation matrix. The difference vector between the transformed and the untransformed triangle vertex point is used as a motion vector in local object space. If an object has motion vectors attached to its vertices, the motion vector calculated as described above is added to the object motion vector. A motion transformation can be given for both object and camera instances. If a motion transformation is specified in a camera instance, the effective motion transformation for the triangle vertices is the matrix product of the relative instance and relative camera motion transformations.
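
As a minimal sketch, the following hypothetical instance gives its object a linear motion of one unit along the x axis; the motion transform uses the same 16-number row-major layout as transform:

    instance "obj_inst" "obj"
        transform        1 0 0 0
                         0 1 0 0
                         0 0 1 0
                         0 0 0 1
        motion transform 1 0 0 0
                         0 1 0 0
                         0 0 1 0
                         1 0 0 1     # offset by one unit in x
    end instance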

The material_name is the name of a previously defined material. It is stored along with the instance. Instance materials are inherited down the scene DAG. Materials in instances lower (closer to the leaves) override materials in instances higher up. The material defined lowest becomes the default material for any polygon or surface in a geometrical object that has no material of its own.

If a bracketed, comma-separated list of material_names is given, mental ray uses the n-th material in the list if the polygon or surface label is n. If the label exceeds the length of the list, the first material in the list is used. Polygon and surface labels can be specified in object definitions that have the tagged flag set; if this flag is not set, the first material in the list is used. The list may not be empty.
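
For example, assuming three previously defined materials and a tagged object whose polygons carry labels 0, 1, and 2 (the material names here are hypothetical):

    instance "obj_inst" "obj"
        material [ "red_mtl", "green_mtl", "blue_mtl" ]
    end instance

A polygon labeled 1 is rendered with "green_mtl"; a polygon labeled 5 would exceed the list and fall back to "red_mtl".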

An instance may define parameters. Instance parameters are evaluated during scene preprocessing. Whenever the initial scene traversal finds an instance, it calls the inheritance shader defined in the options entity with the parent instance parameters and the parameters of the new instance. The inheritance shader must then compute a new parameter set, which becomes the parent parameter set for any instances found in the subtree below the new instance, if entity is an instance group (if not, no sub-instances can exist and the recursion ends). The inheritance shader is also called if there is no parent instance yet, or if the new instance contains no parameters. The final parameter set, created by the inheritance shader called for the bottom-level instance (which instances a camera, light, or object), is made available to shaders in addition to the regular shader parameters.

The instance parameters must be declared just like shader parameters. The declare command must name the inheritance function, as specified in the options entity. All instances share the same declaration.
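
A sketch of an instance carrying parameters; the parameter "tint" is hypothetical and would have to appear in the declaration of the inheritance function named in the options entity:

    instance "obj_inst" "obj"
        ("tint" 1.0 0.5 0.5)    # passed to the inheritance shader
    end instance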

If transform, motion transform, and material are given without arguments, the respective feature is turned off. This is useful for incremental changes. It is not relevant for the initial definition because these features are off by default when an instance is created.
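
For example, the following incremental change removes a previously set instance material and transformation (a sketch, again assuming the incremental keyword applies to instances):

    incremental instance "obj_inst" "obj"
        transform       # no arguments: turn the transformation off
        material        # no arguments: remove the instance material
    end instance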

The same entity may be named in more than one instance. This is called ``multiple instancing.'' If two instances name the same object, the object appears twice in the scene, even though it is stored only once in the scene database; this greatly reduces memory consumption. For example, it is sufficient to create one wheel object for a car and then instance it four times. Each of the four instances contains a different transformation matrix to place the wheel in a different location. (This implies that multiple instancing is not useful in camera space mode, because in that mode the transformations are ignored.) It is also possible to apply multiple instancing to instance groups to replicate entire sub-scenes.
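
A sketch of the wheel example; the object name "wheel" and the translations are hypothetical, and the matrices follow the translation layout shown above:

    instance "wheel_front_left" "wheel"
        transform 1 0 0 0  0 1 0 0  0 0 1 0  -1 0 1.5 1
    end instance

    instance "wheel_front_right" "wheel"
        transform 1 0 0 0  0 1 0 0  0 0 1 0   1 0 1.5 1
    end instance

    # the two rear wheels would be instanced the same way
    instgroup "wheels"
        "wheel_front_left" "wheel_front_right"
    end instgroup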

If the instanced item is a ``geometry shader'', the function is called with shader parameters and the scene element created by the shader is defined in the local coordinate space of the instance. The geometry shader is called just before tessellation takes place. The following example uses a geometry shader mib_geo_sphere:

    instance "sphere" geometry
        "mib_geo_sphere" ()
    end instance

In this example a spherical object is created procedurally. The example uses the syntax for anonymous shaders; as usual, the named shader syntax using the shader keyword, and named shader assignment using the ``='' sign, can also be used. Named shaders created inside or outside procedural object definitions are in global scope and can be shared with other objects.

For a complete example for building scene graphs with instances and instance groups, see below.


Instance Groups

    instgroup "name" 
        "name" 
        ... 
    end instgroup 
 

Every scene consists of more than one entity. There must be at least one camera and at least one object. In the simplest case, all cameras, lights, and objects can be collected into a single group, forming a ``flat scene'' because there is no hierarchy. Note that cameras, lights, and objects are never put into an instance group directly. Instead, an instance must be defined, one for each, and the instance is then put into the group. (This is why it is called an ``instance group.'')

Instance groups can be nested. An instance group is placed into a parent instance group exactly like a camera, light, or object: an instance must be defined for the child instance group, and the instance is put into the parent instance group. As with other entities, it is possible to create more than one instance for an instance group; this allows multiple instancing of sub-scenes. There is no limit on the nesting depth of instance groups.

Since the only purpose of an instance group is to serve as a container for instances, the syntax is very simple: after the name of the instance group, one or more names of instances follow. An incremental change to an instance group clears the old instance list (without deleting the instances themselves); to add or remove an instance, the incremental change must respecify the entire instance list, as shown in the sketch below.
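
For example, adding a hypothetical instance "new_inst" to the root group of the connection example requires repeating the existing members:

    incremental instgroup "root"
        "light_inst" "cam_inst" "obj_inst" "new_inst"
    end instgroup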

The top-level instance group is not referenced by any instance; it is called the root instance group and stands for the entire scene. It is passed to the render command to process the scene. More than one root instance group can exist, but only one can be processed at a time. Camera instances must always be attached to the root instance group, not to a lower-level instance group, and a camera instance may not be multiply instanced, to avoid ambiguity. Multiple camera instances can exist in the root instance group, but only one can be passed to the render command.

Contours

In order to get contours, it is necessary to link the contour.so shader library and to include the file contour.mi with a $include <contour.mi> statement; contour.mi contains the declarations of the contour shaders in contour.so.

Also, the contour store function has to be specified in the options statement. A contour store function has no parameters and is specified as

     contour store "contour_store_function" ()

Where to Place Contours

A contour contrast function specifies where a contour should appear. Like the contour store function, the contour contrast function has to be specified in the options statement. The parameters of this function specify which differences in depth or surface orientation should cause a contour. For example, to get a contour where the difference in depth is more than 1.0, where the difference in surface normal is more than 60 degrees, or between different materials, the following contour contrast function is used:

     contour contrast "contour_contrast_function_levels" (
        "zdelta"     1.0,
        "ndelta"     60.0, 
        "diff_mat"   on,
        "contrast"   on,
        "min_level"  1,
        "max_level"  1
     )

(Be aware that if zdelta or ndelta is set to a very small value, contours will also be created in large regions interior to objects, not just at their outlines.)

When diff_mat is on, contours are created between different materials. When contrast is on, contours are created where the contrast between colors is larger than the contrast specified in the options statement. The parameters min_level and max_level determine which levels of reflection and which layers of semitransparent materials get contours. When both are set to 1, as here, only the outermost materials get contours and no reflections cause contours.
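
Putting the pieces together, both the contour store and the contour contrast functions appear inside the options block; a minimal sketch based on the options entity used in the examples in this manual:

     options "opt"
         samples  -1 1
         contrast .1 .1 .1 .1
         contour store    "contour_store_function" ()
         contour contrast "contour_contrast_function_levels" (
            "zdelta"    1.0,
            "ndelta"    60.0,
            "diff_mat"  on)
     end options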

The hands in the figure show the influence the parameters of the contour contrast function have on where contours are created. Top row (left to right): large zdelta and ndelta give contours only on the outline, where the depth difference to the infinitely distant background is large; large ndelta and small zdelta give contours wherever there is even a small depth difference; small ndelta and zdelta give contours wherever there is a small change in depth or orientation. Bottom row: contours on deeper levels of materials seen through a semitransparent material; contours on reflections in a reflective material, for example the reflection of the thumb visible in the index finger.

[Figures, top row: hand.simple.outline, hand.simple.thin, hand.simple.many; bottom row: hand.simple.trans, hand.simple.refl]

Color and Width of Contours

The contour properties (color, width, etc.) depend on the object the contour is on and its material. For each material that should have a contour, a contour shader must be specified; a material will not get a contour if it does not have a contour shader. Contour colors consist of four components: red, green, and blue, and opacity. All four components are normally between 0 and 1. The width is specified as a percentage of the minimum of the image x resolution and y resolution. For example, if the image resolution is 700×500 and a contour width of 1.0 (percent) is specified, the thickness of the line becomes 5 pixels. The color, width, and other properties can be constant parameters, or they can depend on curvature, distance, color, and illumination.

A material gets a simple contour of constant color and width if it has the contour_shader_simple contour shader. For example, the following specifies red contours that are half a percent wide:

     contour "contour_shader_simple" (
        "color"  1.0 0.0 0.0 1.0,   # solid red
        "width"  0.5                # in % of image resol
     )
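
The contour statement is part of the material definition; a sketch based on the material from the scene example at the end of this manual (material shader parameters abbreviated):

     material "mtl" opaque
         "soft_material" (
             "diffuse"  .7 .7 .7,
             "lights"   ["lightinst1"])
         contour "contour_shader_simple" (
             "color"  1.0 0.0 0.0 1.0,
             "width"  0.5)
     end material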

As another example, the contour shader contour_shader_depthfade produces contours whose color and width are linearly interpolated between two values, depending on the distance to the camera. Two depths, colors, and widths are specified. If a contour point is more distant than far_z, the contour gets color far_color and width far_width. If a point is nearer than near_z, the contour gets color near_color and width near_width. If the depth is in between, the color and width are linearly interpolated. For example, to get contours that are interpolated between two percent wide red at depth -10 and half a percent wide blue at depth -25, specify:

     contour "contour_shader_depthfade" (
        "near_z"     -10.0,             # from this depth,
        "near_color"  1.0 0.0 0.0 1.0,  # color (red),
        "near_width"  2.0,              # and width (in %)
        "far_z"      -25.0,             # to this depth,
        "far_color"   0.0 0.0 1.0 1.0,  # color (blue),
        "far_width"   0.5               # and width (in %)
     )

The left figure is a black-and-white illustration of this depthfade contour shader. The right figure is a scene with two materials that have different contour types: illumination-dependent contours on the teapot and simple contours on the ``floor''.

[Figures: fourcubes.depthfade (depthfade contours), teapotcontour (teapot and floor with different contour types)]

There are many other contour shaders in contour.so, and new ones can be written by the user.

Contour Output

After the regular image has been computed, a contour output shader can get the contour line segments and use them to, for example, render a contour image or write a file with contour information. Users can write their own contour output shaders using the built-in function mi_get_contour_line.

There are three contour output shaders in contour.so. They can generate a contour image, a contour image composited over the regular image, and a PostScript file with black contours. The output shader has to be specified in the camera.

To get a contour image called mycontourimage.pic in pic format, write

     output "contour,rgba" "contour_only" ()
     output "pic" "mycontourimage.pic"

To get an image called mycontourimage2.pic (in pic format) containing contours composited over the regular image, write

     output "contour,rgba" "contour_composite" ()
     output "pic" "mycontourimage2.pic"

The contour_composite output shader has two optional Boolean parameters: glow and maxcomp. The glow parameter makes all contours become darker and more transparent near their edges, creating a glow effect. maxcomp specifies that when a contour is over another contour, the maximum of the two colors (in each color band) should be used. If maxcomp is not specified (or set off), normal alpha compositing is used. The contour_only output shader also has the glow and maxcomp parameters, and in addition it has a background parameter which determines the background color (default is black).
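
For example, to enable both options when compositing (a minimal sketch):

     output "contour,rgba" "contour_composite" (
        "glow"     on,    # fade contours near their edges
        "maxcomp"  on)    # per-band maximum instead of alpha compositing
     output "pic" "mycontourimage2.pic"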

To get a PostScript file called mycontourfile.ps with all contours in black, write

     output "contour,rgba" "contour_ps" (
        "paper_size"         4,
        "paper_scale"        1.0,
        "paper_transform_b"  0.0,
        "paper_transform_d"  1.0,
        "title"              on,
        "landscape"          on
     ) 
     output "ps" "contourimage.ps"

The PostScript file in this example gives A4 paper size at full scale. "paper_size" is an integer: 0 indicates ``letter'' size, 1 ``executive'', 2 ``legal'', 3--6 indicate ``a3'' through ``a6'', 7--9 indicate ``b4'' through ``b6'', and 10 indicates ``11x17''. The parameter paper_scale scales the PostScript output. Furthermore, the PostScript coordinates are transformed by the 2×2 matrix with rows (1 b) and (0 d), where b and d are the parameters "paper_transform_b" and "paper_transform_d". This makes it possible, for example, to compensate for printers that print with a slight skew. The Boolean title determines whether a title (consisting of file name and frame number) and a frame around the image are written. The Boolean landscape makes the output appear in landscape mode rather than portrait mode.

It is also possible to get both the regular image (without contours) and one of the above at the same time. For example, to get the regular image and an image of the contours, write

     output "pic" "myimage.pic"
     output "contour,rgba" "contour_only" ()
     output "pic" "mycontourimage.pic"

Faster Contours

If only simple outlines of objects are needed, contour_shader_simple can be used with contour_store_function_simple and contour_contrast_function_simple to get fast contour computations. Furthermore, very simple material shaders should be used (no illumination, shadow, reflection, refraction, or texture computations).
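
A sketch of the corresponding options statements; the parameter lists are left empty here on the assumption that the defaults suffice, so check the declarations in contour.mi for the actual parameters:

     options "opt"
         contour store    "contour_store_function_simple" ()
         contour contrast "contour_contrast_function_simple" ()
     end options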



Scene Example

This example renders two images of a cube, each with different camera and light settings:

    $include <softimage.mi>

    options "opt"
        samples -1 1
        contrast .1 .1 .1 .1
        trace depth 2 2
    end options

    camera "cam1"
        frame 1
        output "pic" "x.pic"
        focal 100
        aperture 144.724029
        aspect 1.179245
        resolution 500 424
    end camera

    instance "caminst1" "cam1" end instance

    light "light1"
        "soft_point" (
            "color"  1 1 1,
            "factor" 1
        )
        origin   141.375732 83.116005 35.619434
    end light

    instance "lightinst1" "light1" end instance

    material "mtl" opaque
        "soft_material" (
            "mode"          2,
            "shiny"         50,
            "ambient"       .5 .5 .5,
            "diffuse"       .7 .7 .7,
            "specular"      1 1 1,
            "ambience"      .3 .3 .3,
            "lights" [ "lightinst1" ]
        )
    end material

    object "obj1"
        visible shadow trace
        group "mesh"
                -7.068787   -4.155799   -22.885710
                -0.179573   -7.973234   -16.724060
                -7.068787    4.344949   -17.619093
                -0.179573    0.527515   -11.457443
                 0.179573   -0.527514   -28.742058
                 7.068787   -4.344948   -22.580408
                 0.179573    7.973235   -23.475441
                 7.068787    4.155800   -17.313791

                v 0   v 1   v 2   v 3   v 4   v 5   v 6   v 7

                c "mtl"  0 1 3 2
                c        1 5 7 3
                c        5 4 6 7
                c        4 0 2 6
                c        4 5 1 0
                c        2 3 7 6
        end group
    end object

    instance "inst1" "obj1" end instance

    instgroup "world"
        "caminst1" "lightinst1" "inst1"
    end instgroup

    render "world" "caminst1" "opt"         # render frame 1

    incremental camera "cam1"
        frame 2
        aperture 100
    end camera

    incremental light "light1"
        "soft_point" (
            "color"  1 0 1,
        )
    end light

    render "world" "caminst1" "opt"         # render frame 2

