Using and Writing Shaders

All color computation in mental ray is based on shaders. There are various types of shaders for different situations, such as material shaders to evaluate the material properties of a surface, light shaders to evaluate the light-emitting properties of a light source, lens shaders to specify camera properties other than the default pinhole camera, and so on.

There are built-in shaders that support SOFTIMAGE, Wavefront, and Alias compatibility. Much of the power of mental ray comes from the ability to write custom shaders and link them dynamically into mental ray at runtime to replace some or all of the built-in shaders. Custom shaders are written in C, using the full language and library support available in C.

Dynamic Linking of Shaders

Shaders are written as C subroutines, stored in files with the extension ``.c''. To use these shaders in a scene, they must be dynamically linked into mental ray at runtime. mental ray accepts shaders in three forms:

  • Directly as source code. The .mi file offers the code statement and the standalone renderer has a -code option. Both accept a source file with a .c extension.

  • In object format. Source files can be compiled to object code using Unix commands such as ``cc -O -c source.c'' (assuming that the shader source is in a file named source.c). The command may vary on some platforms; consult the manual (see ``man cc''). This is the most common form on systems that do not support DSOs; it is faster than source code because mental ray does not have to compile the shader. The compilation leaves an object file source.o (extension .o) in the current directory. This file can be passed to mental ray using the link statement in the .mi file, or with the -link option of the standalone renderer.

  • In DSO format. DSO stands for Dynamic Shared Object. DSOs are supported on most platforms, such as SGI systems running IRIX 5.2 or higher. The operating system version can be found by running the Unix command ``uname -r''. DSOs have the extension .so on SGI systems and other extensions on other systems, but it is highly recommended to use .so on all systems. This makes it possible to use the same name on all systems regardless of type. mental ray automatically substitutes .dll for .so on Windows NT platforms.

To create a DSO, first compile the shader source to object format using cc as described above, then run the Unix command ``ld -shared source.o'', again assuming that the object file is called source.o. This ld command applies to SGI IRIX 5.2; refer to ``man ld'' on other platforms. Using DSOs is the fastest way to load shaders; there is very little overhead. DSOs, like object files, are loaded using link statements or -link options.

The commands to create a DSO depend on the operating system type. To create a DSO named shader.so from a source file shader.c, use the following commands. Add the -g command line option after -c to include debugging information, or add -O to compile an optimized version. On most systems -g and -O cannot be combined. Refer to the compiler documentation for details.

SGI IRIX 5.x/6.x
cc -shared -o shader.so shader.c

HP/UX, IBM AIX
cc -c shader.c
mv shader.o shader.so

Convex SPP-UX 3.2
/usr/convex/bin/cc -c -D PARALLEL shader.c
mv shader.o shader.so
(Do not use /bin/cc.)

Sun Solaris 2
cc -c shader.c
ld -G -o shader.so shader.o -ldl
(Using Sun CC; gcc can also be used.)

Digital Unix V3.2
cc -c shader.c
ld -expect_unresolved '*' -shared -o shader.so shader.o -ldl

Windows NT
cl -c shader.c
link /DLL /OUT:shader.dll shader.obj ray.lib
(Using Visual C++ 4.x.)

Compiling and creating DSOs requires that a C development environment is installed on the system. If the cc and ld commands are not found, check whether a development environment exists. On most platforms, it is a separate product that must be purchased. On SGI systems, the product name is IDO, which stands for IRIS Development Option. Dynamically loading a DSO does not require compilers or development options.

SGI machines search for DSO files whose name does not contain a ``/'' in all directories specified by the LD_LIBRARY_PATH environment variable. It contains a sequence of paths, separated by colons. If LD_LIBRARY_PATH is undefined, the directories /usr/lib, /lib, /lib/cmplrs/cc, and /usr/lib/cmplrs/cc are searched. If LD_LIBRARY_PATH is set by the user, it is very important to always include these default directories because otherwise the standard Unix libraries will no longer be found, and the shell will be unable to start virtually all utilities and applications. This can be fixed by setting LD_LIBRARY_PATH correctly, or by exiting the shell and starting another one. For example, if all .so files to be linked are in /tmp, the following command at a shell prompt before starting mental ray or the application containing mental ray will make mental ray search /tmp:


     setenv LD_LIBRARY_PATH /usr/lib:/lib:/lib/cmplrs/cc:/usr/lib/cmplrs/cc:/tmp
     LD_LIBRARY_PATH=/usr/lib:/lib:/lib/cmplrs/cc:/usr/lib/cmplrs/cc:/tmp
     export LD_LIBRARY_PATH

The first line applies to the csh and tcsh shells; the other two lines apply to the sh and bash shells. Slaves started from the /usr/etc/inetd.conf file should be started with a matching LD_LIBRARY_PATH by using the following line as the last column of the respective /usr/etc/inetd.conf entry:

     env LD_LIBRARY_PATH=/usr/lib:...:/tmp /path/rayslave

It is recommended that .so files are given with a ``/'' in the name to avoid having to change LD_LIBRARY_PATH. For example, to link a shared object ``myshader.so'' in the current directory, give the name as ``./myshader.so''.

Note that source code (.c extension) is normally portable, unless nonportable system features (such as fsqrt on SGIs) are used. This means that the shader will run on all other vendors' systems unchanged. Object files (.o extension) and DSOs (.so extension) do not have this advantage; they must be compiled separately for each platform and, usually, for each major operating system release. For example, a Hewlett-Packard object file will not run on an SGI system, and an SGI IRIX 4.x object file cannot be used on an IRIX 5.x system, and vice versa.

On SGI systems, a shader can be debugged after it has been called for the first time, which attaches it to the program and makes its symbols available for the debugger. It must have been compiled with the -g compiler option. On non-SGI systems, debugging shaders is, unfortunately, difficult. The reason is that most debuggers cannot deal with parts of a program that have been dynamically linked. In general, the debugger will refuse to set breakpoints in dynamically linked shaders, and will step over calls to these shaders as if they were a single operating system call. Some vendors are working on fixing these problems, but at this time the only option on non-SGI systems is using printf or mi_debug statements in the shader sources. Note that when using printf, you must include <stdio.h>, or mental ray will crash.

Coordinate Systems

Internal space is the coordinate system mental ray uses to present intersection points and other points and vectors to shaders. All points and vectors in the state except bump basis vectors (which are in object space) are presented in internal space, namely, org, dir, point, normal, and normal_geom. The actual meaning of internal space is left undefined; it varies between different versions of mental ray. A shader may not assume that internal space is identical to camera space, even though this was true in versions of mental ray prior to 1.9.

World space is the coordinate system in which modeling and animation takes place.

Object space is a coordinate system relative to the object's origin. The modeler that created the scene defines the object's origin; for example, the SOFTIMAGE translator uses the center of the bounding box of the object as the object origin.

Camera space is a coordinate system in which the camera is at the coordinate origin (0, 0, 0) with an up vector of (0, 1, 0) and points down the negative Z axis.

In addition to these 3D coordinate spaces, raster space is a two-dimensional pixel location on the screen bounded by (0, 0) in the lower left corner of the image, and the rendered image resolution. The center of the pixel in the lower left corner of raster space has the coordinate (0.5, 0.5).

Screen space is defined such that (-1, -1/a) is in the lower left corner of the screen and (1, 1/a) is in the upper right, where a is the aspect ratio of the screen.

Most shaders never need to transform between spaces. Texture shaders frequently need to operate in object space; for example, in order to apply bump basis vectors to state->normal, the normal must be transformed to object space before the bump basis vectors are applied, and back to internal space before the result is passed to any mental ray function such as mi_trace_reflect. mental ray offers twelve functions to convert points and vectors between coordinate spaces:


 function                       operation
 
 mi_point_to_world(s,pr,p)      internal point to world space
 mi_point_to_camera(s,pr,p)     internal point to camera space
 mi_point_to_object(s,pr,p)     internal point to object space
 mi_point_from_world(s,pr,p)    world point to internal space
 mi_point_from_camera(s,pr,p)   camera point to internal space
 mi_point_from_object(s,pr,p)   object point to internal space
 mi_vector_to_world(s,vr,v)     internal vector to world space
 mi_vector_to_camera(s,vr,v)    internal vector to camera space
 mi_vector_to_object(s,vr,v)    internal vector to object space
 mi_vector_from_world(s,vr,v)   world vector to internal space
 mi_vector_from_camera(s,vr,v)  camera vector to internal space
 mi_vector_from_object(s,vr,v)  object vector to internal space

Point and vector transformations are similar, except that the vector versions ignore the translation part of the matrix. The length of vectors is preserved only if the transformation matrix does not scale. The mi_point_transform and mi_vector_transform functions are also available to transform points and vectors between arbitrary coordinate systems given by a transformation matrix.
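
For example, a texture or material shader that perturbs the normal with bump basis vectors might contain a fragment like the following sketch. The bump offsets and the choice of texture space 0 are assumptions made for illustration only, and <math.h> is assumed to be included; the transformation functions are called with the (state, result, source) argument order shown in the table above.

     /* Sketch only: perturb the shading normal in object space using the
      * bump basis vectors of texture space 0, then return to internal
      * space.  bump_u and bump_v are hypothetical offsets, e.g. derived
      * from a bump texture lookup. */
     miVector n_obj, n_int;
     miScalar bump_u = 0.02, bump_v = -0.01;        /* hypothetical */
     miScalar len;

     mi_vector_to_object(state, &n_obj, &state->normal);
     n_obj.x += bump_u * state->bump_x_list[0].x
              + bump_v * state->bump_y_list[0].x;
     n_obj.y += bump_u * state->bump_x_list[0].y
              + bump_v * state->bump_y_list[0].y;
     n_obj.z += bump_u * state->bump_x_list[0].z
              + bump_v * state->bump_y_list[0].z;
     mi_vector_from_object(state, &n_int, &n_obj);

     len = (miScalar)sqrt(n_int.x*n_int.x + n_int.y*n_int.y + n_int.z*n_int.z);
     if (len > 0) {                      /* renormalize before use      */
         n_int.x /= len;
         n_int.y /= len;
         n_int.z /= len;
     }
     state->normal = n_int;              /* safe to pass to mi_trace_*  */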

Shader Type Overview

There are eight types of shaders, all of which can be substituted by user-written shaders:

  • material shaders describe the visible material of an object. They are the only mandatory part of any material description. Material shaders are called whenever a visible ray (eye ray, reflected ray, refracted ray, or transparency ray) hits an object. Material shaders have a central function in mental ray.

  • volume shaders are called to account for atmospheric effects encountered by a ray. The state (see below) distinguishes two types of volume shaders: the standard volume shader that is called in most cases, and the refraction volume shader that is taken from the object material at the current intersection point, and becomes the standard volume shader if a refraction or transparency ray is cast. Many material shaders substitute a new standard volume shader based on inside/outside calculations. Volume shaders, unlike other shaders, accept an input color (such as the one calculated by the material shader at the last intersection point) that they are expected to modify.

  • light shaders implement the characteristics of a light source. For example, a spot light shader would use the illumination direction to attenuate the amount of light emitted. A light shader is called whenever a material shader uses a built-in function to evaluate a light. Light shaders normally cast shadow rays if shadows are enabled to detect obscuring objects between the light source and the illuminated point.

  • shadow shaders are called instead of material shaders when a shadow ray intersects with an object. Shadow rays are cast by light sources to determine visibility of an illuminated object. Shadow shaders are basically light-weight material shaders that calculate the transmitted color of an object without casting secondary or shadow rays.

  • environment shaders are called instead of a material shader when a visible ray leaves the scene entirely without intersecting an object. Typical environment shaders evaluate a texture mapped on a virtual infinite sphere enclosing the scene (virtual because it is not part of the scene geometry).

  • texture shaders come in three flavors: color, scalar, and vector. Each calculates and returns the respective type. Typical texture shaders return a color from a texture image after some appropriate coordinate transformation, or compute a color at a location in 3D space using some sort of noise function. Their main purpose is to relieve other shaders, such as material or environment shaders, from performing color and other computations. For example, if a marble surface were needed, it should be written as a texture shader and not a material shader because a texture shader does not have to calculate illumination by light sources, reflections, and so on. It is much easier to write a texture shader than a material shader. mental ray never calls a texture shader directly, it is always called from one of the other types of shaders.

  • lens shaders are called when a primary ray is cast by the camera. They may modify the eye ray's origin and direction to implement cameras other than the standard pinhole camera, and may modify the result of the primary ray to implement effects such as lens flares.

  • output shaders are different from all other shaders and receive different parameters. They are called when the entire scene has been completely rendered and the output image resides in memory. Output shaders operate on the output image to implement special filtering or compositing operations. Output shaders are not associated with any particular ray because they are called after the last ray is completed.


The following diagram illustrates the path of a ray cast by the camera. It first intersects with a sphere at point A. The sphere's material shader first casts a reflection ray that hits a box, then a refraction ray that intersects the sphere at its other side T, and finally it casts a transparency ray that also intersects the sphere, at D. (This example is contrived; it is very unusual for a material shader to cast both a refraction and a transparency ray.) The same material shader is called at points A, T, and D. In this example, the reflection trace depth may have prevented further reflection rays from being cast at T and D.


[Figure: volume_rays]

The annotations set in italics are numbered; the events described happen in the sequence given by the numbers.

Since material shaders may do inside/outside calculations based on the surface normal or the parent state chain (see below), the volume shaders are marked (1) and (2), depending on whether the volume shader was left by A or by T/D in the refraction_volume field of the state. The default refraction volume shader is the one found in the material definition, or the standard volume shader if the material defines no volume shader. For details on choosing volume shaders, see the section on writing material and volume shaders. Note that the volume shaders in this diagram are called immediately after the material shader returns.


The next diagram depicts the situation when the material shader at the intersection point M requests a light ray from the light source at L, by calling a function such as mi_sample_light. This results in the light shader of L being called. No intersection testing is done at this point. Intersection testing takes place when shadows are enabled and the light shader casts shadow rays (see shadow ray) from the light source L to the illuminated point M. For each obscuring object (A and B), a shadow ray is generated with the origin L and the intersection point A or B, and the shadow shaders of objects A and B are called to modify the light emitted by the light source based on the transparency attributes of the obscuring object. Note that no shadow ray is generated for the segment from B to M because no other obscuring object whose shadow shader could be called exists. Note also that although shadow rays always go from the light source towards the illuminated point, the order in which the shadow shaders are called is not defined unless the shadow_sort option is set in the view; here, steps 4 and 5 may be reversed if it is not. Two shadow rays are cast, even though the light shader has called mi_trace_shadow only once.


[Figure: shadow_rays]

The remainder of this chapter describes how to write all types of shaders. First, the concepts of the ray tracing state and of parameter passing common to all shaders are presented, followed by a detailed discussion of each type of shader.

State Variables

Every shader needs to access information about the current state of mental ray, and information about the intersection that led to the shader call. This information is stored in a single structure known as the state. Not all information in the state is of interest or defined for all shaders; for example, lens shaders are called before an intersection is done and hence have no information such as the intersection point or the normal there. It is recommended to name the formal state parameter that shaders receive state, because some macros provided in the mi_shader.h include file that require access to the state rely on this name. The state, and everything else needed to write shaders, is defined in mi_shader.h, which must be included by all shader source files.
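
As a minimal illustration of these conventions, a complete shader source file has roughly the shape sketched below. The shader name, its parameter structure, and the use of the first argument as the result color are assumptions made for this sketch (and the matching .mi declaration is omitted); only the #include and the formal parameter named state follow directly from the rules above.

     #include "mi_shader.h"        /* required by every shader source   */

     struct simple_params {        /* hypothetical parameter structure  */
         miColor color;
     };

     /* mental ray passes the parameters as the third argument; the
      * second argument must be named "state" so that the convenience
      * macros work. */
     miBoolean simple_shader(
         miColor              *result,
         miState              *state,
         struct simple_params *paras)
     {
         *result = paras->color;   /* ignore illumination; constant color */
         return(miTRUE);
     }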

Before a shader is called, mental ray prepares a new state structure that provides global information to the shader. This state may be the same data structure that was used in the previous call (this is the case for shaders that modify another shader's result, like lens, shadow, and volume shaders); or it may be a new state structure that is a copy of the calling shader's state with some state variables changed (this is done if a secondary ray is cast with one of the tracing functions provided by mental ray). For example, if a material shader that is using state A casts a reflected ray, which hits another object and causes that object's material shader to be called with state B, state B will be a copy of state A except for the ray and intersection information, which will be different in states A and B. State A is said to be the parent of state B. The state contains a parent pointer that allows sub-shaders to access the state of parent shaders. If a volume shader is called after the material shader, the volume shader modifies the color calculated by the material shader, and gets the same state as the material shader, instead of a fresh copy.

This means that it is possible to pass information from one shader to another in the call tree for a primary ray, by one of two methods: either the parent (the caller) changes its own state, which will be inherited by the child, or the child follows the parent pointer. The state contains a user pointer in which a parent can store the address of a local data structure, to pass it to sub-shaders. Since every sub-shader inherits this pointer, it may access information provided by its parent. A typical application of this is inside/outside calculations performed by material shaders to find out whether the ray is inside a closed object, which determines how parameters such as the index of refraction are interpreted.

Note that the state can be used to pass information from one shader to sub-shaders that are lower in the call tree. Care must be taken not to destroy information in the state because some shaders (shadow, volume, and the first eye shader) re-use the state from the previous call. In particular, the state cannot be used to pass information from one primary (camera) ray to the next. Static variables can be used in the shader for this purpose, but care must be taken to avoid multiple access on multiprocessor shared-memory machines. On such a machine, all processors share the same set of static variables, and every change by one processor becomes immediately visible to all other processors, which may be executing the same shader at the same time. Locking facilities are available in mental ray to protect critical sections that may execute only once at any time.
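
A sketch of this technique follows. The structure and its field values are invented for illustration; the data can live on the calling shader's stack because sub-shaders for any rays it casts run before the tracing call returns.

     struct my_ray_data {          /* hypothetical per-ray data          */
         miBoolean inside;         /* inside a closed object?            */
         miScalar  ior;            /* index of refraction of that medium */
     };

     struct my_ray_data data;
     data.inside = miFALSE;        /* set from the shader's own          */
     data.ior    = 1.0;            /* inside/outside logic               */
     state->user = &data;          /* inherited by all child states      */
     /* ... cast reflection or refraction rays here; a sub-shader can
      * read ((struct my_ray_data *)state->user)->ior, or walk
      * state->parent to inspect the caller's state directly ... */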

Here is a complete list of state variables usable by shaders. Variables not listed here are for internal use only and should not be accessed or modified by shaders. The first table lists all state variables that remain unchanged for the duration of the frame:

Frame

 type            name         content
 
 int             version      shader interface version
 miTag           camera_inst  tag of camera instance
 miCamera *      camera       camera information
 miRc_options *  options      general rendering options

The camera data structure pointed to by camera has the following fields. None of these may be written to by a shader.

 type       name          content
 
 miBoolean  orthographic  orthographic rendering
 miScalar   focal         focal length of the camera
 miScalar   aperture      aperture of the camera
 miScalar   aspect        aspect ratio x/y
 miRange    clip          Z clipping distances
 int        x_resolution  image width in pixels
 int        y_resolution  image height in pixels
 int        window.xl     left image margin
 int        window.yl     bottom image margin
 int        window.xh     right image margin
 int        window.yh     top image margin
 miTag      volume        view volume (atmosphere)
 miTag      environment   view environment shader
 int        frame         frame number
 float      frame_time    frame time in seconds

The option data structure pointed to by options has the following format. The option structure may also not be written to by shaders.

 type                    name              content
 
 miBoolean               shadow            shadow casting turned on?
 miBoolean               trace             ray tracing turned on?
 miBoolean               scanline          scanline mode turned on?
 miBoolean               shadow_sort       sort shadow shader calls?
 miBoolean               contour           contours turned on?
 miBoolean               motion            motion blur turned on?
 enum miRc_sampling      sampling          image sampling mode
 enum miRc_filter        filter            nonlocal sampling filter
 enum miRc_acceleration  acceleration      ray tracing algorithm
 enum miRc_face          face              front, back, or both faces
 enum miRc_field         field             odd, even, or both fields
 int                     reflection_depth  max reflection trace depth
 int                     refraction_depth  max refraction trace depth
 int                     trace_depth       max combined trace depth

version
The version number of the interface and the state structure. Useful to check if the version is compatible with the shader. Version 1 stands for mental ray 1.8; version 2 stands for 1.9.

camera_inst
The camera instance is for internal use by mental ray only.

The following parameters are in the state->camera structure:

orthographic
This flag is miTRUE if the renderer is in orthographic mode, and miFALSE if it is in perspective mode.

focal
The focal length of the camera (the distance from the origin in camera space to the viewing plane that the image pixels are mapped on).

aperture
The aperture of the camera (the width of the viewing plane in camera space).

aspect
The aspect ratio (the ratio of the width and height of the viewing plane in camera space).

clip
This data structure has two members, min and max, that specify the hither and yon clipping planes in camera space. Objects will be clipped if their Z coordinate in camera space is less than -max or greater than -min.

x_resolution
The x resolution of the image in pixels.

y_resolution
The y resolution of the image in pixels.

window
The window specifies the lower left and the upper right pixel of the sub-region of the image to be rendered. If xl and yl are 0 and xh and yh match or exceed the resolution minus one, the entire image is rendered. The window is clipped to the resolution. Pixels outside the window are set to black.

volume
(see volume shader) The global volume (atmosphere) shader of the view that is used for attenuating rays outside of objects, such as the primary ray from the camera to the first object intersection. Material shaders inherit this volume shader because the volume state variable defaults to the view volume, but shaders may override the volume. See below.

environment
(see environment shader) The environment (reflection map) shader of the view. It is used to assign a color to primary eye rays that leave the scene without ever intersecting an object. Material shaders that do not define their own environment shaders for evaluation of local reflection maps inherit the view environment shader. Reflection maps give the illusion of true raytraced reflections by looking up a texture based on the reflection direction.

frame
The current frame number. In field mode, this is the field number; two successive frames rendered by mental ray are combined into a single output frame by an output shader. In field mode, the odd frame is the first frame and the even frame is the second.

frame_time
The current frame number, expressed as a time in seconds. The relation between frame and frame_time depends on the frame rate. Both numbers are taken verbatim from the input scene; mental ray does not change or verify either number. If the frame time is omitted in the .mi file, it is set to 0.0.

The following parameters are in the state->options structure:

shadow
If this flag is miTRUE, shadow casting is enabled. If it is miFALSE, no shadows are rendered regardless of the shadow flags of individual objects, and no shadow shaders will be called.

trace
If this flag is miTRUE, secondary ray tracing (reflection and refraction) is enabled. If it is miFALSE, only eye and transparency rays are evaluated, and no shadows are computed regardless of the shadow flag.

scanline
If this flag is miTRUE, the scanline algorithm is enabled for primary rays. In this case, lens shaders may not change the ray origin and direction, and motion blurring may not be used. If scanline is miFALSE, primary rays are cast using pure ray tracing, which may slow down the rendering process.

shadow_sort
If this flag is miTRUE, shadow shaders are called in order, the one closest to the light source first. If it is off, shadow shaders are called in random order. This flag is here to allow shadow shaders that depend on the correct order to abort if the flag is miFALSE.

contour
If this flag is miTRUE, contour rendering is enabled; if it is miFALSE, contour rendering is disabled. Contours and color images cannot be rendered at the same time.

motion
If this flag is miTRUE, motion blurring is enabled; if it is miFALSE, motion blurring is disabled even if motion vectors are present.

sampling
Sampling specifies the supersampling algorithm. Three constants are defined: miRC_SAMPLING_RECURSIVE is the default recursive supersampling algorithm that generally results in the highest quality; miRC_SAMPLING_CONSTANT takes a constant number of samples per pixel; and miRC_SAMPLING_ADAPTIVE is a nonrecursive adaptive algorithm.

filter
Nonlocal filtering weighs samples according to their distance from the pixel center. Possible values are miRC_FILTER_BOX, miRC_FILTER_TRIANGLE, and miRC_FILTER_GAUSS.

acceleration
The ray tracing algorithm. This is either miRC_ACCEL_RAY_CLASSIFICATION (an algorithm based on ray classification), or miRC_ACCEL_SPACE_SUBDIVISION (which is a BSP algorithm). The latter is often, but by no means always, faster. Note that by default, primary rays are computed using a scanline algorithm if possible.

face
This variable specifies whether front-facing, back-facing, or both triangles are taken into account. All others are ignored, resulting in speed improvements. This is also called back face culling. The possible values are miRC_FACE_FRONT, miRC_FACE_BACK, and miRC_FACE_BOTH.

field
(see field rendering) Field rendering, if turned on, renders only the even or odd scanlines of an image. Two successive renders are then combined to a full frame, resulting in smoother animations. miRC_FIELD_OFF turns off field rendering, miRC_FIELD_EVEN renders only even scanlines (top is odd), and miRC_FIELD_ODD renders only odd scanlines.

reflection_depth
The maximal allowed number of recursive reflections. A reflection ray will only be cast when this limit is not exceeded. If set to 0, (see secondary ray) no secondary reflection rays will be cast. See reflection_level below.

refraction_depth
The maximal allowed number of recursive refractions. A refraction ray will only be cast if this number is not exceeded. If set to 0, (see secondary ray) no secondary refraction or transparency rays will be cast. See refraction_level below.

trace_depth
The maximal summed trace depth. mental ray will allow this many segments of the ray tree when it is followed through the scene, with any combination of reflections and refractions permitted by the previous two values until the total trace depth is reached. A ray will only be cast if this number is not exceeded.

Image Samples

The state variables in the next table describe an eye (primary) ray. There is one eye ray for every sample that contributes to a pixel in the output image. If a material shader that evaluates a material hit by a primary ray casts secondary reflection, refraction, transparency, light, or shadow rays, all shaders called as a consequence will inherit these variables unmodified:

 type          name         content
 
 miScalar      raster_x     X coordinate of image pixel
 miScalar      raster_y     Y coordinate of image pixel
 miFunction *  shader       current shader
 miLock        global_lock  lock shared by all shaders (see locking)

raster_x
The X coordinate of the point in raster space to be rendered. Raster space is the pixel grid of the rendered image file, with 0/0 at the lower left corner.

raster_y
The Y coordinate of the point in raster space to be rendered.

shader
This pointer points to a data structure describing the current shader. The fields usable by shaders are lock, which is a lock shared by all calls to this shader (see locking), miTag next_function for chained shaders such as lens shaders, and char parameters[], which contains the shader parameters. The latter is redundant for the current shader because the parameter pointer is also passed as the third shader argument, but it can be used to find the parameters of parent shaders. This should be used with care because the C data structure of parent shader parameter lists is not generally known.

global_lock
This lock is shared by all shaders, regardless of their name. It can be used to lock critical sections common to all shaders. For example, it could be used to protect a nonreentrant user-defined random-number generator, or initialization of a more specific set of locks. (see locking)
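
For example, a one-time initialization shared by all shaders could be protected as in the sketch below. The names mi_lock and mi_unlock are assumed here; see the function summary at the end of this chapter for the exact locking interface.

     static miBoolean initialized = miFALSE;

     mi_lock(state->global_lock);          /* assumed locking functions  */
     if (!initialized) {
         /* ... build lookup tables, initialize more specific locks ... */
         initialized = miTRUE;
     }
     mi_unlock(state->global_lock);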

Rays

Whenever a ray is cast the following state variables are set to describe the ray:

 type       name              content
 
 miState *  parent            state of parent shader
 int        type              type of ray: reflect, light...
 miBoolean  contour           set in contour-line mode
 miBoolean  scanline          from scanline algorithm
 void *     cache             RC intersection cache
 miVector   org               start point of the ray
 miVector   dir               direction of the ray
 miScalar   time              shutter interval time
 miTag      volume            volume shader of primitive
 miTag      environment       environment shader
 int        reflection_level  current reflection ray depth
 int        refraction_level  current refraction ray depth

parent
(see shader call tree) Points to the state of the parent ray. In the first lens shader, it is NULL. In subsequent lens shaders, it points to the previous lens shader. In the material shader that is called when the primary ray hits an object, it points to the last lens shader's state, or is NULL if no lens shader has been applied. For material shaders called when a secondary reflection or refraction ray hits an object, it points to the parent material shader that cast the ray. In light shaders and environment shaders, it points to the state of the shader which requested the light or environment lookup. For shadow shaders, it points to the state of the light shader that started the shadow trace. In volume shaders, its value is the same as for a material or light shader of the same ray.

type
Specifies the reason for this ray. This is an enumerator: miRAY_EYE rays are primary rays; miRAY_TRANSPARENT, miRAY_REFRACT, and miRAY_REFLECT rays are cast by material shaders to determine transparency, refractions, and reflections, respectively; miRAY_SHADOW rays are shadow rays cast from a light source; miRAY_LIGHT rays are cast when a light source is evaluated (which may result in the light source casting shadow rays back); miRAY_ENVIRONMENT rays sample an environment (reflection) map; and miRAY_NONE is a catch-all for other types of rays.
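
A shader that behaves differently depending on why it was called can simply branch on this variable, as in this small sketch:

     if (state->type == miRAY_SHADOW) {
         /* shadow ray: only transmission matters, skip highlights etc. */
     } else if (state->type == miRAY_EYE) {
         /* primary ray: do the full computation */
     }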

contour
This flag is set if mental ray is in contour rendering mode. In this mode, shaders can often simplify calculations, such as computing only the diffuse color.

scanline
This flag is set if the current intersection was found by means of the scanline algorithm.

cache
This variable is used by the renderer internally to improve speed. Its existence determines which shaders may call which tracing functions. By setting this pointer to 0, the shader can ease these restrictions and call tracing functions that are not normally legal for this type of shader. For details, see the section Shaders and Trace Functions below.

org
The origin (start point) of the ray in internal space. In the primary material shader, it is set by the last lens shader, or is (0, 0, 0) if there is no lens shader and the default non-orthographic pinhole camera is used. In all other material shaders, it is the previous intersection point. In light and shadow shaders, it is the origin of the light source. If the light source does not have an origin, it is undefined.

dir
The direction of the ray in internal space. This is basically the normalized difference between point and org, pointing towards the current intersection point (except in the case of directional light sources that have no origin; in this case the light direction is used). Light and shadow rays point from the light source to the intersection.

time
The time of the ray in the shutter interval. If motion blurring is turned off, the time is always 0. Otherwise, it is systematically sampled in the range from 0 to the shutter time.

volume
(see volume shader) The volume shader to be applied to the ray. The volume shader is called immediately after the material shader returns, without any change to the state. The volume shader changes the result color of the material shader to take into account the distance the ray has traveled to the material. For primary rays (see primary ray) this is the volume of the view, for reflection and light rays the volume of the parent, and for refraction and transparency rays (see secondary ray) the refraction_volume of the parent. Note that the mi_trace_refract (see mi_trace_refraction) and mi_trace_transparent (see mi_trace_transparent) functions copy refraction_volume to volume to ensure that sub-shaders (see shader call tree) use the volume shader that applies to the interior of the object. Volume shaders are also applied to light rays.

environment
(see environment shader) The active environment shader. For primary rays this is the environment of the view. For reflection and refraction rays the active shader is taken from the material, if it specifies an environment shader. If it does not, the environment shader defaults to the environment shader in the view, if present. The purpose of the environment shader is to provide a color if a ray leaves the scene entirely.

reflection_level
The reflection level of the ray. It may range from 0 for primary rays to reflection_depth minus one. The trace_depth imposes another limit: The sum of reflection_level and refraction_level must be one less than trace_depth. A shader may decrement or reset the reflection level to circumvent the restrictions imposed by the trace depth state variables.

refraction_level
The refraction level of the ray. This is equivalent to the reflection level but applies to refraction and transparency (which is a variation of refraction that does not take the index of refraction into account) rays. A shader may decrement or reset the refraction level to circumvent the restrictions imposed by the trace depth state variables.
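
As an illustration of how these counters interact with the depth limits in the options structure, a material shader might guard a reflection ray as in the sketch below. The mi_reflection_dir and mi_trace_reflect calls are shown with assumed argument orders; see the function summary for the exact forms.

     miColor  refl;
     miVector dir;

     if (state->reflection_level < state->options->reflection_depth &&
         state->reflection_level + state->refraction_level <
                                        state->options->trace_depth) {
         mi_reflection_dir(&dir, state);          /* assumed signature   */
         if (mi_trace_reflect(&refl, state, &dir)) {
             /* ... blend refl into the material result ... */
         }
     }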


Intersection

The variables in the next table are closely related to the previous. They describe the intersection of the ray with an object, and give information about that object and how it was hit.

 type          name               content
 
 miTag         refraction_volume  volume shader for refraction
 unsigned int  label              object label for label file
 miTag         instance           instance of object
 miTag         light_instance     instance of light
 miScalar[4]   bary               barycentric coordinates
 miVector      point              intersection (ray end) point
 miVector      normal             interpolated normal at point
 miVector      normal_geom        geometry normal at point
 miBoolean     inv_normal         true if normals were inverted
 miScalar      dot_nd             dot prod of normal and dir
 miScalar      dist               length of the ray
 miTag         material           material of the hit primitive
 int           pri                identifies hit primitive
 miScalar      shadow_tol         safe zone against self-shadows
 miScalar      ior                index of refraction of medium
 miScalar      ior_in             index of refraction of previous medium

refraction_volume
(see volume shader) The volume at the other side of the object. This is set to the volume shader of the material. It will be applied to refraction rays. This is implemented by copying refraction_volume to volume (which is the shader that gets called when the material shader returns) in the mi_trace_refract and mi_trace_transparent functions. The material shader may decide that the ray is leaving and not entering the object, and look in the state's parents for an outside volume shader, or simply copy view_volume to refraction_volume to make the ray resume in the outside atmosphere.

label
The label of the hit object. Every object may contain a label that is made available by this variable. When the primary material shader returns, the label is copied from the state to the ``tag'' frame buffer, if one was created by an appropriate output statement in the view. The primary material shader is free to change this variable to any value to put different values into the tag frame buffer.

instance
The instance of the object containing the primitive hit by the ray.

light_instance
If the ray is a light ray or the corresponding (see shadow ray) shadow ray: the light instance.

bary
The three barycentric coordinates of the intersection in the hit primitive. The fourth value is reserved for implicit patches and is not currently used. Barycentric coordinates are weights that specify the contribution by each vertex or control point. The sum of all barycentric coordinates is 1.

point
The intersection point (end point of the ray) in internal space.

normal
The (interpolated) surface normal at the intersection point if vertex normals are present, or the uninterpolated geometric normal of the primitive otherwise, in internal space. It points to the side of the primitive from which it is hit by the ray, and is normalized to within the precision of a float. Care should be taken when calculating the length of the normal; the result of this calculation might be very slightly greater than 1 because a float has only a little over six significant digits. This can cause math functions like acos to return NaN (Not a Number, an illegal result), which usually results in white pixels in the output if the NaN finds its way into a result color.
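
For example, before calling acos on such a dot product, a shader can clamp the value to the legal range, as in this fragment (which assumes <math.h> is included):

     double d, angle;

     d = state->normal.x * state->dir.x +
         state->normal.y * state->dir.y +
         state->normal.z * state->dir.z;
     if (d >  1.0) d =  1.0;               /* guard against round-off   */
     if (d < -1.0) d = -1.0;
     angle = acos(d);                      /* never NaN after clamping  */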

normal_geom
The uninterpolated normal of the hit primitive in internal space. It points to that side of the primitive from which it is hit by the ray. It is normalized.

inv_normal
If a ray hits geometry from behind (that is, if the dot product of the ray direction and the normal is positive), mental ray inverts both normal and normal_geom and sets inv_normal to miTRUE.

dot_nd
The negative dot product of the normal and the direction (after the normals have been inverted in the case of backfacing geometry). In the case of light rays, it is the negative dot product of the light ray direction and the normal at the point which is to be illuminated.

dist
The length of the ray, which is the distance from org to point if there is an origin, in internal space. A value lower or equal to 0.0 indicates that no intersection was found, and that the length is infinite.

material
The material of the primitive hit by the ray. The data can be accessed using mi_db_access followed by mi_db_unpin. This is generally not necessary because all relevant information is also available in the state.

pri
This number uniquely identifies the primitive in the current box. All primitives are grouped in boxes. When casting light rays, mental ray may check whether the primitive's normal is pointing away from the light and ignore the light in this case; for this reason some shaders, such as ray-marching volume shaders, assign 0 to pri before sampling a light. Shaders other than volume shaders should restore pri before returning.
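
The save-and-restore pattern described above looks like the sketch below in a shader; the actual light sampling call is elided because its exact form is given in the function summary.

     int old_pri = state->pri;

     state->pri = 0;               /* disable the facing-away check for  */
                                   /* the hit primitive                  */
     /* ... sample the lights here, e.g. with mi_sample_light() ... */
     state->pri = old_pri;         /* shaders other than volume shaders  */
                                   /* should restore pri before returning*/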

shadow_tol
If a shadow ray is found to intersect the primitive in shadow, at a distance of less than this tolerance, the intersection is ignored. This prevents self-shadowing in the case of numerical inaccuracies.

ior
This field is intended for use by material shaders that need to keep track of the index of refraction for inside/outside determination. It is not used by mental ray. The mi_mtl_refraction_index shader interface function stores the index of refraction of the medium the ray travels through in this state variable.

ior_in
Like ior, this field helps the shader with inside/outside calculations. It contains the ``previous'' index of refraction, in the medium the ray was travelling through before the intersection. Like ior, ior_in is neither set nor used by mental ray, it exists to allow the material shader to inherit the previous index of refraction to subsequent shaders.

Textures, Motion, Derivatives

(see texture) The following table is an extension to the previous. These variables give information about the intersection point for texture mapping. They are defined when the ray has hit a textured object:

 type        name         content
 
 miVector *  tex_list     list of texture coordinates
 miVector *  bump_x_list  list of X bump basis vectors
 miVector *  bump_y_list  list of Y bump basis vectors
 miVector    tex          texture coord (tex shaders)
 miVector    motion       interpolated motion vector
 miVector    u_deriv      surface U derivative
 miVector    v_deriv      surface V derivative

tex_list
A pointer to an array containing the texture coordinates of the intersection point in all texture spaces. When material shaders that support multiple textures on a surface call a texture lookup function, they must specify which texture coordinate to use, usually by choosing one of the texture vertices in tex_list and copying it to tex.
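
For example, a material shader that uses texture space 0 might contain the fragment below. The lookup call and its argument order are assumptions here (check the texture functions in the summary), and the texture tag is taken from a hypothetical color texture parameter paras->map.

     miColor tex_color;

     state->tex = state->tex_list[0];      /* choose texture space 0    */
     /* assumed lookup call; the texture tag comes from the parameters: */
     mi_lookup_color_texture(&tex_color, state, paras->map, &state->tex);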

bump_x_list
A pointer to an array containing the x bump basis vectors in all texture spaces. The vectors are in object space.

bump_y_list
A pointer to an array containing the y bump basis vectors in all texture spaces. The vectors are in object space.

tex
The texture coordinates where the texture should be evaluated. This variable is available only in texture shaders; it is set by the caller (for example, a texture lookup from a material shader).

motion
The motion vector of the intersection point. If the object has no motion vectors (m statements in the vertices in the scene description file), the motion vector is a null vector.

u_deriv
If the object is a free-form surface object, the surface U derivative vector. For polygonal objects, the derivative vectors are null vectors.

v_deriv
If the object is a free-form surface object, the surface V derivative vector. The U and V derivative vectors and the normal always form right angles with one another. In version 1.9 of mental ray, surface derivative vectors are not supported; u_deriv and v_deriv are always null vectors.

User Fields

Finally, the user field allows a shader to store a pointer to an arbitrary data structure in the state. Subsequent shaders called as a result of operations done by this shader (such as casting a reflection ray or evaluating a light or a texture) inherit the pointer and can read and write this shader's local data. Sub-shaders can also find other parents' user data by following state parent pointers, see above. With this method, extra parameters can be provided to and extra return values received from sub-shaders. The user variables are initialized to 0.

 type          name       content
 
 void *        user       user data pointer
 int           user_size  user data size (optional)
 miFunction *  shader     shader data structure

The shader pointer can be used to access the shader parameters, as state->shader->parameters. This is redundant for the current shader because the parameter pointer is also passed as the third shader argument, but it can be used to find a parent shader's parameters. For example, the SOFTIMAGE material shader uses this to perform inside/outside calculations.

User Parameter Declarations

In addition to the state variables that are provided by mental ray and are shared by all shaders, every shader has user parameters. In the .mi scene file, shader references look much like a function call: the shader name is given along with a list of parameters. Every shader call may have a different list of parameters. mental ray does not restrict or predefine the number and types of user parameters, any kind of information may be passed to the shader. Typical examples for user parameters are ambient, diffuse, and specular colors for material shaders, attenuation parameters for light shaders, and so on. An empty parameter list in a shader call (as opposed to a shader declaration) has a special meaning; see the note at the end of this chapter.

In this manual, the term ``parameters'' refers to shader parameters in the .mi scene file; the term ``arguments'' is used for arguments to C functions.

Shaders need both state variables and user parameters. Generally, variables that are computed by mental ray, or whose interpretation is otherwise known to mental ray, and that are useful to different types or instances of shaders are found in state variables. Variables that are specific to a shader, and that may change for each instance of the shader, are found in user variables. mental ray does not access or compute user variables in any way, it merely passes them from the .mi file to the shader when it is invoked.

To interpret these parameters in the .mi file, mental ray needs a declaration of parameter names and types that is equivalent to the C struct that the shader later uses to access the parameters. The declaration in the .mi file must be exactly equivalent to the C struct, or the shader will mis-interpret the parameter data structure constructed by mental ray. This means that three parts are needed to write a shader: the C source of the shader, the C parameter struct, and the .mi declaration. The latter is normally stored in a separate file that is included into the .mi scene file using a $include statement.

Every .mi declaration has the following form:

    declare "shadername" ( 
        type "parametername", 
        type "parametername", 
        ... 
        type "parametername" 
    ) 
 

It is recommended that shadername and parametername are enclosed in double quotes to disambiguate them from reserved keywords and to allow special characters such as punctuation marks.

The declaration gives the name of the shader to declare, which is the name of the C function and the name used when the shader is called, followed by a list of parameters with types. Names are normally quoted to disambiguate them from keywords reserved by mental ray. Commas separate parameter declarations. The following types are supported:

boolean
A boolean is either true or false. Possible values are on, off, true, and false.

integer
Integers are in the range -2^31 ...2^31-1.

scalar
Scalars are floating-point numbers, defined as an optional minus sign, a sequence of digits containing a decimal point at any place, followed by an optional decimal exponent. A decimal exponent is the letter e followed by a positive or negative integer. Examples are 1.0, .5, 2., 1.e4, or -2.3e-6 .

vector
A vector is a sequence of three scalars as defined above, describing the x, y, and z components of the vector.

transform
A transformation is a 4x4 matrix of scalars, with the translation in the last row. The data structure consists of an array of 16 miScalars in row-major order.

color
A color is a sequence of three or four scalars as defined above, describing the red, green, blue, and alpha components of the color, in this order. If alpha is omitted, it defaults to 0.0.

color texture
Color textures name a texture as defined by a color texture statement in the .mi file. That color texture statement names either a texture file, or a texture shader followed by a user parameter list. Note that a color texture does not name the texture shader directly. When a color texture is evaluated, it returns an RGBA color.

scalar texture
Scalar textures are equivalent to color textures, except that they name a scalar texture statement in the .mi file. When a scalar texture is evaluated, it returns a scalar (a single floating-point number). This is most often used to apply a texture map to a scalar material parameter such as transparency.

vector texture
Vector textures are another variation; they name a vector texture statement in the .mi file, which returns a vector when evaluated. Bump maps on materials are typical applications for vector textures. Since mental ray regards all types of textures as generic shading functions, the distinction between different texture types may disappear in future versions.

light
Lights specify a light as defined by a light statement in the .mi file, which names a light. Like textures, light parameters do not name light shaders directly.

struct { ... }
Structures define a sub-list of parameters. This is normally used to build arrays of structures, for example to declare an array of textures, each of which comes with a blending factor. The ellipsis ... stands for another comma-separated sequence of type/parametername pairs.

array type
Arrays are different from all other types in that they are not named. The array keyword is a prefix to any of the above types that turns a single value into a one-dimensional array of values. For example, array scalar "terms" declares a parameter named terms that is an array of scalars. The array size is unlimited. Arrays of arrays are not supported; but arrays of structs containing arrays can be used.

For example, a simple material shader containing ambient, diffuse, and specular colors, transparency, an optional array of bump map textures, and an array of lights could be declared in the .mi file as:

    declare "my_material" ( 
        color "ambient", 
        color "diffuse", 
        color "specular", 
        scalar "shiny", 
        scalar "reflect", 
        scalar "transparency", 
        scalar "ior", 
        vector texture "bump", 
        array light "lights" 
    ) 
 

If there is only one array, there is a small efficiency advantage in listing it last. The material shader declared in this example can be used in a material statement like this:


    material "mat1" 
        "my_material" ( 
            "specular" 1.0 1.0 1.0, 
            "ambient" 0.3 0.3 0.0, 
            "diffuse" 0.7 0.7 0.0, 
            "shiny" 50.0, 
            "bump" "tex1", 
            "lights" [ "light1", "light2", "light3" ], 
            "reflect" 0.5 
        ) 
    end material 
 

Note that the parameters can be defined in any order, and that parameters can be omitted. This example assumes that the texture tex1 and the three lights have been defined prior to this material definition. Again, be sure to use the names of the textures and lights, not the names of the texture and light shaders. All names in the above two examples were written as strings enclosed in double quotes to disambiguate names from reserved keywords, and to allow special characters in the names that would otherwise be illegal.

When the shader my_material is called, its third argument will be a pointer to a data structure built by mental ray from the declaration and the actual parameters in the .mi file. In order for the C source of the shader to access the parameters, it needs an equivalent declaration in C syntax that must agree exactly with the .mi declaration. The type names can be translated according to the following table:

  .mi syntax      mental ray 1.8 syntax  mental ray 1.9 syntax
 
 boolean          int                    miBoolean
 integer          int                    miInteger
 scalar           float                  miScalar
 vector           Vector                 miVector
 transform        Trans                  miMatrix
 color            Color                  miColor
 color texture    Texture *              miTag
 scalar texture   Texture *              miTag
 vector texture   Texture *              miTag
 light            Light *                miTag
 string           char *                 N/A
 struct           struct                 struct

It is strongly recommended to use the same parameter names in the C declaration as in the .mi declaration.

Arrays are more complicated than the types in this table because the size of the array is not known at declaration time. In mental ray 1.8 syntax, an array keyword in the .mi declaration must be declared as a pointer to the appropriate type followed by an integer with the same name with n_ prepended. mental ray will store the number of elements in the array pointed to by the pointer in the integer. If the array is empty, both the pointer and the integer will be 0.

In mental ray 1.9, the C declaration consists of a start index prefixed with i_, the size of the array prefixed with n_, and the array itself, declared as a pointer. Future versions of mental ray will change this pointer to an array of size [1]; see below. mental ray will allocate the structure as large as required by the actual array size at call time. To access array element i in the range 0 ... n_array, the C expression array[i + i_array] must be used. This expression allows mental ray 1.9 to store the user parameters in virtual shared memory regardless of the base address of the user parameter structure, which is different on every host on the network.

For the above example .mi declaration, the equivalent C structure declaration using mental ray types looks like this:

    struct my_material { 
        Color ambient; 
        Color diffuse; 
        Color specular; 
        miScalar shiny; 
        miScalar reflect; 
        miScalar transparency; 
        miScalar ior; 
        Texture *bump; 
        Light **lights; 
        int n_lights; 
    }; 
 

while the equivalent declaration using mental ray 1.9 types is:

    struct my_material { 
        miColor ambient; 
        miColor diffuse; 
        miColor specular; 
        miScalar shiny; 
        miScalar reflect; 
        miScalar transparency; 
        miScalar ior; 
        miTag bump; 
        int i_lights; 
        int n_lights; 
        miTag *lights; 
    }; 
 

Note that here the order of structure members must match the order in the .mi declaration exactly. For example, suppose a shader has a .mi declaration containing an array of integers:

     declare "myshader" ( array integer "list" )

The C declaration for the shader's parameters is:

     struct myshader {
          int       i_list;
          int       n_list;
          miInteger *list;
     };

A shader that needs to operate on this array, for example printing all the array elements to stdout, would use a loop like this:

     int i;
     for (i=0; i < paras->n_list; i++)
          printf("%d\n", paras->list[paras->i_list + i]);

assuming that paras is the third shader argument and has type struct myshader *. (Note that printf requires that stdio.h is included.) The use of the i_list parameter may seem strange to C programmers, who may wish to hide it in a macro like

     #define EL(array,nel) array[i_##array + nel]

This macro requires an ANSI C preprocessor; K&R preprocessors do not support the ## notation and should use /**/ instead. This macro is not predefined in shader.h.

The reason for this peculiar way of accessing arrays is improved performance. Future versions of mental ray will not use a pointer to the array, as in *list above, but an array of size 1, like list[1]. The array list[1] has space for only one element, because the actual number of array elements depends on the shader instance in the .mi file, which may list an arbitrary number of elements. Since future versions of mental ray (2.0 and later) are based on a virtual shared database that moves pieces of data such as shader parameters transparently from one host to another, no such piece of data may contain a pointer. Pointers would not be valid in another host's virtual address space. Adjusting the pointer on the other host is impractical because it would significantly reduce performance for some scenes, and would require knowledge of the structure layout for finding the pointers that may not be available in versions of mental ray not based on a .mi front-end parser. Therefore, the array is appended to the parameter structure, so the entire block can be moved to another host in a single network transfer. It is safe to access the first element of the array, because space for it is always allocated by declarations such as list[1], but the second is a problem because in a C declaration like

     struct myshader {
          int       i_list;
          int       n_list;
          miInteger list[1];
          miScalar  factor;
          miBoolean bool;
     };

the second element, list[1], occupies the same address as factor, and the third overlays bool. The situation becomes more complex for arrays of structures. The solution is to put the value of the first element after the last ``regular'' shader parameter, bool in this example, followed by the other element values. This means that the first few C array elements that overlay other parameters must be skipped. The i_ variable tells the shader writer exactly how many. In the example, i_list would be 3. Assuming the following shader instance, used as part of a material, texture, or some other definition requiring a shader call:

     "myshader" (
          "factor"  1.4142136,
          "list"  [ 42, 123, 486921, 777 ],
          "bool"    on
     )

mental ray would arrange the values in memory like this:

     i_list             3
     n_list             4
     list[0]            (declared array slot, skipped)
     factor             1.4142136   (overlaid by list[1])
     bool               on          (overlaid by list[2])
     list[i_list + 0]   42
     list[i_list + 1]   123
     list[i_list + 2]   486921
     list[i_list + 3]   777

This diagram assumes that a miScalar uses four bytes; this may not be true in all versions. If it used eight bytes, four bytes of padding would be inserted before it by mental ray and the C compiler, and i_list would have the value 5.

There is one exception to shader parameter passing that can be a hard-to-find source of errors. If a shader is called with no parameters in the .mi file, using an opening parenthesis directly followed by a closing parenthesis, the shader will receive a zero-sized parameter block instead of a zero-filled parameter block. This is done to support an optimization for shadow shaders: a shadow shader called with no parameters is called with the parameters of the material shader. This reduces memory consumption because the shadow shader and the material shader almost always have the same parameters, which can be quite large. The problem occurs if a shader other than a shadow shader is called with no parameters because there is no material shader whose parameters could be substituted.

The following sections discuss the various types of shaders, and how to write custom shaders of those types. Basic concepts are introduced step by step, including supporting functions and state variables supplied by mental ray. All support functions are summarized at the end of this chapter. All descriptions apply to mental ray 1.9.


Material Shaders

Material shaders are the primary type of shaders. All materials defined in the scene must at least define a material shader. Materials may also define other types of shaders, such as shadow, volume, and environment shaders, which are optional and of secondary importance.

When mental ray casts a visible ray, such as those cast by the camera (called primary rays) or those that are cast for reflections and refractions (collectively called secondary rays), mental ray determines the next object in the scene that is hit by that ray. This process is called intersection testing. For example, when a primary ray cast from the camera through the viewing plane's pixel (100,100) intersects with a yellow sphere, pixel (100, 100) in the output image will be painted yellow. (The actual process is slightly complicated by supersampling, which can cause more than one primary ray to contribute to a pixel.)

The core of mental ray has no concept of ``yellow''. This color is computed by the material shader attached to the sphere that was hit by the ray. mental ray records general information about the sphere object, such as point of intersection, normal vector, transformation matrix etc. in a data structure called the state, and calls the material shader attached to the object. More precisely, the material shader, along with its parameters (called user parameters), is part of the material, which is attached to the polygon or surface that forms the part of the object that was hit by the ray. Objects are usually built from multiple polygons and/or surfaces, each of which may have a different material.

The material shader uses the values provided by mental ray in the state and the variables provided by the .mi file in the user parameters to calculate the color of the object, and returns that color. In the above example, the material shader would return the color yellow. mental ray stores this color in the frame buffer and casts the next primary ray. Note that if the material shader has a bug that causes it to return infinity or NaN (Not a Number) in the result color, the infinity or NaN is stored as 1.0 in integer color frame buffers. This usually results in white pixels in the rendered image. This also applies to subshaders such as texture shaders.

If an appropriate output statement is present (see the scene description chapter), mental ray computes depth, label, and normal-vector frame buffers in addition to the standard color frame buffer. The color returned by the first-generation material shader is stored in the color frame buffer (unless a lens shader exists; lens shaders also have the option of modifying colors). The material shader can control what gets stored in the depth, label, and normal-vector frame buffers by storing appropriate values into state->point.z, state->label, and state->normal, respectively. Depth is the negative Z coordinate.
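
For instance, a material shader that wants its samples to be identifiable in the label frame buffer could overwrite the label before returning (a minimal sketch; the value 42 is arbitrary):

     /* picked up for the label frame buffer if an output statement
      * requested a label channel; state->point and state->normal are
      * already set by mental ray and only need to be overwritten if
      * different values are wanted */
     state->label = 42;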

Material shaders normally do quite complicated computations to arrive at the final color of a point on the object:

(see texture)

  • The user parameters usually contain constant ambient, diffuse, and specular colors and other parameters such as transparency, and optional textures that need to be evaluated to compute the actual values at the intersection point. If textures are present, texture shaders are called by using one of the lookup functions provided by mental ray. Texture shaders are discussed in the next section.

  • The illumination computation sums up the contribution from various light sources listed in the user parameters. To obtain the amount of light arriving from a light source, a light shader is invoked through one of the light trace or sample functions provided by mental ray. Light shaders are discussed in a separate section below. After the illumination computation is finished, the ambient, diffuse, and specular colors have been combined into a single material color (assuming a more conventional material shader).

(see secondary ray)
  • If the material is reflective, transparent, or using refraction, as indicated by appropriate user parameters, the shader must cast secondary rays and apply the result to the material color calculated in the previous step. (Transparency is a variation of refractive transparency where the ray continues in the same direction, while refraction rays may alter the direction based on an index of refraction.) Secondary rays, like primary rays, cause mental ray to do intersection testing and call another material shader if the intersection test hit an object. For this reason, material shaders must be re-entrant. In particular, a secondary refraction or transparency ray will hit the back side of the same object if face both is set in the view and the object is a closed volume.

Note that the user parameters of a material shader are under no obligation to define and use classical parameters like ambient, diffuse, and specular color and reflection and refraction parameters. Here is a full-featured example for the C source of the shader declared in the previous section:

(see mi_call_shader) (see mi_sample_light) (see mi_reflection_dir) (see mi_refraction_dir) (see mi_trace_reflection) (see mi_trace_refraction) (see mi_trace_environment)

     miBoolean my_material(
        miColor            *result,
        miState            *state,
        struct my_material *paras)
     {
        miVector           bump, dir;
        miColor            color;
        int                num;
        miTag              light;
        miScalar           ns;

        /*
         * bump map
         */
        state->tex = state->tex_list[0];
        (void)mi_call_shader((miColor *)&bump,
                             miSHADER_TEXTURE,
                             state, paras->bump);

        if (bump.x != 0 || bump.y != 0) {
            mi_vector_to_object(&state->normal,
                                &state->normal);
            state->normal.x+=bump.x * state->bump_x_list->x
                            +bump.y * state->bump_y_list->x;
            state->normal.y+=bump.x * state->bump_x_list->y
                            +bump.y * state->bump_y_list->y;
            state->normal.z+=bump.x * state->bump_x_list->z
                            +bump.y * state->bump_y_list->z;
            mi_vector_from_object(&state->normal,
                                  &state->normal);
            mi_vector_normalize(&state->normal);
            state->dot_nd = mi_vector_dot(&state->normal,
                                          &state->dir);
        }

        /*
         * illumination
         */
        *result = paras->ambient;
        for (num=0; num < paras->n_lights; num++) {
            miColor    color, sum;
            miInteger  samples = 0;
            miScalar   dot_nl;

            sum.r = sum.g = sum.b = 0;
            light = paras->lights[paras->i_lights + num];
            while (mi_sample_light(&color, &dir, &dot_nl,
                                  state, light, &samples)) {

                 sum.r += dot_nl*paras->diffuse.r * color.r;
                 sum.g += dot_nl*paras->diffuse.g * color.g;
                 sum.b += dot_nl*paras->diffuse.b * color.b;

                 ns = mi_phong_specular(paras->shiny,
                                        state, &dir);
                 sum.r += ns * paras->specular.r * color.r;
                 sum.g += ns * paras->specular.g * color.g;
                 sum.b += ns * paras->specular.b * color.b;
            }
            if (samples) {
                 result->r += sum.r / samples;
                 result->g += sum.g / samples;
                 result->b += sum.b / samples;
            }
        }
        result->a = 1;

        /*
         * reflections
         */
        if (paras->reflect > 0) {
            miScalar f = 1 - paras->reflect;
            result->r *= f;
            result->g *= f;
            result->b *= f;

            mi_reflection_dir(&dir, state);
            if (mi_trace_reflection (&color,state,&dir) ||
                mi_trace_environment(&color,state,&dir)) {

                 result->r += paras->reflect * color.r;
                 result->g += paras->reflect * color.g;
                 result->b += paras->reflect * color.b;
            }
        }

        /*
         * refractions
         */
        if (paras->transparency > 0) {
            miScalar f = 1 - paras->transparency;
            result->r *= f;
            result->g *= f;
            result->b *= f;
            result->a  = f;

            if (mi_refraction_dir(&dir,state,1.0,state->ior)
             && (mi_trace_refraction (&color,state,&dir) ||
                 mi_trace_environment(&color,state,&dir))) {

                 result->r += paras->transparency * color.r;
                 result->g += paras->transparency * color.g;
                 result->b += paras->transparency * color.b;
                 result->a += paras->transparency * color.a;
            }
        }
        return(miTRUE);
     }

Four steps are required for computing the material color in this shader. First, the normal is perturbed by looking up a vector in the vector texture, and using the bump basis vectors to determine the orientation of the perturbation (the lookup always returns an XY vector). The second step loops over all light sources in the light array parameter, adding the contribution of each light according to the Phong equation. In the case of area lights, the light is sampled more than once, until the light sampling function is satisfied.

Finally, reflection and refraction rays are cast if the appropriate parameters are nonzero. In both cases, first the direction vector dir is computed using a built-in function, and a ray is cast in that direction. If either trace function returns miFALSE, indicating that no object was hit, the material's environment map that forms a sphere around the entire scene is evaluated. (Note that if the material has no environment map, the environment map in the state defaults to the environment shader from the view, if present.) When all computations are finished, the calculated color, including the alpha component, is returned in the result parameter. The shader returns miTRUE indicating that the computation succeeded.


Texture Shaders

Texture shaders evaluate a texture and return a color. Textures can either be procedural, for example evaluating a 3D texture based on noise functions or calling other shaders, or they can do an image lookup. The .mi format provides different texture statements for these two types, one with a function call and one with a texture file name. Refer to the scene description for details.

Texture shaders are not first-class shaders: mental ray never calls one by itself and provides no special support for them. Texture shaders are called exclusively by other shaders. There are three ways of calling a texture shader from a material shader or another shader: by simply calling the shader by name like any other C function, by using a built-in convenience function like mi_lookup_color_texture, or by a statement like

(see mi_call_shader)

     mi_call_shader(result, miSHADER_TEXTURE, state, tag);

The tag argument references the texture function. The texture function is a data structure in the scene database that contains a reference to the C function itself, plus a user parameter block that is passed to the texture shader when it is called. All textures listed in the .mi scene description file are installed as texture shaders callable only with the mi_call_shader method, because only then can user parameters be passed. Although the texture shader could also be called directly with a statement such as

     soft_color(result, state, &soft_color_paras);

the caller would have to write the required arguments into the user argument structure soft_color_paras itself; it would not have access to user parameters specified in the .mi file. Also, this call (see shader call tree) does not copy the state, as mi_call_shader does.

Unlike material shaders, texture shaders return a simple color, scalar, or other return value. There are no lighting calculations or secondary rays. This greatly simplifies writing texture shaders. For example, a texture shader that does a simple, non-antialiased lookup in a texture image could be written as:

(see mi_db_access) (see mi_img_get_color) (see mi_db_unpin)

     miBoolean mytexture(
        register miColor    *result,
        register miState    *state,
        struct image_lookup *paras)
     {
        miImg_image         *image;
        int                 xs, ys;

        image = mi_db_access(paras->texture);
        mi_texture_info(paras->texture, &xs, &ys, 0);
        mi_img_get_color(image, result,
                         state->tex.x * (xs - 1),
                         state->tex.y * (ys - 1));
        mi_db_unpin(paras->texture);
        return(miTRUE);
     }

This shader assumes that the texture coordinate can be taken from state->tex, where the caller (usually a material shader) has stored it, probably by selecting a texture coordinate from state->tex_list. A more complicated shader that can properly anti-alias image textures with a simple box filter could look like this:

(see tag type) (see mi_db_type) (see mi_call_shader) (see mi_db_access) (see mi_img_get_color) (see mi_db_unpin)

     miBoolean mytexture2(
        register miColor    *result,
        register miState    *state,
        struct image_lookup *paras)
     {
        miImg_image         *image;
        int                 xs, ys;
        miColor             col00, col01, col10, col11;
        register int        x, y;
        register miScalar   u, v, nu, nv;

        image = mi_db_access(paras->texture);
        mi_texture_info(paras->texture, &xs, &ys, 0);
        x = u = state->tex.x * (xs - 2);
        y = v = state->tex.y * (ys - 2);
        u -= x;
        v -= y;
        nu = 1 - u;
        nv = 1 - v;

        mi_img_get_color(image, &col00, x,   y);
        mi_img_get_color(image, &col01, x+1, y);
        mi_img_get_color(image, &col10, x,   y+1);
        mi_img_get_color(image, &col11, x+1, y+1);

        result->r = nv * (nu * col00.r + u * col01.r) +
                     v * (nu * col10.r + u * col11.r);
        result->g = nv * (nu * col00.g + u * col01.g) +
                     v * (nu * col10.g + u * col11.g);
        result->b = nv * (nu * col00.b + u * col01.b) +
                     v * (nu * col10.b + u * col11.b);
        result->a = nv * (nu * col00.a + u * col01.a) +
                     v * (nu * col10.a + u * col11.a);

        mi_db_unpin(paras->texture);
        return(miTRUE);
     }

The implementation of the body of this shader is equivalent to the built-in mi_lookup_color_texture function when called with paras->texture, except that the built-in function also recognizes when the texture is a shader, and calls mi_call_shader in that case.
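
For comparison, the same lookup written with the built-in function might look like this (a sketch; it passes the coordinate from state->tex that the shaders above use):

     return(mi_lookup_color_texture(result, state,
                                    paras->texture, &state->tex));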

This shader can further be extended by applying texture transformations to state->tex before it is used for the lookup, for example for rotated, scaled, repeating, or cropped textures. The shader may also decide that a scaled-down texture was missed, and return miFALSE. The material shader must then skip this texture if mi_call_shader returns miFALSE; the built-in SOFTIMAGE material shader does this.
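
As an illustration of such a transformation, the following fragment wraps the coordinate for a repeating texture before the lookup (a minimal sketch; repeat_u and repeat_v are hypothetical shader parameters, and floor requires math.h):

     miVector t;

     t = state->tex_list[0];
     t.x *= paras->repeat_u;         /* hypothetical repeat factors */
     t.y *= paras->repeat_v;
     t.x -= floor(t.x);              /* wrap into the range [0,1)   */
     t.y -= floor(t.y);
     state->tex = t;                 /* then look up as shown above */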

Both of the above shaders have user parameters that consist of a single texture. Textures always have type miTag. Image file textures are read in by the translator and provided as a tag.


Volume Shaders

Volume shaders may be attached to the view or to a material. They modify the color returned from an intersection point to account for the distance the ray traveled through a volume. The most common application for volume shaders is atmospheric fog effects; for example, a simple volume shader may simulate fog by fading the input color to white depending on the ray distance. By definition, the distance dist given in the state is 0.0 and the intersection point is undefined if the ray has infinite length.

(see shader call tree) Volume shaders are normally called in three situations. When a material shader returns, the volume shader that the material shader left in the state->volume variable is called, without copying the state, as if it had been called as the last operation of the material shader. Copying the state is not necessary because the volume shader does not return to the material shader, so it is not necessary to preserve any variables in the state.

Volume shaders are also called when a light shader has returned; in this case the volume shader state->volume is called once for the entire distance from the light source to the illuminated point (i.e., to the point that caused the material shader that sampled the light to be called). Some volume shaders may decide that they should not apply to such light rays; this can be done by returning immediately if the state->type variable is miRAY_LIGHT. Finally, volume shaders are called after an environment shader was called. Note that if a volume shader is called after the material, light, or other shader, the return value of that other shader is discarded and the return value of the volume shader is used. The reason is that a volume shader can substitute a non-black color even if the original shader has given up. Volume shaders return miFALSE if no light can pass through the given volume, and miTRUE if there is a non-black result color.

(see material shader) Material shaders have two separate state variables dealing with volumes, volume and refraction_volume. If the material shader casts a refraction or transparency ray, the tracing function will copy the refraction volume shader, if there is one, to the volume shader after copying the state. This means that the next intersection point finds the refraction volume in state->volume, which effectively means that once the ray has entered an object, that object's interior volume shader is used. However, the material shader is responsible for detecting when a refraction ray exits an object, and for overwriting state->refraction_volume with an appropriate outside volume shader, such as state->camera->volume, or a volume shader found by following the state->parent links.
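
A hedged sketch of this handling in a material shader, before the refraction ray is cast (how the exit condition is detected is left to the shader and is shown here only as an assumed flag):

     /* leaves_object is assumed to be a miBoolean set by the shader's
      * own test for whether the refraction ray exits the object */
     if (leaves_object)         /* restore an outside volume shader */
          state->refraction_volume = state->camera->volume;
     mi_trace_refraction(&color, state, &dir);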

Since volume shaders modify a color calculated by a previous material shader, environment shader, or light shader, they differ from these shaders in that they receive an input color in the result argument that they are expected to modify. A very simple fog volume shader could be written as:

     miBoolean myfog(
        register miColor      *result,
        register miState      *state,
        register struct myfog *paras)
     {
        register miScalar     fade;

        if (state->type == miRAY_LIGHT)
             return(miTRUE);

        fade = state->dist > paras->maxdist
                   ? 1.0
                   : state->dist / paras->maxdist;

        result->r = fade     * paras->fogcolor.r
                  + (1-fade) * result->r;
        result->g = fade     * paras->fogcolor.g
                  + (1-fade) * result->g;
        result->b = fade     * paras->fogcolor.b
                  + (1-fade) * result->b;
        result->a = fade     * paras->fogcolor.a
                  + (1-fade) * result->a;

        return(miTRUE);
     }

This shader linearly fades the input color to paras->fogcolor (probably white) within paras->maxdist internal space units. (see atmosphere) Objects more distant are completely lost in fog. The length of the ray to be modified can be found in state->dist, its start point in state->org, and its end point in state->point. Because of the miRAY_LIGHT check, this example shader does not apply to light rays: light from light sources penetrates fog of any depth. In this case, the shader returns miTRUE anyway because the shader did not fail; it merely decided not to apply fog.

If this shader is attached to the view, the atmosphere surrounding the scene will contain fog. Every state->volume will inherit this view volume shader, until a refraction or transparency ray is cast. (see shader call tree) The ray will copy the material's volume shader, state->refraction_volume, if there is one, to state->volume, and the ray is now assumed to be in the object. If the material has no volume shader to be copied, the old volume shader will remain in place and will be inherited by subsequent rays.

Some volume shaders employ ray marching techniques to sample lights from empty space, to achieve effects such as visible light beams. Before such a shader calls mi_sample_light, it should store 0 in state->pri to inform mental ray that there is no primitive to illuminate, and to not attempt certain optimizations such as backface elimination. Shaders other than volume shaders may do this too, but must restore pri before returning.
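
A hedged sketch of the bookkeeping described above (the exact type of state->pri is not shown here, so it is saved through a generic pointer):

     void *old_pri = (void *)state->pri;

     state->pri = 0;        /* no primitive to illuminate; disables
                             * optimizations such as backface culling */
     /* ... march along the ray, calling mi_sample_light ... */
     state->pri = old_pri;  /* restore; required in non-volume shaders */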


Environment Shaders

Environment shaders provide a color for rays that leave the scene entirely, and for rays that would exceed the trace depth limit. Environment shaders are called automatically by mental ray if a ray leaves the scene, but not when a ray exceeds the trace depth. In the latter case, the shader that tried to cast the ray can call mi_trace_environment itself when the ray-tracing function returns miFALSE:

(see mi_reflection_dir) (see mi_trace_reflection) (see mi_trace_environment)

     mi_reflection_dir(&dir, state);
     if (mi_trace_reflection (&color, state, &dir) ||
         mi_trace_environment(&color, state, &dir))
             /* use the returned color */

This code fragment was taken from the example material shader in the section on materials above. If the mi_trace_reflection call fails, call mi_trace_environment; if that also fails, do not use the returned color. Environment shaders, like any other shader, may return miFALSE to inform the caller that the environment lookup failed.

In both the explicit case and the automatic case (when a ray cast by a function call such as mi_trace_reflection leaves the scene without intersecting with any object) mental ray calls the environment shader found in state->environment. In primary rays, this variable is initialized with the global environment shader in the view (also found in state->camera->environment). (see shader call tree) Subsequent material shaders get the environment defined in the material if present, or the view environment otherwise. Material shaders never inherit the environment from the parent shader, they always use the environment in the material or the view. All other types of shaders inherit the environment from the parent shader.

Here is an example environment shader that uses a texture that covers an infinite sphere around the scene:

    miBoolean myenv(
        register miColor      *result,
        register miState      *state,
        register struct myenv *paras)
    {
        register miScalar     theta;
        miVector              coord;

        theta = fabs(state->dir.z)*HUGE < fabs(state->dir.x)
                ? state->dir.x > 0
                        ? 1.5*M_PI
                        : 0.5*M_PI
                : state->dir.z > 0
                        ? 1.0*M_PI + atan(state->dir.x /
                                          state->dir.z)
                        : 2.0*M_PI + atan(state->dir.x /
                                          state->dir.z);
        if (theta > 2 * M_PI)
             theta -= 2 * M_PI;

        coord.x = 1 - theta / (2 * M_PI);
        coord.y = 0.5 * (state->dir.y + 1.0);
        coord.z = 0;

        state->tex = coord;
        return(mi_call_shader(result, miSHADER_TEXTURE,
                               state, paras->texture));
     }

This shader gets a single parameter in its user parameter structure, a miTag for a texture shader. The texture is evaluated by storing the texture coordinate in state->tex and calling the texture shader with mi_call_shader. For a description of texture shaders and how to call them, see the Texture section above.
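
A matching .mi declaration could look like this (a sketch; the parameter name corresponds to the texture member of the assumed myenv parameter structure):

     declare "myenv" (color texture "texture")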


Light Shaders

Light shaders are called from other shaders by sampling a light using the mi_sample_light or mi_trace_light functions, which perform some calculations and then call the given light shader. mi_sample_light may also request to be called more than once if an area light source is to be sampled, at locations selected by the sampling algorithm chosen with the -mc or -qmc command-line options. For an example of using mi_sample_light, see the section on material shaders above. mi_trace_light performs less exact shading for area lights, and is provided for backwards compatibility only.

The light shader computes the amount of light contributed by the light source to a previous intersection point, stored in state->point. The calculation may be based on the direction state->dir to that point, and the distance state->dist from the light source to that point. There may also be user parameters that specify directional and distance attenuation. Directional lights have no location; state->dist is undefined in this case.

Light shaders are also responsible for shadow casting. Shadows are computed by finding all objects that are in the path of the light from the light source to the illuminated intersection point. This is done in the light shader by casting ``shadow rays'' after the standard light color computation including attenuation is finished. Shadow rays are cast from the light source back towards the illuminated point, in the same direction as the light ray. Every time an occluding object is found, that object's shadow shader is called, if it has one, which reduces the amount of light based on the object's transparency and color. If an occluding object is found that has no shadow shader, it is assumed to be opaque, so no light from the light source can reach the illuminated point. For details on shadow shaders, see the next section.

Here is an example for a simple point light that supports no attenuation, but casts shadows:

(see mi_trace_shadow)

     miBoolean mypoint(
        register miColor        *result,
        register miState        *state,
        register struct mypoint *paras)
     {
        *result = paras->color;
        return(mi_trace_shadow(result, state));
     }

The user parameters are assumed to contain the light color. The shadows are calculated simply by giving the shadow shaders of all occluding objects the chance to reduce the light from the light source, by calling mi_trace_shadow. The shader returns miTRUE if some light reaches the illuminated point.

The point light can be turned into a spot light by adding directional attenuation parameters for the inner and outer cones and a spot direction parameter to the user parameters, and by changing the shader so that it reduces the light intensity if the illuminated point falls between the inner and outer cones, and turns the light off if the point does not fall into the outer cone at all:

(see mi_trace_shadow)

     miBoolean myspot(
        register miColor           *result,
        register miState           *state,
        register struct myspot     *paras)
     {
        register miScalar          d, t;

        *result = paras->color;

        d = mi_vector_dot(&state->dir, &paras->direction);
        if (d <= 0)
             return(miFALSE);
        if (d < paras->outer)
             return(miFALSE);
        if (d < paras->inner) {
             t = (paras->outer - d) /
                 (paras->outer - paras->inner);
             result->r *= t;
             result->g *= t;
             result->b *= t;
        }
        return(mi_trace_shadow(result, state));
     }

Again, miFALSE is returned if no illumination takes place, and miTRUE otherwise. Note that none of these light shaders takes the normal at the illuminated point into account; the light shader is merely responsible for calculating the amount of light that reaches (see material shader) that point. The material shader (or other shader) that sampled the light must use the dot_nl value returned by mi_sample_light, and its own user parameters such as the diffuse color, to calculate the actual fraction of light reflected by the material.


Shadow Shaders

As described in the previous section, light shaders may trace a shadow ray from the light source to the point to be illuminated. When this ray hits an occluding object, that object's shadow shader is called, if present. (If the object has no shadow shader, the object is assumed to block all light.) Shadow shaders accept an input color and dim it according to the transparency and color of the occluding object.

If there is more than one occluding object between the light source and the illuminated point, the order in which the shadow shaders of the occluding objects are called is undefined, unless the shadow_sort option in the view is turned on. Shadow shaders that rely on being called in the correct order, the one closest to the light source first, should check that state->shadow_sort is miTRUE, and abort with a fatal error message otherwise, telling the user to turn on shadow sorting.
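
A minimal sketch of such a check, using the state variable named above and the message functions listed at the end of this chapter:

     if (!state->shadow_sort)
          mi_fatal("myshadow: shadow shaders are order-dependent, "
                   "please turn on the shadow_sort option");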

If a new material shader is written, it is often necessary to also write a matching shadow shader. The shadow shader performs a subset of the calculations done in the material shader: it may evaluate textures and transparencies, but it will not sample lights and it will not cast rays. The shader writer can either write a separate shadow shader, or let the material shader double as shadow shader by building the scene such that the material shader appears twice in the material definition. SOFTIMAGE shaders take the latter approach. It relies on the material shader to skip, when called as a shadow shader, all calculations that are needed only for material shading. The shader can find out whether it is called as a material shader or as a shadow shader by checking whether state->type is miRAY_SHADOW: if it is, the shader is being called as a shadow shader. This sharing of shaders pays off only when the texture computations are very complicated, as is the case in SOFTIMAGE materials.

The following shadow shader is a separate shader that attenuates the light that passes through the object based on two user parameters, the diffuse color and the transparency. Material shaders usually also have ambient and specular colors, but the best approach is to pass the diffuse color to shadow shaders because it describes the ``true color'' of the object best. Note that the scene can be arranged such that although the shadow shader is separate from the material shader, (see shader parameters) it still gets a copy of the material shader's user parameters, so the shadow shader can access the ``true'' material parameters. In a .mi file, this is done by declaring the shadow shader with no parameters and naming none in the shadow statement in the material definition (just give ()). This sharing of parameters, even if the shader itself is not shared, saves duplicating a large set of parameters. A sketch of such a material definition fragment follows the shader code below.

     miBoolean myshadow(
        register miColor         *result,
        register miState         *state,
        register struct myshadow *paras)
     {
        register miScalar        opacity;
        register miScalar        f, omf;

        opacity = 1 - paras->transp;
        if (opacity < 0.5) {
             f = 2 * opacity;
             result->r *= f * paras->diffuse.r;
             result->g *= f * paras->diffuse.g;
             result->b *= f * paras->diffuse.b;
        } else {
             f = 2 * (opacity - 0.5);
             omf = 1 - f;
             result->r *= f + omf * paras->diffuse.r;
             result->g *= f + omf * paras->diffuse.g;
             result->b *= f + omf * paras->diffuse.b;
        }
        return(result->r != 0 ||
               result->g != 0 ||
               result->b != 0);
     }
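
In the .mi file, the parameter sharing could be set up as in the following fragment of a material definition (a hedged sketch; "mymtl" is a hypothetical material shader whose declaration includes "diffuse" and "transp" among its parameters):

          "mymtl" (
               "diffuse"  0.7 0.6 0.5,
               "transp"   0.5
          )
          shadow "myshadow" ()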

The org variable in the state always contains the position of the light source responsible for casting the shadow rays; the point state variable contains the point on the shadow-casting object. The dist state variable is the distance to the light source (except for directional lights, which have no origin).


Lens Shaders

Lens shaders are called for primary rays from the camera. The camera is normally a simple pinhole camera. A lens shader modifies the origin and direction of a primary ray from the camera. More than one lens shader may be attached to the camera; each modifies the origin and direction calculated by the previous one. By convention, all rays up to and including the one leaving the last lens are called ``primary rays''. The origin and direction input parameters can be found in the state, in the org and dir variables. The outgoing ray is cast with (see mi_trace_eye) mi_trace_eye, whose return color may be modified before the shader itself returns. Lens shaders are called recursively; a call to mi_trace_eye will call the next lens shader if there is another one.

Here is a sample lens shader that implements a fish-eye lens:

(see mi_trace_eye)

     miBoolean fisheye(
        register miColor  *result,
        register miState  *state,
        register void     *paras)
     {
        register miVector camdir, dir;
        register miScalar x, y, r, t;

        mi_vector_to_camera(state, &camdir, &state->dir);
        t = state->camera->focal / -camdir.z /
                                   (state->camera->aperture/2);
        x = t * camdir.x;
        y = t * camdir.y * state->camera->aspect;
        r = x * x + y * y;
        if (r < 1) {
             dir.x = camdir.x * r;
             dir.y = camdir.y * r;
             dir.z = -sqrt(1 - dir.x*dir.x - dir.y*dir.y);
             return(mi_trace_eye(result, state,
                                 &state->point, &dir));
        } else {
             result->r = result->g =
             result->b = result->a = 0;
             return(miFALSE);
        }
     }

This shader does not take the image aspect ratio into account, and is not physically correct. It merely bends rays away from the camera axis depending on their angle to the camera axis. Rays that fall outside the circle that touches the image edges are set to black (note that alpha is also set to 0). The rays are bent according to the square of the angle, which approaches the physically correct deflection for small angles. This example shader has no user parameters, which is why the type of the paras parameter is void *.


Output Shaders

Output shaders are functions that are run after rendering has finished. They modify the resulting image or images. Typical uses are output filters and compositing operations. Since rendering has completed, the state variables are not available in an output shader; an output shader uses a simple structure called miOutstate:

  type            name             content
 
  int             xres             image X resolution in pixels
  int             yres             image Y resolution in pixels
  miImg_image *   frame_rgba       RGBA color frame buffer
  miImg_image *   frame_z          depth frame buffer
  miImg_image *   frame_n          normal-vector frame buffer
  miImg_image *   frame_label      label frame buffer
  miCamera *      camera           camera
  miRc_options *  options          options
  miMatrix        camera_to_world  world transformation
  miMatrix        world_to_camera  inverse world transformation

All frame buffers have the same resolution of xres * yres pixels. The four frame buffers are passed for use by the frame buffer access functions, mi_img_get_color, mi_img_put_color, etc. For each type of frame buffer, there are functions to retrieve and store a pixel value that accept the frame buffer pointer as their first argument. All output shaders must be declared like any other type of shader, and the same types of arguments can be declared. This includes textures and lights. Nonprocedural textures can be looked up using functions like mi_lookup_color_texture and mi_texture_info, and lights can be looked up with mi_light_info. Since rendering has completed, it is not possible to look up procedural textures or to use tracing functions such as mi_sample_light.

Output shaders are called with two arguments, the output shader state and the shader parameters. There is no result argument like for the other types of shaders; output shaders do not return a value. By convention, they should still be declared as miBoolean, although the return value is discarded by mental ray. Here is a typical output shader C declaration:

     miBoolean my_output(
        miOutstate       *state,
        struct my_output *paras)

The my_output parameter data structure is defined normally, matching the declaration of the my_output shader in the .mi file. Here is a simple output shader that depth-fades the rendered image towards total transparency. First, the C code:

     #include <shader.h>

     struct out_depthfade {
        miScalar   near;   /* no fade closer than this */
        miScalar   far;    /* farther objects disappear */
     };

     miBoolean out_depthfade(
        register miOutstate   *state,
        struct out_depthfade  *paras)
     {
        register int          x, y;
        miColor               color;
        miScalar              depth, fade;

        for (y=0; y < state->yres; y++)
             for (x=0; x < state->xres; x++) {
                  mi_img_get_color(state->frame_rgba,
                                   &color, x, y);
                  mi_img_get_depth(state->frame_z,
                                   &depth, x, y);

                  if (depth >= paras->far || depth == 0.0)
                       color.r=color.g=color.b=color.a = 0;
                  else if (depth > paras->near) {
                       fade = (paras->far - depth) /
                              (paras->far - paras->near);
                       color.r *= fade;
                       color.g *= fade;
                       color.b *= fade;
                       color.a *= fade;
                  }
                  mi_img_put_color(state->frame_rgba,
                                   &color, x, y);
             }
        return(miTRUE);
     }

This shader is stored in a file out_depthfade.c and installed in the .mi file with a code statement and a declaration:

     code "out_depthfade.c"
     declare "out_depthfade" (scalar "near", scalar "far")

This declaration should appear before the frame statement of the first frame using this shader. The shader is referenced in an output statement in the view:

     view
         output "rgba,z" "out_depthfade"
                              ("near" 10.0, "far" 100.0)
         output "pic" "filename.pic"
         min samples 0
         max samples 0
         ...

Note that the output shader statement appears before the output file statement. The output shader must get a chance to change the output image before it is written to the file filename.pic. It is possible to insert another file output statement before the output shader statement; in this case two files would be written, one with and one without depth fading.
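
For illustration, a hedged sketch of such a view block (the extra file name unfaded.pic is an example):

     view
         output "pic" "unfaded.pic"          # written before depth fading
         output "rgba,z" "out_depthfade"
                              ("near" 10.0, "far" 100.0)
         output "pic" "filename.pic"         # written after depth fading
         ...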

Note also that the output shader has a type string "rgba,z". This string tells mental ray to render both an RGBA and a Z (depth) frame buffer. The RGBA buffer would have been rendered anyway because the file output statement requires it, but the depth buffer would not have been rendered without the z in the type string. Without it, all depth values returned by mi_img_get_depth would be 0.0.

The min samples parameter should be set to 0 or greater, because otherwise there might be fewer than one sample per pixel, leaving gaps in the depth frame buffer. mental ray interpolates only the color frame buffer to ``bridge'' unsampled pixels; depths, normals, and labels cannot be interpolated by their nature. Either way, this shader does not anti-alias very well because there is only one depth value per pixel.

The shader makes pixels for which a depth of 0.0 is returned totally transparent, so that the edges of objects with no other object behind them fade correctly. By definition, mi_img_get_depth returns 0.0 for a position x, y if no object was hit at that pixel. This may happen at anti-aliased edges because the last subsample shot for that pixel may happen to miss the object, and only the last sample for a pixel is stored in the depth frame buffer.


Functions for Shaders

mental ray 1.9 makes a range of functions available to shaders that can be used to access data, cast rays, look up images, and perform standard mathematical computations. The functions are grouped by the module that supplies them. The shader writer may also use C library functions, but it is very important to include <stdio.h> and <math.h> if printing functions such as printf or math functions such as sin are used. Not including these headers may abort rendering at runtime, even though the compiler did not complain. All shaders must include the standard mental ray header file, shader.h.
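
A typical include block at the top of a shader source file therefore looks like this:

     #include <stdio.h>     /* only if printf etc. are used      */
     #include <math.h>      /* only if sin, sqrt etc. are used   */
     #include <shader.h>    /* mental ray shader interface       */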

Here is a summary of functions provided by mental ray:
RC Functions

  type      name                  arguments
 
  miBoolean mi_trace_eye          *result, *state, *org, *dir
  miBoolean mi_trace_reflection   *result, *state, *dir
  miBoolean mi_trace_refraction   *result, *state, *dir
  miBoolean mi_trace_transparent  *result, *state
  miBoolean mi_trace_environment  *result, *state, *dir
  miBoolean mi_trace_light        *result, *dir, *nl, *st, i
  miBoolean mi_sample_light       *result, *dir, *nl, *st, i, *s
  miBoolean mi_trace_shadow       *result, *state
  miBoolean mi_call_shader        *result, type, *state, tag


DB Functions

  type      name          arguments
 
  int       mi_db_type    tag
  void *    mi_db_access  tag
  void      mi_db_unpin   tag
  void      mi_db_flush   tag


IMG Functions

  type      name                arguments
 
  void      mi_img_put_color    *image, *color, x, y
  void      mi_img_get_color    *image, *color, x, y
  void      mi_img_put_scalar   *image, scalar, x, y
  void      mi_img_get_scalar   *image, *scalar, x, y
  void      mi_img_put_vector   *image, *vector, x, y
  void      mi_img_get_vector   *image, *vector, x, y
  void      mi_img_put_depth    *image, depth, x, y
  void      mi_img_get_depth    *image, *depth, x, y
  void      mi_img_put_normal   *image, *normal, x, y
  void      mi_img_get_normal   *image, *normal, x, y
  void      mi_img_put_label    *image, label, x, y
  void      mi_img_get_label    *image, *label, x, y


Math Functions

  type      name                   arguments
 
  void      mi_vector_neg          *r
  void      mi_vector_add          *r, *a, *b
  void      mi_vector_sub          *r, *a, *b
  void      mi_vector_mul          *r, f
  void      mi_vector_div          *r, f
  void      mi_vector_prod         *r, *a, *b
  miScalar  mi_vector_dot          *a, *b
  miScalar  mi_vector_norm         *a
  void      mi_vector_normalize    *r
  void      mi_vector_min          *r, *a, *b
  void      mi_vector_max          *r, *a, *b
  miScalar  mi_vector_det          *a, *b, *c
  miScalar  mi_vector_dist         *a, *b
  void      mi_matrix_ident        r
  miBoolean mi_matrix_invert       r
  void      mi_matrix_prod         r, a, b
  void      mi_matrix_rotate       a, x, y, z
  double    mi_random
  void      mi_point_transform     *r, *a, m
  void      mi_vector_transform    *r, *a, m
  void      mi_point_to_world      *state, *r, *v
  void      mi_point_to_camera     *state, *r, *v
  void      mi_point_to_object     *state, *r, *v
  void      mi_point_from_world    *state, *r, *v
  void      mi_point_from_camera   *state, *r, *v
  void      mi_point_from_object   *state, *r, *v
  void      mi_vector_to_world     *state, *r, *v
  void      mi_vector_to_camera    *state, *r, *v
  void      mi_vector_to_object    *state, *r, *v
  void      mi_vector_from_world   *state, *r, *v
  void      mi_vector_from_camera  *state, *r, *v
  void      mi_vector_from_object  *state, *r, *v


Auxiliary Functions

  type      name                      arguments
 
  void      mi_reflection_dir         *dir, *state
  miBoolean mi_refraction_dir         *dir, *state, in, out
  double    mi_fresnel                n1, n2, t1, t2
  double    mi_fresnel_reflection     *state, *i, *o
  double    mi_phong_specular         spec, *state, *dir
  double    mi_blinn_specular         spec, *state, *dir
  void      mi_fresnel_specular       *ns, *ks, s, *st, *dir, *in, *out
  double    mi_spline                 t, n, *ctl
  double    mi_noise_1d               p
  double    mi_noise_2d               u, v
  double    mi_noise_3d               *p
  double    mi_noise_1d_grad          p, *g
  double    mi_noise_2d_grad          u, v, *gu, *gv
  double    mi_noise_3d_grad          *p, *g
  miBoolean mi_lookup_color_texture   *col, *state, tag, *v
  miBoolean mi_lookup_scalar_texture  *scal, *state, tag, *v
  miBoolean mi_lookup_vector_texture  *vec, *state, tag, *v
  void      mi_light_info             tag, *org, *dir, **paras
  void      mi_texture_info           tag, *xres, *yres, **paras
  miBoolean mi_tri_vectors            *state, wh, nt, **a, **b, **c


Memory Allocation

  type      name               arguments
 
  void *    mi_mem_allocate    size
  void *    mi_mem_reallocate  mem, size
  void      mi_mem_release     mem
  void      mi_mem_check
  void      mi_mem_dump        mod


Thread Parallelism and Semaphores

  type      name             arguments
 
  void      mi_init_lock     *lock
  void      mi_delete_lock   *lock
  void      mi_lock          lock
  void      mi_unlock        lock
  int       mi_par_localvpu
  int       mi_par_nthreads


Messages and Errors

  type      name          arguments
 
  void      mi_fatal      *message, ...
  void      mi_error      *message, ...
  void      mi_warning    *message, ...
  void      mi_info       *message, ...
  void      mi_progress   *message, ...
  void      mi_debug      *message, ...
  void      mi_vdebug     *message, ...

Note that many of these functions return double instead of miScalar, or have double parameters. This allows these functions to be used from shaders written in classic (K&R) C, which always promotes floating-point arguments to double.


RC Functions

These are the functions supplied by the Rendering Core of mental ray, RC. All following trace functions return miTRUE if any subsequent call of a shader returned miTRUE to indicate presence of illumination. Otherwise no illumination is present and miFALSE is returned. (see shader call tree) All trace functions derive the state of the ray to be cast from the given state of the parent ray. The state is always copied, and the given state is not modified. This state is passed to subsequent calls of shaders, which are eventually a lens (see lens shader), material (see material shader), light (see light shader), or environment shader, and in the case of material, light, and environment shaders optionally a volume shader. The volume shader gets the same state as the previous (material) shader. Note that all point and direction vectors passed as arguments to tracing functions must be in internal space.

     miBoolean mi_trace_eye(
                miColor         *result,
                miState         *state,
                miVector        *origin,
                miVector        *direction)

casts an eye ray from origin in direction, or calls the next lens shader. The allowed origin and direction values are restricted when using ray classification. If scanline is turned on and state->scanline is not zero, origin and direction must be the same as in the initial call of mi_trace_eye. Lens shaders may not modify them. Origin and direction must be given in internal space.

     miBoolean mi_trace_reflection(
                miColor         *result,
                miState         *state,
                miVector        *direction)

casts a reflection ray from state->point in the given direction. It returns miFALSE if the trace depth has been exhausted. If no intersection is found, the optional environment shader is called. The direction must be given in internal space.

     miBoolean mi_trace_refraction(
                miColor         *result,
                miState         *state,
                miVector        *direction)

casts a refraction ray from state->point in the given direction. It returns miFALSE if the trace depth has been exhausted. If no intersection is found, the optional environment shader is called. Before this function casts the refraction ray, after copying the state, it copies state->refraction_volume to state->volume because the ray is now assumed to be ``inside'' the object, so the volume shader that describes the inside should be used to modify the ray while travelling inside the object. It is the caller's responsibility to set state->refraction_volume to the camera's volume shader state->camera->volume or some other volume shader if it determines that the ray has now left the object. The direction must be given in internal space.

     miBoolean mi_trace_transparent(
                miColor         *result,
                miState         *state)

does the same as mi_trace_refraction with dir == state->dir (that is, no change in the ray direction) but may be executed faster if the parent ray is an eye ray. It also works when ray tracing is turned off. If the ray direction does not change (because no index of refraction or similar modification is applied), it is more efficient to cast a transparency ray than a refraction ray. Like mi_trace_refraction, this function copies the refraction volume shader because the ray is now assumed to be inside the object.

     miBoolean mi_trace_environment(
                miColor         *result,
                miState         *state,
                miVector        *direction)

casts a ray into the environment. The trace depth is not incremented or checked. The environment shader in the state is called to evaluate the returned color. The direction must be given in internal space.

     miBoolean mi_sample_light(
                miColor         *result,
                miVector        *dir,
                miScalar        *dot_nl,
                miState         *state,
                miTag           light_inst,
                miInteger       *samples)

(see light shader) casts a light ray from the light source to the intersection point, causing the light source's light shader to be called. The light shader may then calculate shadows by casting a shadow ray to the intersection point. This may cause shadow shaders of occluding objects to be called, and will also cause the volume shader of the state to be called, if there is one. Before the light is sampled, the direction from the current intersection point in the state to the light and the dot product of this direction and the normal in the state are calculated and returned in dir and dot_nl if these pointers are nonzero. The direction is returned in internal space. The light instance to sample must be given in light_inst. samples must point to an integer that is initialized to 0. mi_sample_light must be called in a loop until it returns miFALSE. *samples will then contain the total number of light samples taken; it may be larger than 1 for area light sources.

For every call in the loop, a different dir and dot_nl is returned because the rays go to different points on the area light source. The caller is expected to use these variables, the returned color, and other variables such as diffuse and specular colors from the shader parameters to compute a color. These colors are accumulated until mi_sample_light returns miFALSE and the loop terminates. The caller then divides the accumulated color by the number of samples (*samples) if it is greater than 0, effectively averaging all the intermediate results.

Multiple samples are controlled by the -mc or -qmc command-line options. See the section on material shaders for an example. When casting light rays with mi_sample_light, mental ray may check whether the primitive's normal is pointing away from the light and ignore the light in this case. For this reason some shaders, such as ray-marching volume shaders, should assign 0 to state->pri first, and restore it before returning.

     miBoolean mi_trace_light(
                miColor         *result,
                miVector        *dir,
                miScalar        *dot_nl,
                miState         *state,
                miTag           light_inst)

(see light shader) is a simpler variation of mi_sample_light that does not keep a sample counter, and is not called in a loop. It is equivalent to mi_sample_light except for area light sources. Area light sources must be sampled multiple times with different directions.

     miBoolean mi_trace_shadow(
                miColor * const result,
                miState * const state)

(see shadow ray) computes shadows for the given light ray. This function is normally (see light shader) called from a light shader to take into account occluding objects that prevent some or all of the light emitted by the light source from reaching the illuminated (see material shader) point (whose material shader has probably called the light shader). The result color is modified by the shadow shaders that are called if occluding objects are found.

     miBoolean mi_call_shader(
                miColor * const result,
                miShader_type   type,
                miState * const state,
                miTag           shader)

This function calls the shader specified by the tag shader. The tag is normally a texture shader or light shader or some other type of shader found in the calling shader's parameter list. The caller must pass its own state and the shader type, which must be one of miSHADER_LENS, miSHADER_MATERIAL, miSHADER_LIGHT, miSHADER_SHADOW, miSHADER_ENVIRONMENT, miSHADER_VOLUME, and miSHADER_TEXTURE. The sequence of operations is:

1.
shader is written into state->shader.

2.
If the called shader is dynamically loaded and has an initialization function that has not been called yet, it is called now, with state as its only argument.

3.
The shader referenced by shader is called with three arguments: the result pointer, the given state, and the shader parameters retrieved from shader.

4.
After the shader returns, state->shader is restored to its old value.

The return value of the shader is returned. If the shader expects a result argument of a type other than miColor, the pointer must be cast to miColor when passed to mi_call_shader. Note that the shader tag references an entire function call, as defined in the .mi file with a texture, light, or some other statement combining shading function and shader parameters; shader is not just a simple pointer or reference to a C function.

DB Functions

Database access functions can be used to convert tags into pointers, and to get the type of a tag. The scene database contains only tags and no pointers at all, because pointers are not valid on other hosts. All DB functions are available in all shaders, including output shaders.

     int mi_db_type(
                const miTag tag)

Return the type of a database item, or 0 if the given tag does not exist. Valid types that are of interest in shaders are:

 miSCENE_FUNCTION   Function to call, such as a shading function
 miSCENE_MATERIAL   Material containing shaders and flags 
 miSCENE_LIGHT      Light source 
 miSCENE_IMAGE      Image in memory 
 

The most important are functions and images, because general-purpose texture shaders need to distinguish procedural textures (see procedural texture) from image textures (see image texture). See the texture shader example above.

     void *mi_db_access(
                const miTag tag)

Look up the tag in the database, pin it, and return a pointer to the referenced item. Pinning means that the database item is guaranteed to stay in memory at the same location until the item is explicitly unpinned. Rendering aborts if the given tag does not exist, so mi_db_access always returns a valid pointer. If an item is accessed twice, it must be unpinned twice; the pin count is a counter, not a flag. The maximum number of pins is 255.

     void mi_db_unpin(
                const miTag tag)

Every tag that was accessed with mi_db_access must be unpinned with this function when the pointer is no longer needed. Failure to unpin can cause a pin overflow, which aborts rendering. After unpinning, the pointer may no longer be used.
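A minimal sketch of the access/unpin pattern (the helper name is illustrative), reading one texel from an image texture whose tag might come from a shader parameter:

#include "shader.h"

/* read the pixel at (x, y) of the image texture referenced by tag */
static void read_texel(miColor *col, miTag tag, int x, int y)
{
    miImg_image *img;

    if (mi_db_type(tag) != miSCENE_IMAGE) {     /* not an image in memory */
        col->r = col->g = col->b = col->a = 0.0;
        return;
    }
    img = (miImg_image *)mi_db_access(tag);     /* pin the image */
    mi_img_get_color(img, col, x, y);           /* read one texel */
    mi_db_unpin(tag);                           /* always unpin again */
}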

     void mi_db_flush(
                const miTag tag)

Normally, a shader does not use a pointer obtained with mi_db_access to write to a database item. If it does, other hosts on the network may still hold stale copies, which must explicitly be deleted by calling this function. This function must be used with great care; it is an error to flush an item that another shader has pinned. For this reason, it is not generally possible to pass information back and forth between shaders or hosts by writing into database items and then flushing them.

IMG Functions

The IMG module of mental ray provides functions that deal with images. There are functions to read and write image files in various formats, and to access in-core frame buffers such as image textures (see texture). The functions that access frame buffers are listed first. They are typically used by texture shaders (see texture shader), which can obtain an image pointer by calling mi_db_access with the image tag as an argument. All of these functions do nothing or return defaults if the image pointer is 0, and they do not check whether the frame buffer has the correct data type. All of them are available in all shaders, including output shaders.

     void mi_img_put_color(
                miImg_image *image,
                miColor     *color,
                int         x,
                int         y)

Store the color color in the color frame buffer image at coordinate x y, after performing desaturation or color clipping, gamma correction, dithering, and compensating for premultiplication. This function works with 1, 2, or 4 components per pixel, and with 8 or 16 bits per component. The normal range for the R, G, B, and A color components is [0, 1] inclusive.

     void mi_img_get_color(
                miImg_image *image,
                miColor     *color,
                int         x,
                int         y)

This is the reverse of mi_img_put_color; it returns the color stored in a frame buffer at the specified coordinates. Gamma compensation and premultiplication, if enabled by mi_img_mode, are applied in reverse. The returned color may differ from the original color given to mi_img_put_color because of color clipping and color quantization.

     void mi_img_put_scalar (
                miImg_image *image,
                float       scalar,
                int         x,
                int         y)

Store the scalar scalar in the scalar frame buffer image at coordinate x y, after clipping to the range [0, 1]. Scalars are stored as 8-bit or 16-bit unsigned values. This function is intended for scalar texture files of type miIMG_S or miIMG_S_16.

     void mi_img_get_scalar (
                miImg_image *image,
                float       *scalar,
                int         x,
                int         y)

This is the reverse of mi_img_put_scalar; it returns the scalar stored in a frame buffer at the specified coordinates, converted to a scalar in the range [0, 1]. If the frame buffer pointer is 0, the scalar is set to 0.

     void mi_img_put_vector (
                miImg_image *image,
                miVector    *vector,
                int         x,
                int         y)

Store the X and Y components of the vector vector in the vector frame buffer image at coordinate x y, after clipping to the range [-1, 1]. Vectors are stored as 16-bit signed values. This function is intended for vector texture files of type miIMG_VTA or miIMG_VTS.

     void mi_img_get_vector (
                miImg_image *image,
                miVector    *vector,
                int         x,
                int         y)

This is the reverse of mi_img_put_vector; it returns the UV vector stored in a frame buffer at the specified coordinates, with the coordinates converted to the range [-1, 1]. The Z component of the vector is always set to 0. If the frame buffer pointer is 0, all components are set to 0.

     void mi_img_put_depth(
                miImg_image *image,
                float       depth,
                int         x,
                int         y)

Store the depth value depth in the frame buffer image at the coordinates x y. The depth value is not changed in any way. The standard interpretation of the depth is the (positive) Z distance of objects relative to the camera. mental ray uses this function internally to store -state->point.z (in camera space) if the depth frame buffer is enabled with an appropriate output statement.

     void mi_img_get_depth(
                miImg_image *image,
                float       *depth,
                int         x,
                int         y)

Read the depth value into the float pointed to by depth from frame buffer image at the coordinates x y. If the image pointer is 0, the FLT_MAX constant from float.h is returned.

     void mi_img_put_normal(
                miImg_image *image,
                miVector    *normal,
                int         x,
                int         y)

Store the normal vector normal in the frame buffer image at the coordinates x y. The normal vector is not changed in any way.

     void mi_img_get_normal(
                miImg_image *image,
                miVector    *normal,
                int         x,
                int         y)

Read the normal vector normal from frame buffer image at the coordinates x y. If the image pointer is 0, return a null vector.

     void mi_img_put_label(
                miImg_image *image,
                miUint      label,
                int         x,
                int         y)

Store the label value label in the frame buffer image at the coordinates x y. The label value is not changed in any way.

     void mi_img_get_label(
                miImg_image *image,
                miUint      *label,
                int         x,
                int         y)

Read the label value to the unsigned integer pointed to by label from frame buffer image at the coordinates x y. If the image pointer is 0, return 0.

Math Functions

Math functions include common vector and matrix operations. More specific rendering functions can be found in the next section, Auxiliary Functions.

     void mi_vector_neg(
                miVector    *r)

r := -r

     void mi_vector_add(
                miVector    *r,
                miVector    *a,
                miVector    *b)

r := a + b

     void mi_vector_sub(
                miVector    *r,
                miVector    *a,
                miVector    *b)

r := a - b

     void mi_vector_mul(
                miVector    *r,
                double      f)

r := r * f

     void mi_vector_div(
                miVector    *r,
                double      f)

r := r / f (If f is zero, leave r unchanged.)

     void mi_vector_prod(
                miVector    *r,
                miVector    *a,
                miVector    *b)

r := a × b (the cross product of a and b)

     double mi_vector_dot(
                miVector    *a,
                miVector    *b)

a · b (the dot product of a and b)

     double mi_vector_norm(
                miVector    *a)

|a|

     void mi_vector_normalize(
                miVector    *r)

r := r / |r| (If r is a null vector, leave r unchanged.)

     void mi_vector_min(
                miVector    *r,
                miVector    *a,
                miVector    *b)

r := ( min(ax, bx), min(ay, by), min(az, bz) )   (component-wise minimum)

     void mi_vector_max(
                miVector    *r,
                miVector    *a,
                miVector    *b)

r := ( max(ax, bx), max(ay, by), max(az, bz) )   (component-wise maximum)

     double mi_vector_det(
                miVector    *a,
                miVector    *b,
                miVector    *c)

det(a b c), the determinant of the 3×3 matrix whose columns are a, b, and c (equal to the scalar triple product a · (b × c))

     double mi_vector_dist(
                miVector    *a,
                miVector    *b)

|a - b|

     void mi_matrix_ident(
                miMatrix     r)

R :=  1 0 0 0
      0 1 0 0
      0 0 1 0
      0 0 0 1    (the 4×4 identity matrix)

     miBoolean mi_matrix_invert(
                miMatrix     r,
                miMatrix     a)

R := A^-1 (Returns miFALSE if the matrix cannot be inverted.)

     void mi_matrix_prod(
                miMatrix     r,
                miMatrix     a,
                miMatrix     b)

R := A * B

     void mi_matrix_rotate(
                miMatrix        a,
                const double    xrot,
                const double    yrot,
                const double    zrot)

Create a rotation matrix a that rotates about the X axis by xrot, then about the Y axis by yrot, then about the Z axis by zrot, with all angles given in radians.

     double mi_random(void)

Return a random number in the range [0, 1).

     void mi_point_transform(
                miVector    *r,
                miVector    *v,
                miMatrix     m)

r := v * M, where M is the matrix m and the point v is multiplied as a row vector from the left.

All fourteen transformation functions may be called with identical pointers r and v; the vector is then transformed in place. If the result of one of these transformations is a homogeneous vector whose w component is not equal to 1.0, the result vector's x, y, and z components are divided by w. For the multiplication, a w component of 1.0 is implicitly appended to the v vector. If the matrix m is a null pointer, no transformation is done and v is copied to r.

     void mi_vector_transform(
                miVector    *r,
                miVector    *v,
                miMatrix     m)

Same as mi_point_transform, but ignores the translation row of the matrix.
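As a small sketch of these two functions, the following helper (an illustrative name) builds a rotation matrix and applies it to a direction vector in place:

#include "shader.h"
#include <math.h>

/* rotate the direction vector v in place by 90 degrees around Z */
static void rotate_z_90(miVector *v)
{
    miMatrix rot;

    mi_matrix_rotate(rot, 0.0, 0.0, M_PI / 2.0);  /* angles in radians */
    mi_vector_transform(v, v, rot);               /* r and v may be identical */
}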

     void mi_point_to_world(
                miState  * const  state,
                miVector * const  r,
                miVector * const  v)

Convert internal point v in the state to world space, r.

     void mi_point_to_camera(
                miState  * const  state,
                miVector * const  r,
                miVector * const  v)

Convert internal point v in the state to camera space, r.

     void mi_point_to_object(
                miState  * const  state,
                miVector * const  r,
                miVector * const  v)

Convert internal point v in the state to object space, r. For a light, object space is the space of the light, not the illuminated object.

     void mi_point_from_world(
                miState  * const  state,
                miVector * const  r,
                miVector * const  v)

Convert point v in world space to internal space, r.

     void mi_point_from_camera(
                miState  * const  state,
                miVector * const  r,
                miVector * const  v)

Convert point in camera space v to internal space, r.

     void mi_point_from_object(
                miState  * const  state,
                miVector * const  r,
                miVector * const  v)

Convert point v in object space to internal space, r. For a light, object space is the space of the light, not the illuminated object.

     void mi_vector_to_world(
                miState  * const  state,
                miVector * const  r,
                miVector * const  v)

Convert internal vector v in the state to world space, r. Vector transformations work like point transformation, except that the translation row of the transformation matrix is ignored. The resulting vector is not (re-)normalized. Vector transformations transform normals correctly only if there is no scaling.

     void mi_vector_to_camera(
                miState  * const  state,
                miVector * const  r,
                miVector * const  v)

Convert internal vector v in the state to camera space, r.

     void mi_vector_to_object(
                miState  * const  state,
                miVector * const  r,
                miVector * const  v)

Convert internal vector v in the state to object space, r. For a light, object space is the space of the light, not the illuminated object.

     void mi_vector_from_world(
                miState  * const  state,
                miVector * const  r,
                miVector * const  v)

Convert vector v in world space to internal space, r.

     void mi_vector_from_camera(
                miState  * const  state,
                miVector * const  r,
                miVector * const  v)

Convert vector in camera space v to internal space, r.

     void mi_vector_from_object(
                miState  * const  state,
                miVector * const  r,
                miVector * const  v)

Convert vector v in object space to internal space, r. For a light, object space is the space of the light, not the illuminated object.

Auxiliary Functions

The following functions are provided for support of shaders, to simplify common mathematical operations required in shaders:

     void mi_reflection_dir(
                miVector        *dir,
                miState         *state);

Calculate the reflection direction based on the dir, normal, and normal_geom state variables. The returned direction dir can be passed to mi_trace_reflect. It is returned in internal space.

     miBoolean mi_refraction_dir(
                miVector        *dir,
                miState         *state,
                double          ior_in,
                double          ior_out);

Calculate the refraction direction in internal space based on the interior and exterior indices of refraction ior_in and ior_out, and on dir, normal, and normal_geom state variables. The returned direction dir can be passed to mi_trace_refract. Returns miFALSE and leaves *dir undefined in case of total internal reflection.
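A common use is to fall back to reflection when total internal reflection occurs. The sketch below assumes the usual (result, state, direction) argument order for mi_trace_refract and mi_trace_reflect, which are documented elsewhere in this manual; the indices of refraction are illustrative values:

#include "shader.h"

/* trace through a dielectric boundary; on total internal reflection,
 * all light is reflected instead */
static miBoolean trace_dielectric(miColor *result, miState *state)
{
    miVector dir;
    double   ior_in  = 1.5;            /* e.g. glass */
    double   ior_out = 1.0;            /* e.g. air   */

    if (mi_refraction_dir(&dir, state, ior_in, ior_out))
        return(mi_trace_refract(result, state, &dir));

    mi_reflection_dir(&dir, state);    /* total internal reflection */
    return(mi_trace_reflect(result, state, &dir));
}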

     double mi_fresnel(
                double          n1,
                double          n2,
                double          t1,
                double          t2);

Calculate the Fresnel factor.

     double mi_fresnel_reflection(
                miState         *state,
                double          ior_in,
                double          ior_out);

Call mi_fresnel with parameters appropriate for the given indices of refraction ior_in and ior_out, and for the dot_nd state variable.

     double mi_phong_specular(
                double          spec_exp,
                miState         *state,
                miVector        *dir);

Calculate the Phong factor based on the direction of illumination dir, the specular exponent spec_exp, and the state variables normal and dir. The direction must be given in internal space.
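For example, a shader could add a Phong highlight for a single non-area light source using mi_trace_light from above (the helper name and the exponent value are illustrative):

#include "shader.h"

/* add a Phong highlight from one light, given its instance tag */
static void add_phong(miColor *result, miState *state, miTag light_inst)
{
    miColor  lcol;
    miVector ldir;
    miScalar dot_nl;
    double   s;

    if (mi_trace_light(&lcol, &ldir, &dot_nl, state, light_inst) &&
        dot_nl > 0.0) {
        s = mi_phong_specular(50.0, state, &ldir);  /* exponent 50 */
        result->r += s * lcol.r;
        result->g += s * lcol.g;
        result->b += s * lcol.b;
    }
}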

     double mi_blinn_specular(
                double          spec_exp,
                miState         *state,
                miVector        *dir);

As mi_phong_specular, but attenuated by a geometric attenuation factor (see [Foley 90]).

     void mi_fresnel_specular(
                miScalar        *ns,
                miScalar        *ks,
                double          spec_exp,
                miState         *state,
                miVector        *dir,
                double          ior_in,
                double          ior_out);

Calculate the specular factor ns based on the illumination direction dir, the specular exponent spec_exp, the inside and outside indices of refraction ior_in and ior_out, and the state variables normal and dir. ks is the value returned by mi_fresnel, which is called by mi_fresnel_specular. The direction must be given in internal space.

     double mi_spline(
                double            t,
                const int         n,
                miScalar * const  ctl)

This function calculates a one-dimensional cardinal spline at location t. The t parameter must be in the range 0 ... 1. The spline is defined by n control points specified in the array ctl; there must be at least two control points. To calculate multi-dimensional splines, this function must be called once for each dimension. For example, mi_spline can be called three times to interpolate colors smoothly.
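For example, a color can be interpolated smoothly from a small table of control values by calling mi_spline once per channel (the helper name and the control arrays below are illustrative):

#include "shader.h"

/* map t in [0, 1] to a color by spline-interpolating each channel */
static void spline_color(miColor *out, double t)
{
    static miScalar red[]   = {0.0, 0.8, 1.0, 1.0};
    static miScalar green[] = {0.0, 0.2, 0.6, 1.0};
    static miScalar blue[]  = {0.2, 0.1, 0.0, 1.0};
    const int       n       = 4;       /* number of control points */

    out->r = mi_spline(t, n, red);
    out->g = mi_spline(t, n, green);
    out->b = mi_spline(t, n, blue);
    out->a = 1.0;
}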

     double mi_noise_1d(
                const double     p)

Return a one-dimensional coherent noise function of p. All six noise functions compute a Perlin noise function from the given one-, two-, or three-dimensional parameters, such that the noise changes gradually with changing parameters. The returned values are in the range 0 ... 1, with a bell-shaped distribution centered at 0.5 and falling off to both sides. This means that 0.5 is returned most often, and values less than 0.0 or greater than 1.0 are never returned. See [Perlin 85].

     double mi_noise_2d(
                const double     u,
                const double     v)

Return a two-dimensional noise function of u, v.

     double mi_noise_3d(
                miVector * const p)

Return a three-dimensional noise function of the vector p. This is probably the most useful noise function; a simple procedural texture shader can be written that converts a copy of the state->point vector to object space, passes it to mi_noise_3d, and assigns the returned value to the red, green, and blue components of the result color. The average feature size of the texture will be approximately one unit in space.
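A minimal version of the procedural texture shader just described might look like this (the shader name is illustrative; no parameters are needed):

#include "shader.h"

/* gray-scale solid noise texture: the noise value of the intersection
 * point in object space is written to all color channels */
miBoolean noise_tex(
    miColor     *result,
    miState     *state,
    void        *paras)                /* unused in this sketch */
{
    miVector p;
    double   n;

    mi_point_to_object(state, &p, &state->point);  /* copy to object space */
    n = mi_noise_3d(&p);                           /* value in [0, 1] */
    result->r = result->g = result->b = n;
    result->a = 1.0;
    return(miTRUE);
}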

     double mi_noise_1d_grad(
                const double     p,
                miScalar * const g)

Return a one-dimensional noise function of p. The gradient of the computed texture at the location p is assigned to *g. Gradients are not normalized.

     double mi_noise_2d_grad(
                const double     u,
                const double     v,
                miScalar * const gu,
                miScalar * const gv)

Return a two-dimensional noise function of u, v. The gradient is assigned to *gu and *gv.

     double mi_noise_3d_grad(
                miVector * const p,
                miVector * const g)

Return a three-dimensional noise function of the vector p. The gradient is assigned to the vector g.

     miBoolean mi_lookup_color_texture(
                miColor         *color,
                miState         *state,
                miTag           tag,
                miVector        *coord)

tag is assumed to be a texture as taken from a color texture parameter of a shader. This function checks whether the tag refers to a shader (procedural texture) or an image, depending on which type of color texture statement was used in the .mi file. If tag is a shader, coord is stored in state->tex, the referenced texture shader is called, and its return value is returned. If tag is an image, coord is brought into the range (0..1, 0..1) by removing the integer part, the image is looked up at the resulting 2D coordinate, and miTRUE is returned. In both cases, the color resulting from the lookup is stored in *color.
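A sketch of a texture lookup in a material shader: the texture tag comes from a hypothetical shader parameter, and the lookup coordinate is taken from the first texture space, assumed here to be available as state->tex_list[0] (that state member is an assumption of this sketch, not documented in this section):

#include "shader.h"

struct tex_example {                   /* hypothetical shader parameters */
    miTag           tex;               /* color texture */
};

miBoolean tex_example(
    miColor            *result,
    miState            *state,
    struct tex_example *paras)
{
    /* look up the texture at the first texture coordinate of the
     * intersection; fall back to white if the lookup fails */
    if (!mi_lookup_color_texture(result, state, paras->tex,
                                 &state->tex_list[0]))
        result->r = result->g = result->b = result->a = 1.0;
    return(miTRUE);
}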

     miBoolean mi_lookup_scalar_texture(
                miScalar        *scalar,
                miState         *state,
                miTag           tag,
                miVector        *coord)

This function is equivalent to mi_lookup_color_texture, except that tag is assumed to refer to a scalar texture shader or scalar image, as defined in the .mi file with a scalar texture statement, and a scalar is looked up and returned in *scalar.

     miBoolean mi_lookup_vector_texture(
                miVector        *vector,
                miState         *state,
                miTag           tag,
                miVector        *coord)

This function is also equivalent to mi_lookup_color_texture, except that tag is assumed to refer to a vector texture shader or vector image, as defined in the .mi file with a vector texture statement, and a vector is looked up and returned in *vector.

     void mi_light_info(
                miTag           tag,
                miVector        *org,
                miVector        *dir,
                void            **paras)

tag is assumed to be a light source as found in a light parameter of a shader. It is looked up, and its origin (location in internal space) is stored in *org, and its direction (also in internal space) is stored in *dir. Since light sources can only have one or the other but not both, the unused vector is set to a null vector. This can be used to distinguish directional (infinite) light sources; their org vector is set to (0, 0, 0). The paras pointer is set to the shader parameters of the referenced light shader; if properly cast by the caller, it can extract information such as whether a non-directional light source is a point or a spot light, and its color and attenuation parameters. (mental ray considers a spot light to be a point light with directional attenuation.) Any of the three pointers org, dir, and paras can be a null pointer.
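For example, a shader can use mi_light_info to check whether a light parameter refers to a directional (infinite) light by testing the origin vector, as described above (the helper name is illustrative):

#include "shader.h"

/* return miTRUE if the light referenced by tag is directional:
 * directional lights report a null origin vector */
static miBoolean light_is_directional(miTag tag)
{
    miVector org, dir;

    mi_light_info(tag, &org, &dir, 0);     /* paras not needed here */
    return(org.x == 0.0 && org.y == 0.0 && org.z == 0.0
                ? miTRUE : miFALSE);
}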

     void mi_texture_info(
                miTag           tag,
                int             *xres,
                int             *yres,
                void            **paras)

tag is assumed to be a texture as found in a texture parameter of a shader. If tag refers to a procedural texture shader, *xres and *yres are set to 0 and *paras is set to the shader parameters of the texture shader. If tag is an image texture, *xres and *yres are set to the image resolution in pixels, and *paras is set to 0. Any of the three pointers can be a null pointer.

     miBoolean mi_tri_vectors(
                miState         *state,
                int             which,
                int             ntex,
                miVector        **a,
                miVector        **b,
                miVector        **c)

All the information in the state pertains to the interpolated intersection point in a triangle. This function can be used to obtain information about the uninterpolated triangle vertices. Together with the barycentric coordinates in the state, parameters retrieved with mi_tri_vectors may be interpolated differently by the shader. The which argument is a character that controls which triple of vectors is to be retrieved:

'p'
the points in space

'n'
the normal vectors

'm'
the motion vectors

't'
the texture coordinates of texture space ntex

'u'
the U bump basis vectors

'v'
the V bump basis vectors

A pointer to the vectors is stored in *a, *b, and *c. The shader may not modify these vectors. They are stored in internal space. If the requested triple is not available, miFALSE is returned.

Memory Allocation

mental ray's memory allocation functions replace the standard malloc packages found on most systems. They have built-in support for memory leak tracing and consistency checks, and they handle errors automatically.

     void *mi_mem_allocate(
                const int     size)

Accepts one argument specifying the size of the memory to allocate. A pointer to the allocated memory is returned. If the allocation fails, an error is reported automatically, and mental ray is aborted. This call is guaranteed to return a valid pointer, or not to return at all. The allocated memory is zeroed.

     void *mi_mem_reallocate(
                void * const  mem,
                const int     size)

Change the size of an allocated block of memory. There are two arguments: a pointer to the old block of memory, and the requested new size of that block. A pointer to the new block is returned, which may be different from the pointer to the old block. If the old pointer is a null pointer, mi_mem_reallocate behaves like mi_mem_allocate. If the new size is zero, mi_mem_reallocate behaves like mi_mem_release and returns a null pointer. If there is an allocation error, an error is reported and mental ray is aborted; like mi_mem_allocate, mi_mem_reallocate never returns if the re-allocation fails. If the block grows, the extra bytes are undefined.

     void mi_mem_release(
                void * const  mem)

Frees a block of memory. There is one argument: the address of the block. If a null pointer is passed, nothing is done. There is no return value.

     void *mi_mem_check(void)

This call is currently not available. It will be available in version 2.0.

     void *mi_mem_dump(
                const miModule module)

This call is currently not available. It will be available in version 2.0.

Thread Parallelism and Semaphores

In addition to network parallelism, mental ray also supports shared memory parallelism through threads. Network parallelism is a form of distributed memory parallelism in which processes cooperate by exchanging messages; messages are used both to exchange data and to synchronize. With shared memory, data can be exchanged easily because a process only needs to access the common memory, but a different mechanism has to be used for synchronization. This is usually done by locking: one process indicates that it is waiting to access data, and another process signals that it has finished working with that data, so that any other process may now access it.

By default threads are used on shared memory multiprocessor machines. Threads are sometimes also called lightweight processes. Threads behave like processes running on a common shared memory.

Since memory is shared, two threads can write to the same memory at the same time, or one thread can write while another reads the same memory. Both cases can lead to surprising and unwanted results, so certain precautions have to be observed when using threads. Care has to be taken with shared data such as global or static variables or heap memory, because any thread may potentially modify it. To prevent corrupting data (or reading corrupted data), locking must be used whenever it is not otherwise guaranteed that concurrent accesses cannot occur.

In addition to making sure that data is written only while no other thread accesses it, it is important to use only so-called concurrency-safe libraries and calls. If a nonreentrant function must be called, locking should be used. A function is called reentrant if it can be executed by multiple threads at the same time without adverse effects. (Reentrancy and concurrency safety are related, but the terms stem from different historical contexts.) Details and examples are explained below.

For example, static data on a shared memory multiprocessor can be modified by more than one processor at a time. Consider this test:

     if (!is_init) {
         is_init = miTRUE;
         initialize();
     }

This does not guarantee that initialize is called only once. The reason is that all threads share the is_init flag, so two threads may simultaneously examine the flag. Both will find that it has not been set, and enter the if body. Next, both will set the flag to miTRUE, and then both will call the initialize function. This situation is called a race condition. The example is contrived because initialization and termination should be done with init and exit functions as described in the next section, but this problem can occur with any heap variable. In general, all threads on a host share all data except local auto variables on the stack.

The behavior described above could also occur if more than one thread is used on a single processor, but by default mental ray does not create more threads than there are processors available.

There are two methods for guarding against race conditions. One is to guarantee that only one thread executes certain code at a time. Such code, surrounded by lock and unlock operations, is called a critical section. Code inside a critical section may access global or static data or call any function that does so, as long as all of it is protected by the same lock. The lock used in this example is assumed to have been created and initialized with a call to mi_init_lock before it is used here (see below for how locks are initialized). Here is an example of how a critical section may be used:

     static miLock lock;           /* must be static or global;
                                    * initialized with mi_init_lock */

     mi_lock(lock);
     if (!is_init) {               /* critical section */
         is_init = miTRUE;
         initialize();
     }
     mi_unlock(lock);

The other method is to use separate variables for each thread. This is done by allocating an array with one entry for each thread, and indexing this array with the current thread number. Allocation is done in the shader's initialization routine (which has the same name as the shader with _init appended). No locking is required because it is called only once. The termination routine (which also has the same name but with _exit appended) must release the array.

mental ray provides two locks for general use: state->global_lock is a lock shared by all threads and all shaders. No two critical sections protected by this lock can execute simultaneously on this host. The second is state->shader->lock, which is local to all instances of the current shader. The lock is tied to the shader, not the particular call with particular shader parameters. Every shader in mental ray, built-in or dynamically linked, has exactly one such lock.

The relevant functions provided by the parallelism modules are:

     void mi_init_lock(
                miLock * const  lock)

Before a lock can be used by one of the other locking functions, it must be initialized with this function. Note that the lock variable must be static or global. Shaders will normally use this function in their _init function.

     void mi_delete_lock(
                miLock * const  lock)

Destroy a lock. This should be done when it is no longer needed. The code should lock and immediately unlock the lock first, to make sure that no other thread is in, or waiting for, a critical section protected by this lock. Shaders will normally use this function in their _exit function.

     void mi_lock(
                const miLock    lock)

Check whether any other code holds the lock. If so, block; otherwise set the lock and proceed. This is done in a parallel-safe way, so only one critical section protected by a given lock can execute at a time. Note that locking the same lock twice in a row without anyone unlocking it will block the thread forever, effectively freezing mental ray, because the second lock can never succeed.

     void mi_unlock(
                const miLock    lock)

Release a lock. If another thread was blocked attempting to set the lock, it can proceed now. Locks and unlocks must always be paired, and the code between locking and unlocking must be as short and as fast as possible to avoid defeating parallelism.

     miVpu mi_par_localvpu(void)
     int miTHREAD(miVpu vpu)

The term VPU stands for Virtual Processing Unit. All threads on the network have a unique VPU number; mi_par_localvpu returns the VPU number of the current thread on the local host. VPUs are a concatenation of the host number and the thread number, both counted from 0 to the number of hosts or threads, respectively, minus 1. The miTHREAD macro extracts the thread number from a VPU. Thread 0 on host 0 normally runs the translator that controls the entire operation.

     int mi_par_nthreads(void)

Returns the number of threads on the local host. This is normally 1 on a single-processor system. This number can be used to allocate an array of per-thread variables in the shader initialization code. The array can then be indexed by the shader with miTHREAD(mi_par_localvpu()).
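A sketch of the per-thread variable technique described above, combining mi_mem_allocate, mi_par_nthreads, and miTHREAD(mi_par_localvpu()); the shader name and the per-thread counter are illustrative:

#include "shader.h"

static int *per_thread_count;          /* one slot per local thread */

void count_shader_init(miState *state, void *paras, miBoolean *inst_req)
{
    if (!paras)                        /* main shader init, called once */
        per_thread_count = (int *)mi_mem_allocate(
                                        mi_par_nthreads() * sizeof(int));
}

void count_shader_exit(void *paras)
{
    if (!paras)                        /* main shader exit, called once */
        mi_mem_release(per_thread_count);
}

miBoolean count_shader(miColor *result, miState *state, void *paras)
{
    /* each thread increments its own slot; no locking is needed
     * because no two threads ever share a slot */
    per_thread_count[miTHREAD(mi_par_localvpu())]++;

    result->r = result->g = result->b = result->a = 1.0;
    return(miTRUE);
}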

Messages and Errors

Shaders may print messages and errors. They are printed in the same format as rendering (RC) messages. Options given to the translator determine which messages are printed and which are suppressed. All message routines have the same parameters as printf(3). All append a newline to the message. Messages are printed in the form

RC host.thread level: message

with the module name RC, the host number host if available, the thread number thread with a leading dot if available, the message type level (fatal, error, warning etc), and the message given in the function call.

     void mi_fatal(
                const char * const message,
                ...)

An unrecoverable error has occurred. Unlike all others, this call will not return; it will attempt to recover mental ray and return to the top-level translator. Recovering may involve aborting all operations in progress and clearing the entire database. Fatal messages can be suppressed, but mental ray is always re-initialized.

     void mi_error(
                const char * const message,
                ...)

An unrecoverable error has occurred. This call returns; the caller must abort the current operation gracefully and return.

     void mi_warning(
                const char * const message,
                ...)

A recoverable error occurred. The current operation proceeds.

     void mi_info(
                const char * const message,
                ...)

Prints information about the current operation, such as the number of triangles and timing information. Infos should be used sparingly; do not print information for every intersection point or shader call.

     void mi_progress(
                const char * const message,
                ...)

Prints progress reports, such as rendering percentages.

     void mi_debug(
                const char * const message,
                ...)

Prints debugging information useful only for shader development.

     void mi_vdebug(
                const char * const message,
                ...)

Prints more debugging information useful only for shader development. Messages that are likely to be useful only in rare circumstances, or that generate a very large number of lines should be printed with this function.

Initialization and Cleanup

mental ray 1.9 provides a way to define initialization and cleanup functions for each user-defined shading function. Many shaders need to perform operations such as initializing color tables or allocating arrays before rendering starts. They may also need to perform cleanup after rendering has finished, such as releasing storage to prevent memory leaks. Before a shader is called for the first time, ray checks whether a function of the same name with _init appended exists. If so, it assumes that this is an initialization routine and calls it once before the first call of the shader itself. The state passed to the initialization function is the same as the one passed to the first call of the actual shader to be initialized. Note that the order of shader calls is unpredictable because the order of pixel samples is unpredictable, so the initialization function should not rely on sample-specific state variables such as state->point.

The initialization function has the option of requesting shader instance initializations by setting the miBoolean variable that its third argument points to, to miTRUE. A shader instance is a unique pair of shader and shader parameters. For example, if the shader soft_material is used in two different materials, it is said to have two different instances (even if the parameter values are the same).

When rendering has finished, ray checks, for each user-provided shader that was called, whether a function of the same name with _exit appended exists. If so, it assumes that this is a cleanup routine and calls it once. For example, if a shader myshader exists, the functions myshader_init and myshader_exit are called for initialization and cleanup if they exist.

Both routines are assumed to have the following type:

     void myshader_init(miState   *state,
                        void      *paras,
                        miBoolean *inst_init_req);

     void myshader_exit(void      *paras);

Here is an example of init and exit shaders for a shader named myshader; the code follows the list below. When myshader is about to be used for the first time in a frame, the calling order is:

1.
myshader_init with a null parameter argument

2.
myshader_init with non-null parameter argument

3.
myshader itself with the same non-null parameter argument

4.
more calls to myshader with the same parameter argument, and calls to other instances of myshader_init and myshader with different parameter arguments

5.
one myshader_exit with a non-null parameter argument for each corresponding myshader_init

6.
finally one myshader_exit with a null parameter argument.

Steps 2 and 5 would have been omitted if myshader_init in step 1 had not set its third argument inst_req to miTRUE. Two different instances of the same shader always have different parameter argument pointers. However, a shadow shader and a material shader in the same material may share parameters as described above; in this case both shaders are called with the same parameter argument. Most scenes are built this way.

void myshader_init(             /* must end with "_init" */
    miState         *state,         
    struct myshader *paras,     /* valid for inst inits */
    miBoolean       *inst_req)  /* for inst init request */
{
    if (!paras) {               /* main shader init */
        *inst_req = miTRUE;     /* want inst inits too */
        ...
    } else {                    /* shader instance init */
        paras->something = 1;   /* just an example */
        ...
    }
}

void myshader_exit(             /* must end with "_exit" */
    struct myshader *paras)     /* valid for inst inits */
{
    if (!paras) {               /* main shader exit */
        ...                     /* no further inst exits
                                 * will occur */
    } else {                    /* shader instance exit */
        paras->something = 0;   /* just an example */
        ...
    }
}

Note that there will generally be many instance init/exits (if enabled), but only one shader init/exit. If an init/exit shader isn't available, it isn't called; this is not an error. Initialization and cleanup are done on every host where the function was used, but only once on shared memory parallel machines. They are done for each frame separately.


Shaders and Trace Functions

Trace functions are functions provided by mental ray that allow a shader to cast a ray into the scene, most of them using standard ray tracing. Not all types of tracing functions can be used in all types of shaders. Conversely, many trace functions cause shaders to be called. This chapter lists these interdependencies.

The following table shows which shaders are called from which trace functions:

                       lens    material  environ  light   shadow  volume

 mi_trace_eye          yes     yes       yes      no      no      yes
 mi_trace_reflection   no      yes       yes      no      no      yes
 mi_trace_refraction   no      yes       yes      no      no      yes
 mi_trace_transparent  no      yes       yes      no      no      yes
 mi_trace_environment  no      no        yes      no      no      yes
 mi_sample_light       no      no        no       yes     no      yes
 mi_trace_shadow       no      no        no       no      yes     no

(see shader call tree) mental ray's RC module holds internal data corresponding to the ray tree. Therefore shaders may not call arbitrary trace functions, since in RC's data structures entries are only provided for the following children at a node:

eye rays:
(see primary ray) reflection, refraction rays up to trace depth, light rays to each light instance.

reflection, refraction, transparency rays:
reflection, refraction rays up to trace depth, light rays to each light instance.

light rays:
shadow rays.

shadow rays:
(see shadow ray) none.

Environment rays do not have entries in this tree. The data in the tree is used for acceleration and can be overridden if a shader wants to cast rays not normally allowed, by setting the state variable state->cache to zero. This should only be done if necessary because it reduces efficiency. The following table shows which trace functions may be called from which shaders:


                                                                  ray      light
                       lens    material  environ  light   shadow  volume   volume

 mi_trace_eye          yes     no        no       no      no      no       no
 mi_trace_reflection   **      yes       *        **      **      yes      **
 mi_trace_refraction   **      yes       *        **      **      yes      **
 mi_trace_transparent  **      yes       *        **      **      yes      **
 mi_trace_environment  *       yes       yes      *       yes     yes      yes
 mi_sample_light       *       yes       *        **      **      yes      **
 mi_trace_shadow       **      **        **       yes     **      no       yes

*
yes, if the shader generates an artificial intersection point by setting point, normal, and normal_geom in the state.

**
yes, if the shader removes RC's internal ray tree data by setting cache in the state to NULL, and generates an artificial intersection point if one is not present.


Converting 1.8 to 1.9

The shader interface of the previous generation of mental ray, 1.8, differed from version 1.9 in various ways. When converting shaders from 1.8 to 1.9, the following changes should be made:

  • Matrices are now in column-major order, so that the translation is in the last row. Consequently, the point and vector transform functions multiply vectors from the left; the argument order of these functions has changed. The matrix layout is the only change that affects the .mi file.

  • All shaders should return a miBoolean that indicates whether the shader succeeded. Shaders that used to be void should now return miTRUE. Texture shaders no longer need to return an alpha of -1. If shaders neglect to return either miTRUE or miFALSE, some functions such as mi_lookup_color_texture that pass shader return values back to their callers may return undefined values.

  • Since mental ray 2.0 will be available as a library that must co-exist with the symbol namespace of a client application, and since version 1.9 is compatible with version 2.0, all data types, constants, and internal functions have been prefixed with mi or mi_ to allow simple upgrades. In some cases, the capitalization has also changed.

  • Most floating-point numbers now have the type miScalar, which is currently a double. Special versions of mental ray may be made available that use float instead, for high-speed applications with low demands on precision. Shaders should not rely on the actual precision of miScalar.

  • The scene database is linked using miTag references, replacing pointers to allow virtual shared databases. Shaders that directly access scene database elements must now use access and unpinning functions. In general, this allows complete traversal of the entire scene database, but care should be taken because certain data structures such as instances may be changed in future versions of mental ray.

  • The state has changed. Constants have been moved out to option and camera structures. Some variables have been added or removed. Thread numbers are now available by function call. A boolean type is used for flags, and enumerators have been introduced.

  • Arrays must now be accessed by adding a constant, prefixed with i_, to the array index. User parameter structures no longer contain pointers, to allow network transparency.

  • Material shaders should now sample lights using the mi_sample_light functions instead of mi_trace_light. An extra loop is also required, controlled by the -mc or -qmc command-line options. This allows greater precision when sampling area light sources.
  • Error messages have been consolidated into a number of standard functions that can be turned on and off externally, much like mental ray's own messages.

(see shader initialization) (see shader cleanup) (see memory leaks)
  • Initialization and termination of shaders using separate _init and _exit functions for each shader are now supported. It is not recommended to use a static variable to self-initialize a shader when it is called for the first time. Shaders can no longer assume that all allocated memory is released when mental ray has finished the animation; since mental ray may now be part of a client application instead of being forked as a separate process, data may have a very long lifetime. Memory leaks should be avoided.


