Functionality

mental ray offers all the features traditionally expected of photorealistic rendering, together with functionality not found in most rendering software. The following sections describe parallelism, free-form surface input geometry, edge merging and adjacency detection, and various input entities such as materials, texture mapping, and light sources.

Parallelism

mental ray has been designed to take full advantage of parallel hardware. On multiprocessor machines that provide the necessary facilities, it automatically exploits thread parallelism, where multiple threads of execution access shared memory. No user intervention is required to take advantage of this type of parallelism. mental ray is also capable of exploiting thread and process level parallelism where multiple threads or processes cooperate in the rendering of a single image but do not share memory. This is done using a distributed shared database that provides demand-driven, transparent sharing of database items on multiple systems. The parallel rendering technology required to support distributed shared databases was developed by mental images as part of the ESPRIT Project 6173, Design by Simulation and Rendering on Parallel Architectures (DESIRE); see [Herken 94]. This mechanism allows parallel execution across a network of computers, and on multiprocessor machines which do not support thread parallelism.

A queue of tasks to be executed in parallel is generated by subdividing the screen space. Each task consists of a rectangular portion of the screen to be rendered. A rendering process, whether on the machine where mental ray was started or on some remote host, requests tasks from this queue and renders the corresponding portion of the image. Faster processes will request and complete more tasks than slower processes during the course of rendering an image, thus balancing the load.

The host that reads or translates the scene, or runs a client application such as front-end software into which mental ray is integrated, is called the master host. The master host is responsible for connecting to all other hosts, called slave hosts. A slave host may also act as a master host if an independent copy of mental ray is used by another user; systems do not become unavailable for other jobs when used as slaves. However, running a mental ray slave on a host may significantly degrade the performance of independent interactive application programs, such as modelers, on that host.

The list of hosts to connect to is stored in the .rayhosts file. This file is searched for by the client first in the current directory, then in the home directory. Each line contains one host name. The first line that contains the name of the host the client runs on is ignored; this allows all hosts on the network to share a single .rayhosts file, each ignoring the first reference to itself. Only masters ever access the host list. If the -hosts option is given to mental ray, the .rayhosts file is ignored, and the hosts are taken from the command line. In this case, no hosts are ignored. The library version of mental ray may get its host list directly from the application.
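
A minimal .rayhosts file simply lists one host name per line; the host names below are hypothetical:

     saturn
     jupiter.example.com
     render1.example.com
     render2.example.com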

Two environment variables control the transmission of scene data to remote hosts:

  • $MI_RAY_LOAD_NUM specifies the maximum number of servers loaded concurrently. The default is 10. This prevents mental ray from attempting to load too many servers at the same time, which might overload the limited bandwidth of the network and exceed the load timeout. For fast networks such as 100 Mbit/s FDDI rings or 155 Mbit/s ATM switches, this number can be increased; for heavily loaded 10 Mbit/s Ethernet networks it may be necessary to decrease it. Virtual memory shortage during downloading can be prevented by adding virtual or real swap space or by decreasing $MI_RAY_LOAD_NUM.

  • $MI_RAY_LOAD_TIME specifies the timeout period for loading a server, in seconds. The default is 900 (15 minutes). If the load process for any server exceeds this limit, the server is excluded from rendering. For large scenes, or for networks that are heavily loaded sporadically, it may be necessary to increase this number to give enough time to every server to load the scene without timing out. For networks that have a consistently low bandwidth, decreasing $MI_RAY_LOAD_NUM is recommended.

Environment variables are set at a shell prompt with a command like

     setenv MI_RAY_LOAD_NUM 5
     MI_RAY_LOAD_NUM=5; export MI_RAY_LOAD_NUM

The first line applies to C shells (csh or tcsh); the second line applies to Bourne shells (sh or bash). These commands must be given before starting mental ray on the command line. If mental ray is started from the SOFTIMAGE Creative Environment, the environment variables must be set before starting Creative Environment.

Free-Form Surfaces

mental ray supports free-form curves and surfaces in non-uniform rational B-spline (NURB), Bézier, Taylor (monomial), cardinal, or basis matrix form. Any of these forms may be rational and may be of degree up to twenty-one. (Although the user-settable degree is currently limited to 21, mental ray has no inherent limit.) Surfaces may be trimmed.

Internally, free-form surfaces are triangulated (approximated) before rendering. A variety of approximation techniques is available, including uniform parametric, uniform spatial and curvature dependent methods.

The uniform parametric technique (referred to in the input file as parametric) subdivides the surface at equal intervals in parameter space. The input file specifies a factor which is multiplied by the surface degree to obtain the number of subdivisions in each parametric direction per patch.

The uniform spatial technique (spatial in the input file) subdivides the surface at equal intervals in camera space --- or, rather, the intervals will never exceed the given upper bound. Optionally, this bound may be specified in raster space (in units of pixel diagonals) rather than camera space. If, for example, one wanted to approximate a surface with sub-pixel size triangles, one could use the uniform spatial approximation technique with a raster space accuracy of 0.5 pixel diagonals. Note that the apparent size of a subdivided portion of a surface is computed as if the surface were parallel to the screen. Thus, the triangulation does not become coarser towards the edge of the object's silhouette. This has the advantage that the object will be well approximated even if seen in a mirror from a different angle, but such a definition can also result in what is often an overly fine triangulation.

Camera-dependent subdivision may mean that objects instanced more than once must be triangulated in multiple ways. To achieve optimal speed, the additional memory required to store these multiple triangulations must be weighed against the reduced total number of triangles across all instances. Camera dependency works best for objects that are not instanced too many times.

The curvature dependent technique (known as curvature in the input file) subdivides a surface until two approximation criteria are satisfied simultaneously. The first is an upper bound on the maximum distance in camera space between the actual surface and its polygonal approximation (known as the distance tolerance). The second is an upper bound on the maximum angle (in degrees) between any two normals on a subdivided portion of the surface (known as the angle tolerance). Note that the first criterion is scale dependent while the second is scale independent; that is, one must know the size of the object in order to choose a suitable tolerance in the first case but not the second. In spite of this apparent advantage of the angle criterion over the distance criterion, the angle criterion has the undesirable property of resolving small discontinuities ad infinitum, whereas the distance criterion will not resolve features whose scale is below the given tolerance. Either criterion can be disabled by setting the corresponding tolerance to zero. The distance criterion may optionally be given in raster space, again in units of pixel diagonals.

Both the uniform spatial and curvature dependent approximation techniques use a recursive subdivision process that can also be controlled by two additional parameters, specifying the minimum and maximum number of recursion levels. The subdivision can be forced to proceed at least as far as the given minimum level, and refinement can be halted at the maximum level.
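
The interaction of the two tolerances with the minimum and maximum recursion levels can be illustrated with a small, self-contained C program. It applies the same logic to a parametric curve in the plane rather than a surface; all names are illustrative and this is not mental ray code.

     /* Toy illustration of the dual-tolerance, min/max-level subdivision logic
      * described above, applied to a 2D parametric curve (a quarter circle)
      * instead of a surface.  This is not mental ray code; it only shows how
      * the distance tolerance, angle tolerance, and recursion limits interact. */
     #include <math.h>
     #include <stdio.h>

     static int segments = 0;            /* number of line segments emitted */

     static void eval(double t, double *x, double *y)
     {
         *x = cos(t);                    /* unit quarter circle, 0 <= t <= pi/2 */
         *y = sin(t);
     }

     static void subdivide(double t0, double t1, int level,
                           int min_level, int max_level,
                           double dist_tol,   /* distance tolerance, 0 disables */
                           double angle_tol)  /* angle tolerance in degrees, 0 disables */
     {
         double x0, y0, x1, y1, xm, ym, tm, dist, angle;
         int refine;

         tm = 0.5 * (t0 + t1);
         eval(t0, &x0, &y0);
         eval(t1, &x1, &y1);
         eval(tm, &xm, &ym);

         /* distance criterion: deviation of the true midpoint from the chord */
         dist = hypot(xm - 0.5 * (x0 + x1), ym - 0.5 * (y0 + y1));
         /* angle criterion: angle between the two half-chords, in degrees */
         angle = fabs(atan2(y1 - ym, x1 - xm) - atan2(ym - y0, xm - x0))
                 * 180.0 / M_PI;

         refine = level < min_level;          /* always refine up to the minimum level */
         if (!refine && level < max_level) {
             if (dist_tol  > 0.0 && dist  > dist_tol)  refine = 1;
             if (angle_tol > 0.0 && angle > angle_tol) refine = 1;
         }
         if (refine) {
             subdivide(t0, tm, level + 1, min_level, max_level, dist_tol, angle_tol);
             subdivide(tm, t1, level + 1, min_level, max_level, dist_tol, angle_tol);
         } else {
             segments++;                      /* emit one line segment */
         }
     }

     int main(void)
     {
         /* distance criterion only; the angle criterion is disabled with 0.0 */
         subdivide(0.0, M_PI / 2.0, 0, 0, 8, 0.001, 0.0);
         printf("approximated with %d segments\n", segments);
         return 0;
     }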

All subdivisions of a free-form surface begin at the patch level. If, for example, a plane is modeled with ten by ten patches, it will be approximated by at least two hundred triangles, although two triangles might be adequate. If mental ray seems to be producing a large number of triangles in spite of a low approximation accuracy, this is often due to a high patch count rather than the selected approximation technique.

The curvature dependent approximation technique with the distance tolerance given in raster space and the angle tolerance set to zero has proved to be the most useful technique for high quality rendering.

For a quick rendering to examine materials or object positions, the uniform parametric technique may be used with a factor of zero.

Free-form curves (trimming curves) may also be approximated by any of the methods described above, using a technique and tolerances distinct from those of the surface which the curve trims. The definitions are essentially the same if one considers a curve segment to correspond to a surface patch. An important difference is that the uniform spatial and curvature dependent approximation techniques will coalesce curve segments if possible: a straight line consisting of one hundred collinear segments may be approximated by a single line segment.

Edge Merging and Adjacency Detection

Surfaces are generally approximated independently of each other and small cracks may be visible between them, especially if the approximation is coarse. It may be desirable to use a smaller tolerance for the trimming curves than for the surfaces themselves. If an object is well-modeled, if surfaces meet smoothly along their trimming curves and if the curves are approximated to a high accuracy, the gaps between surfaces become invisible. The ideal solution, however, is to triangulate surfaces consistently along shared edges.

mental ray provides the connect construct for specifying connectivity between surfaces. The two surfaces are named, along with the two trimming curves and the parameter ranges along which they meet. Along such a connection the surfaces will be triangulated consistently, resulting in a seamless join.

If the system generating the input for mental ray cannot determine such connectivity, adjacency detection may be used to discover connectivity automatically. One may give a merge epsilon within a geometry group which will cause all surfaces in that group to be examined. If any two surfaces approach each other along a trimming curve (or the surface boundary, if the surface is not trimmed) to within the given epsilon, they will be considered adjacent and an appropriate connection will be generated.

Essential to the fast and correct determination of adjacency is the gathering of surfaces into appropriate groups. Obviously, the door of a car should not be considered connected to the body no matter how close the two surfaces approach each other. Moreover, the larger the groups, the more time will be required for adjacency detection.

Special Points and Curves

Special points (see special point) and curves (see special curve) force the triangulation of a free-form surface to include specific features. A special point is given in the parameter space of the surface and will be included as a corresponding vertex in the triangulation. A special curve is similar to a trimming curve but does not cause portions of the surface to be trimmed. Rather, the curve is included as a polyline in the triangulation of the surface. Special curves are useful for introducing flexibility in the triangulation along specific features. For example, if letters were to be embossed on a planar surface using displacement mapping, a series of contour curves around the letters could be created with special curves.

Atmospheres and Volumes

The medium which surrounds all objects in a scene is known as the atmosphere. This is normally a transparent material with a refractive index of 1.0. A procedural atmosphere can be specified by naming a volume shader that determines the attenuation of light as it travels along a ray of a given length through the atmosphere. As with all other types of shaders, a user-written shader can be used in place of the built-in volume shader. This capability can be used, for example, to create procedurally defined fog.

Materials

A material determines the response of a surface to illumination. Materials in mental ray consist of a material name and one mandatory and four optional shaders, each of which can be a built-in shader, or a user-provided C function:

  • The first function is the material shader itself. It may not be omitted. The material shader determines the color of a point on an object, based on its parameters which may include object colors, textures, lists of light sources, and other arbitrary parameters.

  • An optional displacement shader can be named that displaces a free-form surface at each point in the direction of the local surface normal. Displacement maps only affect the triangles resulting from the tessellation of free-form surfaces.

  • An optional shadow shader determines the way shadow rays pass through the object. This can be used for calculating colored shadows.

  • An optional volume shader controls rays passing through the inside of the object. This is functionally equivalent to atmosphere calculations, but takes place inside objects, not outside.

  • An optional environment shader provides an environment map for non-raytraced reflections.

The shading function may be either a user-written function linked at run time or one of the built-in functions. All built-in shaders use certain standard parameters that are described here. Parameters can be named in any order. Parameters can also be omitted; default values will be provided by mental ray. Note that the following standard parameters only apply to the built-in shaders; a user-written shader is completely free to define these or other parameters.

The index of refraction controls the bending of light as it passes through a transparent object. Although actually dependent on the ratio of indices between the transparent material being entered and that being left, in practice one may say that the higher the index of refraction, the more the light is bent. Typical values are 1.0 for air, 1.33 for water and 1.5 for glass.

The shinyness material parameter effectively controls the size of highlights on a surface. It is also known as the specular exponent. The higher the value, the smaller the highlight.

The dissolve parameter controls the fading transparency of a material, independent of refractive effects. It is more accurately described as a blending operation between the surface and whatever lies beyond. If dissolve is 0.0, the surface is completely opaque. A value of 0.5 causes an equal blend of the surface and the background, and a value of 1.0 results in an invisible surface. This parameter is used by the Wavefront-compatible shaders only.
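
As an illustration only (the color type and function below are not part of mental ray's interface), the blend described above amounts to:

     /* Dissolve blending as described above: 0.0 keeps the surface color,
      * 0.5 mixes surface and background equally, 1.0 shows only the background.
      * The color type and names are illustrative, not mental ray's interface. */
     typedef struct { double r, g, b; } rgb_t;

     rgb_t dissolve_blend(rgb_t surface, rgb_t background, double dissolve)
     {
         rgb_t out;
         out.r = (1.0 - dissolve) * surface.r + dissolve * background.r;
         out.g = (1.0 - dissolve) * surface.g + dissolve * background.g;
         out.b = (1.0 - dissolve) * surface.b + dissolve * background.b;
         return out;
     }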

The transparency parameter controls the refractive transparency of a material. Unlike dissolve, this parameter has a physically correct interpretation. The range is, as for dissolve, from 0.0 for an opaque surface to 1.0 for a completely transparent one. The interpretation of transparency is left entirely to the material shader.

The reflect parameter controls the reflectivity of a material. If reflect is 0.0 no reflections would be visible on a surface. A perfect mirror would have a reflect of 1.0. This parameter is used by the SOFTIMAGE-compatible shader only.

The ambient component approximates the color of a light source which illuminates a surface from all directions without attenuation or shadowing.

The diffuse component is the color of the surface which is dependent on its angle to a light source but independent of the position of the viewer. A piece of felt is an example of a material with only a diffuse component.

The specular component is the color of the surface which is dependent both on the position of the light source and the position of the viewer. It is the color of highlights on the surface.

The transmit component (transmission filter) is a color which filters light refracted through an object. A piece of glass which imparts a green tint to the objects seen through it would have a green transmit component. This parameter is used by the Wavefront shader only.

Finally, the shade component (shadow filter) is a color which filters light as it passes through a transparent object which casts a shadow. A blue glass ball would have a blue shade component. This parameter is also used by the Wavefront shader only.

These parameters have been referred to as standard because they are each required by at least one of the built-in shaders (see material shader). There is one material shader that supports SOFTIMAGE compatibility and one that supports Wavefront compatibility. Additional shaders compatible with Alias lighting models become available with the Alias translator module of mental ray (currently available only as a standalone converter, sdltomi). Refer to [WFMI 93] and [SDMI 93] for details.

Light Sources

A light source (see light) illuminates the objects in a scene. Light sources in mental ray are programmable and consist of a light source name, a named light shader function, and an origin or a direction (exactly one of the two must be present). All light shaders also accept shader parameters that depend on the shader. All built-in shaders require a light color parameter.

The lights available to a scene are defined outside materials and referenced by name inside materials. Only those lights which a material references will illuminate surfaces which use that material.

The shading function for light sources may be either a user-written function linked at run time or one of the built-in functions. There is one built-in shader for SOFTIMAGE compatibility, and one for Wavefront compatibility.

The shading functions for all SOFTIMAGE shaders accept a boolean parameter shadow that turns shadow casting on or off for that light source, and a floating-point parameter factor, the shadow factor, which controls the penetration of opaque objects.

The mi_soft_light shader has a mode parameter that selects an infinite (directional) light (mode 0), a point light (mode 1), or a spot light (mode 2). The infinite mode is a directional light source requiring a direction in the input file; its shading function requires only the shadow and factor parameters. The point light mode requires an origin in the input file. Its shading function accepts, in addition to the color, shadow, and factor parameters, a boolean atten that turns distance attenuation on or off, and two scalar parameters start and end that specify the range over which the attenuation falls off if atten is true. The spot light mode requires only an origin in the input file; the spot direction is considered directional attenuation and is given as a user parameter. Its shading function takes the same parameters as the point light mode, plus two cone angles, cone and spread, that specify the angle of the inner solid cone and the outer falloff cone, respectively. The spot casts a cone of light with a softened edge where the intensity falls off to zero between the cone and spread angles.

The mi_wave_light shader accepts color and dir (direction) arguments. Shadow casting cannot be turned on and off on a per-light-source basis with Wavefront light sources, and the shading function accepts no shadow factor. There are two types of attenuation, distance and angle. Distance attenuation is turned on by either one of the two boolean flags dist_linear or dist_inverse. In the linear case, the fading range is controlled by dist_start and dist_end; in the inverse-power case, the attenuation is proportional to the distance from the illuminated point raised to the dist_power argument.

Wavefront angle attenuation is turned on by either one of the two boolean flags angle_linear or angle_cosine. In the linear case, the light falls off between the angles specified by the angle_inner and angle_outer arguments; in the cosine case, the light falls off proportionally to the cosine raised to the angle_power argument. Angle attenuation implements spotlights. The spot light direction is the illumination direction argument, dir.

Area Light Sources

The main purpose of area light sources is to generate more realistic lighting, resulting in soft shadows. This is achieved by using one of three primitives --- rectangles, discs, or spheres --- as light sources with nonzero area. This means that a point on an object may be illuminated by only a part of a light source. Area light sources are based on principles similar to those of motion blurring and, like motion blur, may reduce rendering speed.

Area light sources are specified in the .mi file by naming a primitive in a standard light definition. Any of the standard spot and point lights can be turned into an area light source. The orientation of the disc and rectangle primitives may be chosen independently of the light direction of spot and directional light sources. Any type of light shading function (see light shader) can be used.

Texture, Bump, Displacement, and Reflection Mapping

mental ray supports texture, bump, displacement and reflection mapping, all of which may be procedurally defined using user-supplied functions which are linked to mental ray at run time without user intervention. A function may require parameters which could specify, for example, the turbulence of a procedural marble texture. More traditionally, a texture may be derived from an image file. Frequently, a function is used to apply texture coordinate transformations such as scaling, cropping, and repetitions. Such a function would have a sub-texture argument that refers to the actual image file texture.

A user-defined material shader is not restricted to the above applications for textures; it is free to evaluate any texture, and any number of textures, for a given point, and to use the result for any purpose.

In the parameter list of the built-in material shaders, a list of texture maps may be given in addition to, for example, a literal RGB value for the diffuse component of a material. The color of the diffuse component will then vary across a surface. To shade a given point on a surface, the coordinates in texture space are first determined for the point. The diffuse color used for shading calculations is then the value of the texture map at these coordinates. The built-in SOFTIMAGE-compatible material shader uses a different approach; it accepts a single list of textures, with parameters attached to each texture that control the way the texture is applied to ambient, diffuse, and other parameters. The shader interface is extremely flexible and permits user-defined shaders to use either of these approaches, or completely different formats. The remainder of this section describes the built-in shader parameters only.

The built-in material shaders support texture mapping for all standard material parameters except the index of refraction. Shinyness, transparency, refraction transparency, and reflectivity are scalar values and may be mapped by a scalar map. Bump maps require a vector map. For all other parameters, a color map is appropriate. SOFTIMAGE texture shaders derive all types of maps from color textures.

Determining the texture coordinates of a point on a surface to be shaded requires defining a mapping from points in camera space to points in texture space. Such a mapping is itself referred to as a texture space for the surface. Multiple texture spaces may be specified for a surface. If the geometry is a polygon, a texture space is created by associating texture vertices with the geometric vertices. If the geometry is a free-form surface, a texture space is created by associating a texture surface with the surface. A texture surface is a free-form surface which defines the mapping from the natural surface parameter space to texture space. Texture maps, and therefore texture spaces and texture vertices, may be one, two, or three dimensional.

A procedural texture is free to use the texture space in any way it wants, but texture files are always defined to have unit size and to be repeated through all of texture space. That is, the lower-left corner of the file maps to (0.0, 0.0) in texture space, and again to (1.0, 0.0), (2.0, 0.0), and so on; the lower-right corner maps to (1.0, 0.0), (2.0, 0.0), ... and the upper right to (1.0, 1.0), (2.0, 2.0), ....
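
Since a file texture repeats with unit period, only the fractional part of each texture coordinate matters for a file lookup. A minimal sketch of this wrapping (illustrative only, not mental ray's lookup code):

     /* Wrap texture coordinates into the unit tile, as implied by the
      * unit-size repetition described above.  Illustrative only. */
     #include <math.h>

     void wrap_uv(double u, double v, double *uw, double *vw)
     {
         *uw = u - floor(u);    /* e.g.  2.3 -> 0.3,  -0.25 -> 0.75 */
         *vw = v - floor(v);
     }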

Just as a texture map can vary a parameter such as the diffuse color over every point on a surface, a bump map can be associated with a material, perturbing the normal at every point on a surface which uses the material. This will affect the shading, though not the geometry, giving the illusion of a pattern being embossed on the surface.

Bump maps, like texture maps, require a texture space. In addition, bump maps require a pair of basis vectors to define the coordinate system in which the normal is displaced. A bump map defines a scalar x and a scalar y displacement over the texture space. These are each multiplied by the respective basis vector and the sum of the scaled vectors is added to the normal vector. The basis vectors are automatically defined for free-form surfaces in a way which conforms to the texture space. For polygons, the basis vectors must be explicitly given along with the texture coordinates for every polygon vertex.
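
The perturbation described above can be sketched as follows; the vector type and function name are illustrative, and the final renormalization is the usual convention rather than something mental ray prescribes here:

     /* Bump mapping as described above: the bump map values bx and by scale
      * the two basis vectors, and the sum of the scaled vectors is added to
      * the normal.  The result is renormalized for shading (a common
      * convention); names and types are illustrative. */
     #include <math.h>

     typedef struct { double x, y, z; } vec3;

     vec3 perturb_normal(vec3 n, vec3 basis_u, vec3 basis_v, double bx, double by)
     {
         vec3 p;
         double len;
         p.x = n.x + bx * basis_u.x + by * basis_v.x;
         p.y = n.y + bx * basis_u.y + by * basis_v.y;
         p.z = n.z + bx * basis_u.z + by * basis_v.z;
         len = sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
         if (len > 0.0) { p.x /= len; p.y /= len; p.z /= len; }
         return p;
     }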

A displacement map is a scalar map which is used to displace a free-form surface at each point in the direction of the local surface normal. Displacement maps only affect free-form surfaces. (Displacement maps on polygonal objects will be available in version 2.0.) Like texture, bump, and reflection maps, a displacement map may be either a file or a user-defined function, or a combination of the two.

The surface must be triangulated finely enough to reveal the details of the displacement map. In general, the triangles must be smaller than the smallest feature of the displacement map which is to be resolved. However, this is not done automatically. The curvature dependent free-form surface approximation technique, for example, does not take into account variations in curvature imparted to the surface as a result of displacement mapping. Probably the most suitable approximation technique for use with displacement mapping is the camera-dependent uniform spatial subdivision technique, which allows specification of triangle size in raster space. Another alternative is to place special curves on the surface which follow the contours or isolines of the displacement map, thus creating flexibility in the surface tessellation at those points where it is most needed for displacement.

The final type of map which may be associated with a material is an environment map. This is a color-mapped virtual sphere of infinite radius which surrounds any object referencing the given material. ``Environment map'' is actually something of a misnomer since this sphere is also seen by refracted rays; the environment seen by first-generation (primary) rays can also be specified but is part of the view, not of any particular material. In general, if a ray does not intersect any objects, or if casting such a ray would exceed the trace depth, the ray is considered to strike the sphere of the environment map of the last material visited, or the view environment map in the case of first-generation rays that did not hit any material.

The environment map always covers the entire sphere exactly once. The sphere may be rotated but, because it is of infinite radius, translations and scalings have no effect. User-defined environment shaders can be written, for example one that defines a six-sided cube or other types of environment mapping.

User-Defined Shaders

In addition to built-in shaders, user-defined shaders written in standard C can be precompiled and then linked at runtime, or both compiled and linked at runtime. User-defined shaders can be used in place of any built-in shader, redefining materials, textures, lights, environments, volumes, displacements, etc.

mental ray can link in user-defined shaders in either object, source, or dynamic shared object (DSO) form.

Every user-defined shader must be declared before it can be used (see shader declaration). A declaration is a statement that names the shader and lists the name and type of each of its parameters. Declarations may appear in the .mi file, but are typically stored in an external file included at run time. Note that all code and link statements must precede the first declaration in the .mi file.

Available parameter types (see shader parameter types) are boolean, integer, scalar, string, color (RGBA), vector, transform (4x4 matrix), scalar texture, color texture, vector texture, and light. In addition to these primitive types, compound types may be built using struct and array declarations. Structs and arrays may be nested, with the restriction that arrays of arrays are not legal and must be emulated using arrays of structs containing arrays.

An instance of a shader is created by creating a material, texture, light, etc. that names a declared shader and associates a list of parameter values with it. Parameters that appeared in the declaration may be listed in any order or omitted entirely; each listed parameter name is followed by a value that depends on the parameter type. Scalars accept floating-point numbers, vectors accept one to three floating-point numbers, and textures accept a texture name.

After a material, texture, light, etc. has been created, it can be used. Materials are used by giving their names in object geometry statements; textures and lights are used by naming them as parameter values in other shaders, typically material shaders.

When the C function that implements a user-defined shader is called, it receives three pointers: one points to the result, one to global state, and one to a data structure that contains the parameter values. mental ray stores the parameter values in that data structure using a layout that corresponds exactly to the layout a C compiler would create, so that the C function can access parameters simply by dereferencing the pointer and accessing the data structure members by name. For this to work, a C struct declaration must be available that corresponds exactly to the declaration in .mi syntax.
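
For example, a hypothetical material shader declared with a color ``diffuse'', a scalar ``shinyness'', and a light parameter might be implemented against a C structure as sketched below. The structure layout mirrors the declaration; the type names (miColor, miScalar, miTag, miState, miBoolean) are those of mental ray's shader interface header, but the shader itself is purely illustrative --- see the ``Writing Shaders'' chapter for the authoritative interface.

     /* Sketch of the declaration / C-struct correspondence described above.
      * The shader and its parameters are hypothetical; consult the Writing
      * Shaders chapter for the exact interface of your mental ray version. */
     #include "shader.h"                 /* mental ray shader interface header */

     struct my_material {                /* must match the .mi declaration     */
         miColor  diffuse;               /* "diffuse"   (color)                */
         miScalar shinyness;             /* "shinyness" (scalar)               */
         miTag    light;                 /* "light"     (light)                */
     };

     miBoolean my_material(miColor *result, miState *state,
                           struct my_material *paras)
     {
         /* parameter values are read simply by dereferencing struct members */
         result->r = paras->diffuse.r;
         result->g = paras->diffuse.g;
         result->b = paras->diffuse.b;
         result->a = 1.0;
         /* a real material shader would also evaluate the light parameter and
            add diffuse and specular contributions here */
         return miTRUE;
     }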

For details on user-defined shaders, refer to the ``Writing Shaders'' chapter.

The Camera

The camera is fixed at the origin, looking down the negative Z axis, with up being the positive Y axis. To view a scene from a given position and orientation, the scene must be transformed such that the camera is at this standard location.

By default, the camera is a pin-hole perspective camera for which the focal length, aperture and aspect ratio may be specified in either the view construct of the input file or on the command line of mental ray. Optionally, lens effects such as depth of field can be achieved by specifying one or more lens shaders.

Lens Effects

Lens effects are distortions of the rendered image achieved by changing the light path through the camera lens. Because lens effects are applied to first-generation rays, they avoid the loss of quality that would be unavoidable if the distortion were applied in a post-processing stage after rendering.

Lens effects are introduced by specifying one or more lens shaders in the view statement. If no lens shaders are present, the standard pinhole camera is used. Each lens shader is called with two state variables that specify the ray origin and the ray direction. The lens shader calculates a new origin and a new direction, and casts an eye ray using the mi_trace_eye function. The first lens shader always gets the position of the pinhole camera. All following lens shaders get the origin and direction that the previous lens shader used when calling mi_trace_eye.
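
The following sketch shows the general shape of such a lens shader. The state members holding the incoming ray origin and direction are assumed here to be state->org and state->dir, and the mi_trace_eye call follows the description above; check the ``Writing Shaders'' chapter for the exact interface of your version.

     /* Sketch of a lens shader: displace the incoming ray origin, crudely
      * re-aim the direction, and continue the ray with mi_trace_eye.  The
      * state member names and exact function signature are assumptions; a
      * depth-of-field shader would instead jitter the origin over the lens
      * aperture and aim each sample at the same point on the focus plane. */
     #include <math.h>
     #include "shader.h"

     struct my_lens {
         miScalar shift;                 /* sideways displacement of the origin */
     };

     miBoolean my_lens(miColor *result, miState *state, struct my_lens *paras)
     {
         miVector org = state->org;      /* ray origin handed to this shader    */
         miVector dir = state->dir;      /* ray direction handed to this shader */
         double   len;

         org.x += paras->shift;          /* displace the origin                 */
         dir.x -= paras->shift;          /* crude re-aim toward the original
                                            target one unit along the old ray   */
         len = sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
         if (len > 0.0) { dir.x /= len; dir.y /= len; dir.z /= len; }

         return mi_trace_eye(result, state, &org, &dir);
     }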

Lens shaders imply ray tracing; if there is no scanline statement, scanline rendering is turned off. If scanline was turned on explicitly, lens shaders are disregarded and a warning is printed. Lens shaders have no effect on the trace depth limit; eye rays are not counted towards the ray trace depth.

Depth of Field

Depth of field is an effect that simulates a plane of maximum sharpness and blurs objects closer or more distant than this plane. There are two methods for implementing depth of field: a lens shader that takes multiple samples along different paths to the same point on the focus plane to interpolate the depth effect; or a combination of a volume shader and an output shader that collects depth information during rendering and then applies a blurring filter as a postprocessing step over the finished image using this depth information. Both methods are supported by shaders built into mental ray.

Animation

The input file consists of a list of frames, each of which is complete and self-contained. Animation is accomplished by specifying geometry, light sources and materials which change incrementally from one frame to the next.

Motion Blur

Motion blurring is supported using Monte Carlo or proprietary Quasi-Monte Carlo sampling techniques that allow for the correct blurring of highlights, shadows, reflections, refractions and intersecting objects.

The movement of objects is specified by associating linear motion vectors with polygon vertices and surface control points. These vectors give the direction and distance that the vertex or control point moves during one time unit. If a motion vector is not specified, the vertex is assumed to be stationary. Motion blurring computations may be expensive, but note that these computations are only done for those polygons in a scene which include motion information.

A shutter speed may be given for the camera with the -shutter option on the command line or shutter in the view, with the default speed of zero turning motion blurring off. The shutter opens instantaneously at time zero and closes after the shutter speed time has elapsed.

Sampling Algorithms

Motion blurring, area light sources, and the depth-of-field lens shader all rely on multiple sampling, varying the sample location in time or 3D space. mental ray offers two fundamental algorithms for these variations:

  • Monte Carlo methods are selected by the command-line option -mc. The variations in time and 3D space are selected randomly with equal distribution in the specified sample space. The random numbers are chosen in a deterministic manner such that an image is precisely repeatable if the same scene is re-rendered with the same options, despite the random nature of the algorithm.

  • Quasi-Monte Carlo methods are selected by the command-line option -qmc. The variations are controlled by a simple analysis of the sample space. Sample locations are chosen on fixed points that ensure optimal coverage of the sample space. The algorithm is similar to fixed-raster algorithms, but avoids the regular lattice appearance of such algorithms.

The Monte Carlo algorithm is available in mental ray mainly for historical reasons. It enables mental images' implementation of the standard, but for practical purposes highly inefficient, Monte Carlo approach to solving integral equations. Quasi-Monte Carlo methods can be succinctly described as deterministic versions of Monte Carlo methods. Determinism enters in two ways: by working with deterministic points rather than random samples, and by the availability of deterministic error bounds instead of the probabilistic Monte Carlo error bounds ([Niederreiter 92]).

Field Rendering

Video images typically consist of two interlaced fields (see field rendering), effectively doubling the frame rate without requiring any increase in video information. The even field consists of the even scanlines of a frame, while the odd field consists of the odd scanlines, where the top-most scanline is said to be odd. These two fields may be identical, but smoother animation may be achieved by ``rendering on fields'': a single image file contains all the odd lines of one frame and all the even lines of the next (or, optionally, all the even lines of one frame and all the odd lines of the next). This is enabled in mental ray by specifying -field even (or -field odd) on the command line, or field even (or field odd) in the view construct of the input file. An image file containing two interlaced fields is then created for every even frame number. Note that because of this dependence on frame numbers, animations should start with an odd frame number and end with an even frame number.
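
The interleaving itself can be pictured with the following sketch, which builds one output image from the odd scanlines of one frame and the even scanlines of the next. It is illustrative only, not mental ray code, and assumes packed 8-bit RGBA pixels.

     /* Illustration of field rendering: combine the odd scanlines of frame N
      * with the even scanlines of frame N+1.  Scanline numbering here starts
      * at 1 at row index 0, so even row indices correspond to odd scanlines.
      * Purely illustrative; assumes packed 8-bit RGBA pixels. */
     void interleave_fields(int width, int height,
                            const unsigned char *frame_n,   /* provides odd lines  */
                            const unsigned char *frame_n1,  /* provides even lines */
                            unsigned char *out)
     {
         int x, y, bytes_per_line = width * 4;
         for (y = 0; y < height; y++) {
             const unsigned char *src = (y % 2 == 0) ? frame_n : frame_n1;
             for (x = 0; x < bytes_per_line; x++)
                 out[y * bytes_per_line + x] = src[y * bytes_per_line + x];
         }
     }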

Color Calculations

All colors in mental ray are given in the RGBA color space and all internal calculations are performed in RGBA. The alpha channel is used to determine transparency; 0 means fully transparent and 1 means fully opaque. mental ray uses premultiplied colors, which means that the R, G, and B components are scaled by A and may not exceed A.

Internally, colors are not restricted to the unit cube in RGB space. As a final step before output, colors are clipped using one of two methods. By default, the red, green and blue values are simply truncated. Optionally, colors may be clipped using a desaturation method which maintains intensity (if possible), but shifts the hue towards the white axis of the cube. Desaturation color clipping may be selected with either the -desaturate on option on the command line or desaturate on in the view. The alpha channel is always truncated.
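
One plausible way to implement the desaturation clip --- keep the pixel's intensity where possible and blend toward the gray axis just enough to bring every component into range --- is sketched below. This is an illustration, not necessarily mental ray's exact method; in particular, taking the RGB average as ``intensity'' is an assumption.

     /* Illustrative desaturation clipping: blend an out-of-range color toward
      * the gray of equal intensity until all components fit into the unit
      * range.  Not necessarily mental ray's exact method; "intensity" is taken
      * here as the RGB average, which is an assumption. */
     typedef struct { double r, g, b, a; } color4;

     color4 desaturate_clip(color4 c)
     {
         double gray = (c.r + c.g + c.b) / 3.0;
         double t = 0.0, tc;

         if (gray >= 1.0) {                       /* intensity cannot be kept  */
             c.r = c.g = c.b = 1.0;
         } else {
             /* smallest blend factor that brings each component down to 1.0 */
             if (c.r > 1.0) { tc = (c.r - 1.0) / (c.r - gray); if (tc > t) t = tc; }
             if (c.g > 1.0) { tc = (c.g - 1.0) / (c.g - gray); if (tc > t) t = tc; }
             if (c.b > 1.0) { tc = (c.b - 1.0) / (c.b - gray); if (tc > t) t = tc; }
             c.r += t * (gray - c.r);
             c.g += t * (gray - c.g);
             c.b += t * (gray - c.b);
         }
         if (c.a > 1.0) c.a = 1.0;                /* alpha is always truncated */
         if (c.a < 0.0) c.a = 0.0;
         return c;
     }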

Output Shaders

mental ray can generate more than one type of image. There are up to four main frame buffers: RGBA, depth, normal vectors, and labels. The depth, normal-vector, and label frame buffers store, at each pixel, the Z coordinate, the normal vector, and the label of the frontmost object, respectively. The number and type of frame buffers to be rendered is controlled by output statements, which specify what is to be done with each frame buffer. If a frame buffer is not listed by any output statement, it is not rendered (except for RGBA, which always exists). There are two types of output statements: those specifying output shaders and those specifying files to write.

Output shaders are user-written functions, linked at runtime, that have access to every pixel in all available frame buffers after rendering. They can be used to perform operations such as post-filtering or compositing.

Files to write are specified with file format and file name. The file format implies the frame buffer type; there are special file formats for depth, normal-vector, and label files, in addition to a variety of standard color file formats. By listing the appropriate number and type of output statements, it is possible to write multiple files; for example, both a filtered file and the unfiltered version can be written to separate files by listing three output statements: one to write the unfiltered image, one that runs an output shader that does the filtering, and finally another one to write the filtered image. Output statements are executed in sequence.

The following file formats are supported:

    "pic"           SOFTIMAGE 8-bit RGBA picture 
    "rla"           Wavefront 8-bit RGBA picture 
    "rlb"           another form of Wavefront 8-bit RGBA picture 
    "alias"         Alias Research 8-bit RGB picture 
    "rgb"           Silicon Graphics 8-bit RGB picture 
    "ct"            mental images 8-bit RGBA texture 
    "ct16"          mental images 16-bit RGBA texture 
    "ctfp"          mental images floating-point RGBA texture 
    "st"            mental images 8-bit scalar (A) texture 
    "st16"          mental images 16-bit scalar (A) texture 
    "nt"            mental images normal vectors, float triples 
    "zt"            mental images depth (-Z) channel image 
    "tt"            mental images tag channel, unsigned 32-bit ints 
    "qntpal"        Quantel/Abekas YUV-encoded RGB picture, 720x576 
    "qntntsc"       Quantel/Abekas YUV-encoded RGB picture, 720x486 
    "picture"       Dassault Systèmes PICTURE format 
    "Zpic"          SOFTIMAGE depth (-Z) channel image 
    "ps"            PostScript (contour mode only) 
 

Each of these file formats implies a particular data type; for example, "pic" implies 8-bit RGBA, and "zt" implies Z. In the view statement, it is possible to list output statements with explicit data types. The main application for this optional data type is to specify "contour" as data type, which forces contour rendering even if the file type is something like "pic", which would normally force standard color rendering. The result is a color image containing a contour line drawing. The available data types are:

    "rgb"           8-bit RGB color 
    "rgb_16"        16-bit RGB color 
    "rgba"          8-bit RGB color and alpha 
    "rgba_16"       16-bit RGB color and alpha 
    "rgba_fp"       floating-point RGB color and alpha 
    "a"             8-bit alpha channel 
    "a_16"          16-bit alpha channel 
    "vta"           vector texture derived from alpha, 2 floats per pixel 
    "vts"           vector texture derived from intensity, 2 floats per pixel 
    "z"             depth information, 1 float per pixel 
    "n"             normal vectors, 3 floats per pixel (X,Y,Z) 
    "tag"           object labels, 1 unsigned int per pixel 
    "contour"       switch to contour mode and generate contour lines 
 

Note that a label is a 32-bit entity. Some systems define a long to be 64 bits. The difference between "vta" and "vts" is significant only when automatic conversions are done. The file contents are identical.

The floating-point RGBA format "rgba_fp" allows color and alpha values outside the normal range of 0.0 to 1.0; any conversion to 8-bit or 16-bit formats will clamp values outside this interval. If desaturation is enabled, RGB values are clipped to 1. Dithering is ignored, and alpha is allowed to be less than the maximum of R, G, and B.

All mental images file formats contain a header followed by simple uncompressed image data, pixel by pixel beginning in the lower left corner. Each pixel consists of one to four 8-bit, 16-bit, or 32-bit component values, in RGBA, XYZ, or UV order. The header consists of a magic number byte identifying the format, a null byte, width and height as unsigned shorts, and two unused null bytes reserved for future use. All shorts, integers, and floats are big-endian (most significant byte first).
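
A sketch of reading this header in C, assembling the big-endian shorts byte by byte so the code works regardless of the machine's own byte order (the actual magic number values are format specific and not listed here):

     /* Read the 8-byte header described above: magic byte, null byte, width
      * and height as big-endian unsigned shorts, and two reserved null bytes.
      * Returns 1 on success, 0 on a short read. */
     #include <stdio.h>

     int read_mi_header(FILE *fp, int *magic, unsigned *width, unsigned *height)
     {
         unsigned char h[8];
         if (fread(h, 1, sizeof(h), fp) != sizeof(h))
             return 0;
         *magic  = h[0];                          /* format identification byte */
                                                  /* h[1] is the null byte      */
         *width  = ((unsigned)h[2] << 8) | h[3];  /* big-endian unsigned short  */
         *height = ((unsigned)h[4] << 8) | h[5];  /* big-endian unsigned short  */
                                                  /* h[6], h[7] reserved nulls  */
         return 1;
     }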

Memory-mapped Textures

mental ray supports memory mapping of textures in UNIX environments. Memory mapping means that the texture is not loaded into memory, but is accessed directly from disk when a shader accesses it. There is no special keyword or option for this; if a texture is memory-mappable, mental ray will recognize it and memory-map it automatically. Only the map image file format (extension .map) can be mapped. See the previous chapter for a list of supported file formats.

Memory mapping requires several preparation steps:

  • The texture must be converted to .map format using a utility like imf_copy. The scene file must be changed to reference this texture. Note that mental ray recognizes .map textures even if they have an extension other than .map; this can be exploited by continuing to use the old file name with the ``wrong'' extension.

  • Memory-mapped textures are automatically considered local by mental ray, as if the local keyword had been used in the scene file. This means that if the scene is rendered on multiple hosts, each will try to access the given path instead of transferring the texture across the network, which would defeat memory mapping. The given path must be valid on every host participating in the render.

  • The texture should not be on an NFS-mounted file system (one that is imported across the network from another host). Although this simplifies the requirement that the texture must exist on all hosts, the necessary network transfers reduce the effectiveness and can easily make memory-mapping slower than regular textures.

  • Memory-mapping works best for extremely large textures, containing many tens of megabytes, that are sampled infrequently, because then most of the large texture file is never loaded into memory.

If the textures and the scene are so large that they do not fit into physical memory, loading a texture is equivalent to loading the file into memory, decompressing it, and copying it out to swap. (The swap is a disk partition that acts as a low-speed extension of the physical memory that exists as RAM chips in the computer). From then on, accessing a texture means accessing the swap. Memory mapping eliminates the read-decompress-write step and accesses the texture from the file system instead of from swap. This has the side effect that less swap space is needed. If the texture and scene are not large and fit into memory, and if the texture is accessed frequently, memory-mapped textures are slower than regular textures because the swap would not have been used.


