The output of Predict Engine is a RAW image, similar to the one you would get by taking a photo with an actual camera. In order to display this image on screen, a post-process must be applied, mainly to adjust the image's dynamic range. This post-process is defined in the Post Pipeline section of the UVR Camera Settings component.
The post pipeline consists of the main steps described in the sections below: white balance, denoising, tone mapping, firefly removal, and false color visualization.
The post pipeline can be edited at runtime via the Interactive Settings interface.
When defining Still Cameras, white balance may be required. You can define the color that will be used as the reference white in the balance.
In the renderer view interface, an additional button next to the White Reference field enables you to pick a reference value directly in the Predict Engine render. With this picker, the selected color is the RGB value of the Predict Engine render before any post-process (white balance included): this option is especially useful if the current white reference is not white.
The white reference can be edited at runtime.
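For illustration only, a white balance of this kind can be sketched as a per-channel scaling by the picked reference value; the function below is a hypothetical sketch, not Predict Engine's actual implementation.

```python
import numpy as np

def white_balance(raw_rgb: np.ndarray, reference_white: np.ndarray) -> np.ndarray:
    """Hypothetical white balance: scale each channel so that the picked
    reference value becomes neutral (equal R, G and B).

    raw_rgb         -- HxWx3 array of values from the un-processed render
    reference_white -- length-3 RGB value picked before any post-process
    """
    # Dividing by the reference maps that color to (1, 1, 1); rescaling by its
    # mean keeps the overall brightness roughly unchanged.
    return raw_rgb / reference_white * reference_white.mean()
```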
Path tracing can induce noise in the generated image. The longer an image takes to compute, the less noise there is. To reduce the noise while the image is rendering, a denoiser can be used.
The denoiser can be more or less aggressive: its aggressiveness is defined in the performance settings of Predict Engine.
The denoiser can be enabled/disabled at runtime.
When using the OIDN denoiser (default), Predict Engine must be installed at a path that does not contain any special characters.
The OIDN denoiser is not available on GPU devices of type "Pascal".
Images produced by Predict Engine are defined within a dynamic range that cannot be displayed on screen: values on screen must be contained within [0; 255], whereas values in Predict Engine can reach several thousands or be contained in a much smaller interval. In order to adapt this dynamic range, we use a Tone Mapper.
The tone mapper used in Predict Engine is linear: a simple multiplicative factor is applied to the image. This factor is called the Exposure Value.
The Exposure Value can be chosen manually or computed automatically (when the Auto Exposure field is enabled). When the auto exposure mode is enabled, the exposure value is computed so that the average of a zone at the center of the image is around 128 (half the maximum value). The EV +/- field enables you to make the image darker or brighter by changing this target value.
When the auto exposure is enabled, three additional buttons appear next to the Auto Exposure field:
The pen button on the left toggles whether the zone used to compute the average is circled on the image,
The pipette button on the right enables you to pick a zone on the image that will be used to compute the average (this option is only available when using the Engine view),
The square button in the middle resets the zone used to compute the average to the center of the image.
Scene previewed using different auto exposure zones: the zone used to compute the exposure value is placed on the ceiling light (left) or on the red sphere (right).
The tone mapper settings can be edited at runtime.
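A minimal sketch of the linear tone mapper and of the auto exposure behaviour described above, assuming an 8-bit target and a rectangular averaging zone (the exact statistic and zone shape used by Predict Engine are assumptions):

```python
import numpy as np

def auto_exposure_value(raw: np.ndarray, zone: tuple, target: float = 128.0) -> float:
    """Compute an Exposure Value so that the average of the selected zone
    lands around `target` (128 = half the 8-bit maximum)."""
    y0, y1, x0, x1 = zone                        # zone defaults to the image center
    return target / raw[y0:y1, x0:x1].mean()

def linear_tone_map(raw: np.ndarray, exposure_value: float) -> np.ndarray:
    """Linear tone mapper: a simple multiplicative factor, clamped to [0, 255]."""
    return np.clip(raw * exposure_value, 0.0, 255.0)

# Hypothetical usage with a 100x100 zone at the center of a 400x600 render:
# ev  = auto_exposure_value(raw, (150, 250, 250, 350))
# ldr = linear_tone_map(raw, ev).astype(np.uint8)
```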
If the Expert mode is enabled, you can choose the type of the Tone Mapper that will be used. The default value is Linear (the only available option when the Expert mode is not enabled): see the section above.
You can also define a Photographic Tone Mapper that simulates the behaviour of an actual camera. The Photographic Tone Mapper is defined as follows (an illustrative sketch of how these parameters can combine is given after the list):
Exposure: the exposure time in seconds (can be edited at runtime and automated using the Auto Exposure option),
Aperture: the aperture of the optical system; the higher the aperture, the more light enters the system (can be edited at runtime),
ISO: the sensitivity of the sensor; the higher the sensitivity, the less light is needed to expose the sensor (can be edited at runtime),
Gain RGB: a multiplicative factor applied to the final output,
Bit Depth: the number of bits each pixel is stored on,
Pixel Offset: the minimum value taken by a pixel; the value should be in the interval [0; 2^BitDepth - 1],
Pixel Saturation: the maximum value taken by a pixel; the value should be in the interval [0; 2^BitDepth - 1].
If any of the exposure time, ISO, or Gain RGB values is set to 0, the output will be a completely black image.
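A hypothetical sketch of how these parameters might combine, consistent with the descriptions above (the exact formula used by Predict Engine is not reproduced here):

```python
import numpy as np

def photographic_tone_map(raw, exposure_s, aperture, iso, gain_rgb,
                          bit_depth=8, pixel_offset=0.0, pixel_saturation=None):
    """Illustrative photographic tone mapper (not the engine's exact formula).

    The recorded signal grows with the exposure time, the aperture and the ISO
    sensitivity, is scaled per channel by Gain RGB, then offset and clamped to
    the sensor range defined by the bit depth.
    """
    if pixel_saturation is None:
        pixel_saturation = 2 ** bit_depth - 1
    # Assumed combination of the camera parameters; a zero exposure time, ISO
    # or Gain RGB zeroes the signal, which yields a black image.
    signal = raw * exposure_s * aperture * iso * np.asarray(gain_rgb)
    return np.clip(pixel_offset + signal, pixel_offset, pixel_saturation)
```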
Fireflies are rendering artifacts resulting from numerical instabilities in solving the rendering equation. They manifest themselves as anomalously-bright single pixels scattered over parts of the image.
To limit the impact of these artifacts, abnormal values are replaced by the mean of their neighbours. A value is identified as abnormal when it is higher than [mean of the neighbours + threshold * standard deviation of the neighbours]. If the Expert Mode is enabled, you can define this threshold.
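A sketch of this filter, assuming a 3x3 neighbourhood on a single-channel image (the window size, and the fact that the centre pixel is included in the local statistics, are simplifications):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def remove_fireflies(image: np.ndarray, threshold: float) -> np.ndarray:
    """Replace values higher than [neighbour mean + threshold * neighbour std]
    by the mean of their neighbours (3x3 window assumed)."""
    mean = uniform_filter(image, size=3)
    sq_mean = uniform_filter(image ** 2, size=3)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    abnormal = image > mean + threshold * std
    return np.where(abnormal, mean, image)
```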
When using a False Color Color System, the raw output image from Predict Engine is visualized as a representation of the measured physical quantity in false colors.
Spectroradiometer sensors output a value in W/m²/sr. You can define which channel is displayed.
If the Auto Range is enabled, the color map will represent all the values in the image. If it is disabled, a "Range" field enables you to define the range of values (in %) that should be represented. This enables you to ignore the lowest/highest values in the image that may correspond to fireflies/noise or saturated zones with direct illumination.
You can choose to display the raw value or the logarithm of the value.
You can choose to show the scale on screen or not.
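A sketch of such a false-colour mapping; the colormap name and the percentile interpretation of the Range field are assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

def false_color(values, auto_range=True, range_pct=(2.0, 98.0), use_log=False,
                colormap="viridis"):
    """Map a single channel (e.g. W/m²/sr) to false colors.

    auto_range -- map the full [min, max] of the image when True
    range_pct  -- value range kept (in %) when auto_range is False, so that
                  fireflies, noise or saturated zones do not stretch the scale
    use_log    -- display the logarithm of the value instead of the raw value
    """
    data = np.log10(np.maximum(values, 1e-12)) if use_log else values
    lo, hi = (data.min(), data.max()) if auto_range else np.percentile(data, range_pct)
    normalized = np.clip((data - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    return plt.get_cmap(colormap)(normalized)[..., :3]   # HxWx3 RGB image
```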
Polarimeter sensors are necessarily in false colors, as the raw output image from Predict Engine cannot be interpreted as an RGB visualisation of the scene. Six visualization modes are available for the polarization, in addition to the four Stokes components (the quantities behind these modes are sketched after the list):
The Degree mode displays the degree of polarization in the range [0;1] using a red scale colormap (see Figure B below),
The Orientation mode displays the orientation of the polarized light, in the range [0;180] degrees, using a Rainbow colormap (see Figure D below),
The Ellipticity mode displays the ellipticity of the circularly polarized light, in the range [-45;45] degrees: an ellipticity of +45° (right) is represented by the red color, an ellipticity of -45° (left) is represented by the blue color, and an ellipticity of 0° is represented by the black color (see Figure F below),
The Type mode displays the type of the polarization: linear polarization is represented by the cyan color and circular polarization is represented by the yellow color (see Figure C below),
The Chirality mode displays the chirality of the circularly polarized light (S3 component of the Stokes vector): a right chirality is represented by the blue color and a left chirality is represented by the yellow color (see Figure E below),
The Plane mode displays the orientation of the linearly polarized light (circularly or elliptically polarized light is ignored), in the range [0;180] degrees, using a red/green colormap for the S1 Stokes component and a blue/yellow colormap for the S2 Stokes component (see Figure G below),
The S0 mode displays the raw S0 component of the Stokes vector,
The S1 Reduced, S2 Reduced, and S3 Reduced modes display the reduced components of the Stokes vector: the raw component divided by S0.
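The quantities shown by these modes follow from the Stokes components through the standard relations below (a sketch of the computation only; the colormaps described above are not reproduced):

```python
import numpy as np

def polarization_quantities(S0, S1, S2, S3, eps=1e-12):
    """Standard quantities derived from the Stokes vector (S0, S1, S2, S3)."""
    polarized = np.sqrt(S1**2 + S2**2 + S3**2)
    degree = polarized / np.maximum(S0, eps)                       # in [0, 1]
    # Orientation of the polarization plane, folded into [0, 180) degrees.
    orientation = np.degrees(0.5 * np.arctan2(S2, S1)) % 180.0
    # Ellipticity angle in [-45, 45] degrees; its sign follows the sign of S3
    # (the chirality of the circularly polarized light).
    ellipticity = np.degrees(0.5 * np.arcsin(
        np.clip(S3 / np.maximum(polarized, eps), -1.0, 1.0)))
    reduced = (S1 / np.maximum(S0, eps),
               S2 / np.maximum(S0, eps),
               S3 / np.maximum(S0, eps))
    return degree, orientation, ellipticity, reduced
```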
More details on the visualisation modes of the polarization:
Wilkie, A. and Weidlich, A. (2010). A standardised polarisation visualisation for images. In Proceedings of the 26th Spring Conference on Computer Graphics, pages 43–50. ACM.
The false color settings can be edited at runtime.
Example scene defined with three spheres (one red diffuse, one metallic and one glass), a mirror in the background, and two polarizers on the front (linear polarizer on the left, circular polarizer on the right), lit by a D65 ambient light.
Figure A: scene rendered with a still camera sensor.
Figure B: scene rendered in the Degree mode.
The reflection/transmission on the mirror and the dielectric spheres induce linear polarization.
The light going through the linear/circular polarizer is purely linearly/circularly polarized.
The diffuse material does not induce polarization.
Figure C: scene rendered in the Type mode.
The reflection/transmission on the mirror and the dielectric spheres induce linear polarization.
The light going through the linear/circular polarizer is purely linearly/circularly polarized.
The diffuse material does not induce polarization.
Figure D: scene rendered in the Orientation mode.
The reflection/transmission on the mirror and the dielectric spheres induce linear polarization.
The light going through the linear polarizer is purely linearly polarized in a single direction.
Figure E: scene rendered in the Chirality mode.
The light going through the circular polarizer is purely circularly polarized with one chirality.
Figure F: scene rendered in the Ellipticity mode.
The light going through the circular polarizer is purely circularly polarized with one chirality.
Figure G: scene rendered in the Plane mode.
The reflection/transmission on the mirror and the dielectric spheres induce linear polarization.
The light going through the linear polarizer is purely linearly polarized in a single direction.
Figure H: scene rendered in the S0 mode.
Figure I: scene rendered in the S1 Reduced mode.
Figure J: scene rendered in the S2 Reduced mode.
Figure K: scene rendered in the S3 Reduced mode.
When using a custom sensor, you can manually define the output of the Predict Engine renderer. The type of the color system impacts the definition of the Post Pipeline. Three types of color systems are available:
RGB: the output is defined by the three given XYZ channels; the raw image is converted to LDR RGB using a given gamut, then a gamma correction is applied (a sketch of this conversion is given at the end of this section).
If the photometric preset is used on the sensor, the channels can be defined by the default Photometric XYZ channels.
Otherwise, the channels are selected among the available layers (see section below) and channels defined on the sensor.
Raw: the output is defined by the three given RGB channels; the raw image is only processed with a gamma correction.
If the photographic preset is used on the sensor, the channels can be defined by the default Photographic RGB channels.
Otherwise, the channels are selected among the available layers (see section below) and channels defined on the sensor.
False Color: one of the sensor channels is visualized as a representation of the measured physical quantity in false colors. It is also possible to display:
the normals/tangents/bi-tangents of the scene: each vector (x,y,z) is represented as a color (r,g,b),
the material: each ID is represented by a color; the colors are evenly sampled from the given colormap,
the depth: the distance between the camera and the elements in the scene is displayed using the given colormap.
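For the RGB color system, a sketch of the final conversion, assuming the sRGB gamut and a simple power-law gamma; the actual matrix depends on the gamut selected on the sensor:

```python
import numpy as np

# XYZ -> linear sRGB matrix (D65 white point); another gamut would use another matrix.
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def xyz_to_ldr_rgb(xyz: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Convert an HxWx3 XYZ image to LDR RGB: apply the gamut matrix,
    clamp to [0, 1], then apply a gamma correction."""
    rgb_linear = np.clip(xyz @ XYZ_TO_SRGB.T, 0.0, 1.0)
    return rgb_linear ** (1.0 / gamma)
```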