Computer Graphics & Geometry

Real-Time Global Illumination for Point Cloud Scenes

R. Preiner and M. Wimmer
Institute of Computer Graphics and Algorithms, TU Vienna, Austria




Abstract

In this paper we present a real-time global illumination approach for illuminating scenes containing large point clouds. Our approach is based on the distribution of Virtual Point Lights (VPLs) in the scene, which are then used for the indirect illumination of the visible surfaces, using Imperfect Shadow Maps for visibility calculation of the VPLs. We are able to render multiple indirect light bounces, where each light bounce accounts for the transport of both the diffuse and the specular fraction of the reflected light.

1. Introduction

Point clouds are a convenient type of geometry representation when huge amounts of geometrical data are to be displayed quickly, or when geometrical data is given in this form (e.g. gathered from a 3D scanning device) and has to be displayed immediately, without a time-consuming triangulation preprocessing step.

There are various approaches for fast global illumination (GI) in scenes containing conventional mesh geometry. In his Instant Radiosity approach [4], Keller introduced a convenient way to approximate the energy radiating from surfaces in the environment by a number of point lights distributed over those surfaces, called Virtual Point Lights (VPLs). But although these VPLs are able to describe indirect light transfer very well, they alone do not solve the visibility problem for indirect illumination.

Ritschel et al. [6] proposed Imperfect Shadow Maps (ISMs) as an efficient way to calculate visibility even for a high number of VPLs. ISMs are rendered by splatting sample points from the surfaces in the scene onto low-resolution shadow maps, using a parabolic projection that covers the entire hemisphere above the surface point where a VPL is located. In scenes containing mesh geometry, these sample points have to be taken from the model surfaces in a separate sampling pass. Since the sampling of these points can quickly become a performance issue when increasing the amount of geometry data in the scene, it is necessary to perform this task in a preprocessing step to maintain real-time frame rates when rendering. These point samples are normally taken and stored in the object space of a model, which allows for correct on-the-fly visibility calculation for dynamic objects undergoing common affine transformations and movements. However, objects of dynamically changing shape (morphing objects) are not supported in this approach, since this would require a continuous resampling of the surface point coordinates.

In this paper, we show how to apply real-time global illumination to large point cloud scenes, using VPLs for indirect illumination and ISMs for indirect shadow rendering. We directly use the points in our models for ISM rendering, making any sampling pass dispensable, and further allowing correct ISM rendering even for shape-changing point cloud models.

We start with a discussion of the background and work related to our method in Section 2. We proceed with an overview of our algorithm in Section 3 and a detailed description of the steps of its rendering pipeline in Section 4. Finally, Section 5 presents the results of our work and discusses its benefits and limitations.

2. Background and Related Work

For a correct simulation of global illumination in a scene, the rendering equation, first introduced by Kajiya [3] in 1986, provides an ideal and complete description of the illumination process:

Lo(p,ωo) = Le(p,ωo) + ∫Ω ρ(p,ωi,ωo)Li(p,ωi)cos θ dωi    (1)

Equation (1) shows a common notation of the rendering equation. The energy radiating from a point p in a direction ωo equals the sum of the energy emitted from p in that direction (represented by the emittance term Le(p,ωo)) and the incident energy from the whole hemisphere Ω over p that is reflected in direction ωo. The energy contribution from a given direction ωi is given by the incident radiance Li(p,ωi) and the BRDF ρ(p,ωi,ωo). θ is the angle between the incident light direction and the surface normal at p, and cos θ represents a geometry factor that scales the amount of reflected light based on the incident light direction. Note that for a correct handling of translucent objects, ρ has to be extended to a BSDF and integrated over the whole sphere around p.
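To make the integral concrete, the following minimal C++ sketch estimates Lo(p,ωo) with uniform Monte Carlo sampling over the hemisphere. It uses strongly simplifying assumptions of our own (a Lambertian BRDF ρ = albedo/π, no emission, constant incident radiance Li = 1) and is meant purely as an illustration of Equation (1), not as part of the method presented in this paper:

    // Monte Carlo sketch of Equation (1), assuming a Lambertian BRDF
    // (albedo/pi), no emission, and constant incident radiance Li = 1.
    // Under these assumptions the estimate converges to the albedo.
    #include <cstdlib>

    static float frand() { return std::rand() / (float)RAND_MAX; }

    float estimateOutgoingRadiance(float albedo, int numSamples) {
        const float PI = 3.14159265f;
        float sum = 0.0f;
        for (int i = 0; i < numSamples; ++i) {
            // Uniform hemisphere sampling (pdf = 1/(2*pi)): the z
            // component, i.e. cos(theta), is uniform in [0,1].
            float cosTheta = frand();
            float brdf = albedo / PI;   // Lambertian rho
            float Li   = 1.0f;          // constant incident light
            sum += brdf * Li * cosTheta;
        }
        // Divide by the pdf and average over all samples.
        return (2.0f * PI / numSamples) * sum;   // ~ albedo for large N
    }

Since ∫ (albedo/π) cos θ dω over the hemisphere equals the albedo, the estimator converges to that value, which makes the sketch easy to verify.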

Several offline techniques exist that simulate light propagation as described by the rendering equation, but they are too time-consuming to be applicable in real-time rendering. Real-time global illumination is a challenging task and on current hardware can often only be achieved by introducing approximations that trade accuracy for speed while still producing acceptable images.

Keller introduced Virtual Point Lights (VPLs) in his Instant Radiosity approach in 1997 [4]. Based on a light source, VPLs are seeded over the scene's surfaces where they act as "virtual" light sources for the indirect illumination computation of other surface points in the scene (as illustrated in Figure 1). They represent a sampling of the surface areas radiating reflected energy, and are therefore well suited as approximation of the indirect scene illumination, especially if located on highly diffuse surfaces.


Figure 1: Approximating indirect illumination with Virtual Point Lights. Left: VPLs are seeded by a light source. Right: For illumination computation of a given surface point p, the seeded VPLs are used, considering their visibility.

The main issue with the use of many VPLs is maintaining the visibility information for all possible outgoing light directions of each single VPL. In 2008, Ritschel et al. [6] proposed Imperfect Shadow Maps (ISMs) as a method for efficiently computing visibility for indirect illumination. The main idea behind ISMs is that it is sufficient to create fast and inaccurate (imperfect) shadow maps if they are used for a large number of Virtual Point Lights. With a growing number of VPLs used for indirect illumination, visible errors and artifacts caused by the shadow maps' imperfection are increasingly averaged out.

Our GI algorithm is implemented in an out-of-core point cloud renderer that is able to render enormous point clouds at interactive frame rates. This renderer is based on Wimmer and Scheiblauer's Instant Points approach [7], introduced in 2006. The algorithm uses nested octrees to efficiently build a hierarchy on the point set, significantly reducing the memory overhead of the data structure. However, since VPL distribution and illumination computation are both performed in image space, neither the amount of point data nor the particular data structure used to organize the points matters for the application of our algorithm. Only the usage of the Imperfect Shadow Maps introduces a certain lower bound on the number of points required for splatting in order to obtain sufficiently dense shadow maps.

3. Overview of our Approach

In our implementation, the algorithm uses a single spot-light source to illuminate the point cloud scene. The illumination computation is divided into a direct and an indirect part. Computing direct illumination is straightforward: the geometry within the spotlight cone is shaded, and shadow mapping is performed. The indirect illumination part comprises all illumination from light rays that do not come directly from the light source, but from some surface point where the ray was reflected (bounced). To compute indirect illumination, the multitude of light rays that leave the light source, are reflected at different surface points, and from there illuminate other parts of the scene is approximated by a number of VPLs.

Our algorithm incorporates several rendering passes, which are illustrated in Figure 2. Figure 3 illustrates the intermediate buffers produced by the algorithm (G-Buffers, ISMs, accumulation buffer), and their place in the rendering chain.
In the first step, the scene is rendered into two different G-Buffers, a Camera G-Buffer (camera perspective) and a Light G-Buffer (light perspective). Both G-Buffers store all necessary information about the geometry and material properties (depth, normal, color, shininess) of the parts of the scene visible from the respective viewpoint. The Camera G-Buffer is mainly used for image-space illumination shading, whereas the Light G-Buffer serves as a convenient basis for easy VPL distribution as well as a light-space depth buffer for direct illumination shadow mapping.

The VPLs are seeded over the directly illuminated area in the scene by an importance-driven distribution over the Light G-Buffer. The position, surface normal and material properties at the resulting VPL locations are stored in a VPL buffer for subsequent lookup when performing indirect illumination shading. In the next step, we render one large ISM buffer, which contains one small ISM for each VPL. With the VPLs distributed in the scene and their visibility information available in the ISM buffer, the indirect illumination computation can be performed. Based on the Camera G-Buffer, those parts of the scene that are visible to the camera are shaded with respect to the previously seeded VPLs.

When rendering multiple light bounces, the last three steps are repeated: first, the VPLs are redistributed (originating from the current VPL positions), then the ISM buffer is updated according to the new VPL locations, and finally indirect illumination shading for the new light bounce is performed. To obtain the total indirect scene illumination over several bounces, the indirect illumination values are accumulated in a separate accumulation buffer.

In order to improve speed, we perform interleaved shading on this accumulation buffer, as proposed by Keller and Heidrich [5]. Only an interleaved subset of pixels is shaded per VPL, resulting in fewer computations at the cost of reduced quality. After all indirect illumination shading is finished, the interleaved accumulation buffer is merged into an image of the original size. This merged image can show local differences in the shading of neighboring pixels (see Figure 3). Therefore, we apply a geometry-aware filter kernel to smooth the resulting indirect illumination image [5]. Finally, direct illumination with shadow mapping is added, and the resulting image is tone-mapped to obtain an LDR image of the scene.
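The interleaving itself reduces to a simple indexing scheme. The following sketch shows one plausible assignment for an n x n block subdivision; the function name and the exact subset mapping are our illustrative assumptions, while [5] describes the general pattern:

    // With an n x n subdivision, each pixel is shaded with only one of
    // n*n disjoint VPL subsets, chosen by its position within its block.
    int vplSubsetForPixel(int px, int py, int n) {
        return (py % n) * n + (px % n);   // subset index in [0, n*n)
    }
    // Pixel (px, py) then accumulates only VPLs v with
    // v % (n*n) == vplSubsetForPixel(px, py, n); the interleaved buffer
    // is merged and smoothed afterwards as described above.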

Figure 2: Overview of the global illumination rendering pipeline.

Figure 3: Overview of the workflow of our algorithm and the buffers produced by its rendering pipeline.

4. GI Rendering Pipeline

4.1. Camera- and Light-G-Buffer

In the first step, the whole scene is rendered from the camera's point of view into a Camera G-Buffer. This G-Buffer stores the necessary information about the visible surface at each pixel in image space. This information is distributed over several textures and consists of the RGB color of the surface, its material properties (diffuse intensity, specular intensity and shininess), the surface normal, and the linear depth of the geometry in view space. Since the actual illumination computation is performed deferred in image space, the technique for rendering the points into the G-Buffer is not constrained by the GI algorithm and thus can be chosen arbitrarily. Although various high-quality point rendering techniques exist, in our implementation we have chosen, for the sake of performance, simple point sprite splatting with image-space point sizes derived from the perspectively projected world-space point size.

In the second pass, the whole scene is rendered again, but this time from the spot-light source's point of view. The scene information rendered in this pass is stored in a second G-Buffer, the Light G-Buffer. This G-Buffer contains the same per-pixel information as the Camera G-Buffer, but additionally stores an importance value that correlates with the intensity of the surface color and the surface's shininess (specular power), and serves as the importance measure for the VPL distribution in the following step.
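As an illustration, the per-pixel contents of the two G-Buffers can be summarized as follows. The struct layout, the field names, and the concrete importance weighting are our own illustrative assumptions; the actual implementation packs these values into several textures:

    // Sketch of the per-pixel contents of the G-Buffers described above.
    struct GBufferTexel {
        float rgb[3];      // surface color
        float diffuse;     // diffuse intensity
        float specular;    // specular intensity
        float shininess;   // specular power (Phong exponent)
        float normal[3];   // surface normal
        float depth;       // linear view-space depth
    };

    // The Light G-Buffer additionally stores an importance value. One
    // plausible measure (our assumption) weights the luminance of the
    // surface color by its specular properties:
    float importance(const GBufferTexel& t) {
        float luminance = 0.2126f * t.rgb[0] + 0.7152f * t.rgb[1]
                        + 0.0722f * t.rgb[2];
        return luminance * (1.0f + t.specular * t.shininess);
    }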

4.2. VPL Distribution

The first set of VPLs is distributed on the surface area illuminated directly by the spot light. Since the number of VPLs used significantly affects performance, the VPLs are distributed based on the importance value in the Light G-Buffer to achieve a good approximation of indirect illumination even with fewer VPLs. We start with a quasi-random distribution of the VPL positions over the spot-light-illuminated area stored in the Light G-Buffer. A 2D Halton sequence is used to achieve a controlled, homogeneous distribution, since simple random distributions exhibit more noise and locally inhomogeneous regions. Based on this distribution, we apply hierarchical warping [2] to relocate the VPLs, obtaining denser VPL distributions where needed and fewer VPLs where they would not contribute much to the final scene illumination anyway.
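For reference, the following sketch generates the initial 2D Halton points (bases 2 and 3) in [0,1)^2, which would then be mapped into the spot-light-illuminated region of the Light G-Buffer; the subsequent hierarchical warping [2] is omitted here, and the function names are ours:

    #include <vector>
    #include <utility>

    // Radical inverse of i in the given base: the classic Halton construction.
    float radicalInverse(unsigned i, unsigned base) {
        float inv = 1.0f / base, result = 0.0f, frac = inv;
        while (i > 0) {
            result += (i % base) * frac;
            i /= base;
            frac *= inv;
        }
        return result;
    }

    // n low-discrepancy sample positions in [0,1)^2.
    std::vector<std::pair<float, float>> halton2D(unsigned n) {
        std::vector<std::pair<float, float>> pts;
        pts.reserve(n);
        for (unsigned i = 1; i <= n; ++i)
            pts.emplace_back(radicalInverse(i, 2), radicalInverse(i, 3));
        return pts;
    }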

When the sampling of the VPL positions is done, all necessary data of the VPLs is stored in a VPL buffer, which is represented by several 1D textures. This buffer contains each VPL's world-space position as well as the surface normal and material properties of the surface point the VPL is located on. Since both the sampling of the VPL positions and the hierarchical warping are done in image space of the light source, these values can simply be looked up in the Light G-Buffer. Furthermore, the VPL buffer contains two values indicating the irradiance of the VPL due to both its previous VPL (from a previous bounce) and the light source directly. These values are needed later when calculating the VPL's diffuse reflection of light from both the previous VPL and the light source itself.
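Conceptually, one entry of the VPL buffer can be pictured as the following record; the field names are ours, and in the implementation these values are spread over several 1D textures rather than stored as a struct:

    // Sketch of one VPL record, following the description above.
    struct VPL {
        float position[3];           // world-space position
        float normal[3];             // surface normal at the VPL
        float color[3];              // surface color at the VPL
        float diffuse, specular, shininess;  // material properties
        float irradianceFromLight;   // incident light from the spot light
        float irradianceFromPrev;    // incident light from the previous-bounce VPL
    };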

4.3. Setup Imperfect Shadow Maps

For realistic rendering of indirect illumination, considering visibility for VPLs is critical for producing correct lighting effects like indirect shadows. We have to know whether a currently shaded surface point is visible to a VPL or not. If the point is occluded by some other object, light reflected at the VPL does not reach this point and no indirect illumination takes place. Such indirect shadows contribute considerably to scene realism. Figure 4 shows a simple scene setup demonstrating shadows cast from an indirectly illuminated object.

To store the necessary visibility information for each VPL, simple shadow mapping is insufficient, since the area of the scene visible to a VPL covers the whole hemisphere over the VPL's surface point. Therefore, parabolic maps, introduced by Brabec et al. [1], are used. For each VPL, we create an Imperfect Shadow Map (ISM). To support large numbers of VPLs, we use one large ISM buffer that contains the ISMs of all VPLs in the scene. Figure 3 illustrates a part of such an ISM buffer (upper right side of the figure). In a single render pass, the points in the scene are passed through a vertex shader that distributes them over the many ISMs in the buffer. Each point is mapped to an ISM by a parabolic projection, and splatted onto the map with a splat size that depends quadratically on the depth of the point.
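A sketch of this projection is given below. It assumes the direction from the VPL to the point is already normalized and expressed in the VPL's local frame (hemisphere z > 0); the function name, the [0,1] remapping, and the concrete splat-size constants are our illustrative assumptions:

    // Sketch of the parabolic projection [1] that maps a scene point
    // into one VPL's ISM tile.
    struct Splat {
        float u, v;    // ISM texture coordinates in [0,1]^2
        float depth;   // normalized distance, stored for the shadow test
        float size;    // splat size in texels
    };

    Splat parabolicSplat(float dx, float dy, float dz,
                         float dist, float maxDist) {
        // Paraboloid mapping: the hemisphere around +z maps to the unit disk.
        float px = dx / (1.0f + dz);
        float py = dy / (1.0f + dz);
        Splat s;
        s.u = 0.5f * (px + 1.0f);      // remap [-1,1] to [0,1]
        s.v = 0.5f * (py + 1.0f);
        s.depth = dist / maxDist;      // normalized distance to the VPL
        // Splat size grows quadratically with depth, as stated above;
        // the constants here are arbitrary placeholders.
        s.size = 1.0f + 4.0f * s.depth * s.depth;
        return s;
    }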

Figure 4: Indirect shadow realized by Imperfect Shadow Maps. The white sphere and arrow indicate the position and direction of the spot-light source. Only the wall on the right side is directly illuminated, causing the centered cube to cast a smooth indirect shadow.
Figure 5: Comparison between a raw ISM buffer (left) and an improved ISM buffer after a pull-push operation with 2 iterations (right).

Due to this point distribution mechanism, each ISM contains only a subset of the total number of scene points, and in fact each point is rendered into only one ISM. This simplification is sufficient for the creation of an ISM as long as the rendered point cloud models do not consist of too few points and the relative number of VPLs is not too high. This distribution approach makes the time consumed by the ISM setup pass independent of the number of VPLs, which allows for a high number of VPLs at real-time frame rates.

Since ISMs are created by box-splatting a subset of the points in the scene, the resulting parabolic image can contain holes. We can dramatically improve the quality of the ISMs by performing a pull-push closing on the ISM buffer to fill those holes. In our scenes, we found that 2-3 iterations already achieve good improvements (see Figure 5).
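The following compact sketch shows a single pull-push level on one map: the pull phase averages the valid texels of each 2x2 block into a half-resolution map, and the push phase fills holes at the fine level from those averages. A real implementation operates over a mip chain on the GPU; dimensions are assumed even, and all names are ours:

    #include <vector>

    struct Map {
        int w, h;
        std::vector<float> depth;
        std::vector<bool>  valid;   // false marks a hole
    };

    void pullPushOnce(Map& m) {
        int hw = m.w / 2, hh = m.h / 2;
        std::vector<float> coarse(hw * hh, 0.0f);
        std::vector<bool>  cvalid(hw * hh, false);
        // Pull: average the valid texels of each 2x2 block.
        for (int y = 0; y < hh; ++y)
            for (int x = 0; x < hw; ++x) {
                float sum = 0.0f; int n = 0;
                for (int dy = 0; dy < 2; ++dy)
                    for (int dx = 0; dx < 2; ++dx) {
                        int i = (2*y + dy) * m.w + (2*x + dx);
                        if (m.valid[i]) { sum += m.depth[i]; ++n; }
                    }
                if (n > 0) { coarse[y*hw + x] = sum / n; cvalid[y*hw + x] = true; }
            }
        // Push: fill holes at the fine level from the coarse average.
        for (int y = 0; y < m.h; ++y)
            for (int x = 0; x < m.w; ++x) {
                int i = y * m.w + x, c = (y/2) * hw + (x/2);
                if (!m.valid[i] && cvalid[c]) {
                    m.depth[i] = coarse[c];
                    m.valid[i] = true;
                }
            }
    }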

4.4. Accumulating Indirect Illumination

Based upon the visibility information encoded for each VPL in the ISM buffer, indirect illumination shading can be performed. As already mentioned, shading of indirect (and also direct) illumination is performed in image space, i.e. only for the pixels in the Camera G-Buffer. In general, for each pixel we would have to calculate the transported energy from every single VPL to the surface point corresponding to that pixel. Since this can be very time consuming with a large number of VPLs, interleaved sampling, introduced by Keller and Heidrich [5], is used.

In order to correctly render multiple specular bounces, we have to incorporate both the diffuse and the specular component of the incident light when shading specular reflections, considering that the color of specularly reflected light shifts over several bounces due to admixed diffuse light. Figure 6 illustrates this concept. To illuminate a surface point P by a VPL, we have to sum the diffuse energy (green arrow incident at P) and the specular energy (blue arrow incident at P) reflected from the VPL.

Both the Lambertian cosine term at the VPL position (diffuse component) and the orientation of the Phong lobe (main reflection direction R, which governs the intensity of the specular component) depend on the position of the previous light source. In Figure 6, this is the spot light; at later bounces, it is the previous VPL. Thus, for each bounce we need to maintain the set of previous VPLs corresponding to the current VPLs. Furthermore, for each bounce and each VPL, we cache the incoming light from the previous VPL scaled by the cosine of the light incident angle (geometry term), in order to look up the diffuse contribution of any incident light ray.
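Putting these pieces together, the contribution of one VPL to a shaded point P can be sketched as below. This Phong-style sketch makes simplifying assumptions of its own (scalar intensities instead of RGB, the distance falloff omitted, the ISM shadow test reduced to a boolean flag) and is an illustration rather than the exact shader of our implementation:

    #include <cmath>
    #include <algorithm>

    struct Vec3 { float x, y, z; };
    static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
    static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  scale(Vec3 a, float s){ return {a.x*s, a.y*s, a.z*s}; }
    static Vec3  normalize(Vec3 a)     { return scale(a, 1.0f / std::sqrt(dot(a, a))); }
    // Reflect direction l (pointing away from the surface) about normal n.
    static Vec3  reflect(Vec3 l, Vec3 n) { return sub(scale(n, 2.0f*dot(n, l)), l); }

    float vplContribution(Vec3 p, Vec3 np,          // shaded point and its normal
                          Vec3 q, Vec3 nq,          // VPL position and its normal
                          Vec3 toPrevLight,         // VPL -> previous light/VPL
                          float cachedIrradiance,   // cached incoming light at the VPL
                          float kd, float ks, float shininess,  // VPL material
                          bool visible)             // result of the ISM shadow test
    {
        if (!visible) return 0.0f;                  // occluded: no indirect light
        Vec3 toP = normalize(sub(p, q));
        // Diffuse fraction: Lambert cosine at the VPL toward P.
        float cosVPL  = std::max(dot(nq, toP), 0.0f);
        float diffuse = kd * cachedIrradiance * cosVPL;
        // Specular fraction: Phong lobe around the main reflection direction R.
        Vec3 R = reflect(normalize(toPrevLight), nq);
        float specular = ks * cachedIrradiance
                       * std::pow(std::max(dot(R, toP), 0.0f), shininess);
        // Geometry term at the receiver P; the distance falloff is omitted
        // (and clamped in practice to avoid the singularity artifacts
        // discussed in Section 5).
        float cosP = std::max(dot(np, normalize(sub(q, p))), 0.0f);
        return (diffuse + specular) * cosP;
    }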


Figure 6: Illustration of the specular reflection computation. At each reflection point, N is the surface normal and R is the direction of maximum specular reflection. The light incident at P is the sum of the diffusely reflected light (green) and the specularly reflected light (blue) at a VPL, resulting in a slightly changed color of the light reflected at P.

4.5. VPL Redistribution

Multiple light bounces are simulated by an importance-guided redistribution of the VPLs. For each existing VPL, we choose as the new VPL the scene surface point that can contribute the most radiance to the result in the next bounce.

All scene points are passed through a shader that distributes them among the current VPLs and renders a buffer containing the new set of VPLs. For each existing VPL v, to select the best candidate for a new VPL, our shader assigns to each incoming candidate point c a z-value in the depth buffer that corresponds to the VPL quality of c. This quality value depends on whether c is visible to the current VPL v at all, how much energy is radiated from v in the direction of c, and how much energy c itself is able to reflect. We set up the depth test such that, for each current VPL, the GPU depth test automatically retains in the render target those scene points that are best suited as new VPLs. This buffer is then used as the new VPL buffer for the next light bounce.
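The quality value can be illustrated as follows. The exact weighting and the mapping of quality to a z-value are our assumptions; the essential point is that higher quality must map to a smaller depth, so that a standard 'less' depth test retains the best candidate:

    #include <algorithm>

    // Quality of candidate point c as a new VPL for current VPL v,
    // combining the three criteria described above.
    float candidateQuality(bool visibleToVPL,     // ISM lookup against v
                           float radiatedToward,  // energy v radiates toward c
                           float reflectivity)    // energy c itself can reflect
    {
        if (!visibleToVPL) return 0.0f;           // occluded candidates never win
        return radiatedToward * reflectivity;
    }

    // Invert so the highest quality wins under a 'less' depth test,
    // clamped to the valid [0,1] depth range.
    float qualityToDepth(float quality) {
        return 1.0f - std::min(std::max(quality, 0.0f), 1.0f);
    }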

The VPL redistribution policy described above ensures that for VPLs located on very glossy surfaces, importance is drawn towards the main reflection direction, i.e. new VPLs are more likely to be located in the direction the Phong lobe points to. Since from the second light bounce on, the main reflection direction depends on the position of the previous VPL, we always need to store the set of previous VPLs, as already pointed out in Section 4.4.

Figure 7: Point cloud scene illuminated by a spot light inside a Cornell box, rendered with 256 VPLs and (left to right) 1, 2 and 3 light bounces. Apart from distortion introduced by the tone-mapping operator, the biggest difference in the lighting clearly occurs between the first and the second bounce, while the third bounce contributes only minimal additional illumination (right).
Figure 8: Frame-rate dependency on the number of light-bounces and the number of VPLs in the scene shown in Figure 7.
Figure 9: Point cloud scene of St. Stephan's Cathedral in Vienna, illuminated by a spot light and rendered with 256 VPLs and (left to right) 1, 2 and 3 light bounces.
Figure 10: Comparison of frame-rates with 1, 2 and 3 light bounces using 1x1, 2x2 and 4x4 subdivisions for interleaved sampling for the scene shown in Figure 9 (20.6M points).

5. Results

Our algorithm supports multiple indirect light bounces at interactive frame-rates. However, in our test scenes, rendering 2-3 bounces already covered the dominant part of the global scene illumination (see Figures 7 and 9).

Figure 7 compares several light bounces in a small Cornell-box point cloud scene (4.7M points and 256 VPLs) containing a few boxes and an Asian dragon model. Figure 8 illustrates the dependency of the frame rate on the number of light bounces, the number of VPLs, and the size of the ISM buffer used in this scene. At 128 and 64 VPLs, we used a smaller ISM buffer size (with a different resolution per ISM). Note that each light bounce requires an additional draw pass over all scene points to create a new ISM buffer.

Figure 9 shows our GI render mode in a scene containing the scanned dataset of St. Stephan's Cathedral in Vienna, comparing 1, 2 and 3 light bounces for a big spot light illuminating the floor of the cathedral from the ceiling. The scene was rendered using 256 VPLs. After view-frustum culling, 20.6 million points are rendered from this viewpoint. Besides the point count, performance depends strongly on the number of indirect light bounces, the subdivision used for interleaved shading, and the number of VPLs. Figure 10 compares the frame rates achieved at different settings for the cathedral point cloud scene.

The frame-rates shown for both the Cornell-Box and the Cathedral scene were achieved on a platform with an Intel Xeon X5550 2.67GHz CPU and a GeForce GTX 285 GPU with 1 GB RAM.

Due to the importance sampling of the VPL positions by hierarchical warping (concentrating more VPLs on shiny surfaces), we are also able to reproduce caustics from curved surfaces (as shown in Figure 11) without the use of additional VPLs. Furthermore, hierarchically warping the VPLs counteracts a weakness of Virtual Point Lights: the tendency to create sparkles when placed on surfaces with a high specular component (see Figure 12). The appearance of those sparkles is often due to an undersampling of VPLs in highly shiny areas, which cannot be avoided when a scene with balanced reflection behavior contains too few VPLs. Another case of such lighting artifacts often appears when VPLs are placed close to corners in the scene geometry. Since the VPL itself represents a singularity in radiance, sparkle artifacts can also appear on surfaces very close to a VPL, which receive a large part of its radiated energy. A simple way to deal with these artifacts is to clamp the light contribution of a VPL, which however introduces a bias into the scene illumination.

Figure 11: Left: caustic created by a ring. Right: Illumination of a parabolic surface, causing a highlight at its focal point at a nearby wall.
Figure 12: Scene with highly glossy materials. Due to the approximation of area by VPLs, singularities (light sparkles) become visible at more glossy surfaces.

6. Conclusions

We have shown a way to perform global illumination on point cloud scenes at real-time frame rates. Our approach benefits from the efficiency of Imperfect Shadow Maps for the visibility calculation of Virtual Point Lights, and is able to calculate diffuse and specular reflections for multiple indirect light bounces. It works best in scenes containing geometry with a high diffuse reflection component and lower specular intensity, since surfaces with too high a specular intensity can result in unintended sparkle artifacts.

Our implementation handles only a single spot-light source, since this simplifies importance sampling of VPLs over a limited area (in light view space) for the first bounce. Directional light sources would work the same way, using an orthographic instead of a perspective projection. Point lights, on the other hand, should be handled differently, since covering all directions with the same G-Buffer setup would require several Light G-Buffers (e.g. the six faces of a cube map). For this light source type, the use of two parabolic maps, each storing one hemisphere, would be more appropriate.

When extending the method to multiple light sources, the number of VPLs seeded per light source has to be inversely proportional to the number of lights in order to maintain equal frame rates. However, distributing the VPL budget over the different light sources in an importance-driven manner, depending on each light source's expected overall illumination contribution (intensity, distance to the viewer), could make such an algorithm more effective.

7. Acknowledgements

We want to thank Martin Knecht and Claus Scheiblauer. This research was supported by the Austrian FIT-IT Visual Computing initiative, project Terapoints (no. 825842).


References

[1] Stefan Brabec, Thomas Annen, and Hans-Peter Seidel. Shadow mapping for hemispherical and omnidirectional light sources. In Proc. of Computer Graphics International, pages 397-408, 2002.
[2] Petrik Clarberg, Wojciech Jarosz, Tomas Akenine-Möller, and Henrik Wann Jensen. Wavelet importance sampling: Efficiently evaluating products of complex functions. ACM Transactions on Graphics, 24(3), July 2005.
[3] James T. Kajiya. The rendering equation. Computer Graphics (Proceedings of ACM SIGGRAPH '86), 20(4):143-150, August 1986.
[4] Alexander Keller. Instant radiosity. In ACM SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 49-56, New York, NY, USA, 1997. ACM Press/Addison-Wesley Publishing Co.
[5] Alexander Keller and Wolfgang Heidrich. Interleaved sampling. In Proceedings of the 12th Eurographics Workshop on Rendering Techniques, pages 269-276, London, UK, 2001. Springer-Verlag.
[6] T. Ritschel, T. Grosch, M. H. Kim, H.-P. Seidel, C. Dachsbacher, and J. Kautz. Imperfect shadow maps for efficient computation of indirect illumination. ACM Transactions on Graphics, 27(5):1-8, 2008.
[7] Michael Wimmer and Claus Scheiblauer. Instant points. In Proceedings of the Symposium on Point-Based Graphics 2006, pages 129-136, Boston, USA, July 2006. Eurographics Association.