Tamy Boubekeur, Patrick Reuter, Christophe Schlick
LaBRI - INRIA - CNRS - University of Bordeaux, France
http://www.labri.fr/~boubek
In the case of explicit reconstruction, the resulting surface can directly be enumerated, usually described as a polygonal mesh or a spline surface. Actually, explicit reconstruction methods can themselves be divided into two main categories. First, there are dynamic approaches, basically derived from the principle of deformable models developed in the field of computer vision [BI98] [KWT87], where an energy criterion [DYQS04] is minimized by surface deformation. For instance, the Intelligent Balloons [DQ01] provide an interesting topological flexibility for high-genus surfaces. Such dynamic approaches usually generate high-quality meshes, with in particular a semi-regular mesh topology coming from subdivision steps applied during the dynamic fitting. However, the global error optimization makes them extremely slow and thus only applicable to PBS with less than a few thousand points. Second, there are combinatorial approaches, where the surface reconstruction is cast as a topology enumeration problem. One of the key ingredients of such methods is the Voronoi diagram, which provides optimal neighborhood information over the point set. Its geometric dual, the Delaunay triangulation, has been extensively used for generating a triangular mesh over a point set [BC00,CSD02]. The Voronoi diagram offers several guarantees when density information is available for the input point set. In particular, the PowerCrust algorithm [ACK01] as well as the Cocone algorithm [DGH01,DG03] are based on a 3D Delaunay triangulation from which a set of triangles consistent with a 2-manifold interpolating the point set is extracted when the ε-density is provided. Such methods are slow, but their guarantees make them interesting for automatic surface reconstruction. At the other end of the spectrum, Gopi et al. [GKS00,GK00] proposed a lower dimensional reconstruction method, fast enough to be applied to larger point sets, but without guarantees. Linsen et al. [LP03] have also explored the idea of local triangulation by considering a simple topological structure, the k-neighborhood of each sample, and quickly generating fan clouds. Combinatorial approaches differ from dynamic ones by guaranteeing more properties on the final surface (interpolation, computation time), but without semi-regular connectivity. They have been successfully applied to PBS of up to a few million points in reasonable computation time (e.g., 1 hour for 1M points for Cocone). All explicit methods share limitations in their ability to robustly handle noise and to fill large holes that may be present in the input point cloud.
In the case of implicit reconstruction, the resulting surface is defined implicitly as the zero-set of a function f. Smoothness, volumetric description, natural hole filling and denoising are examples of the advantages of implicit surface fitting, to name a few. Here again, most of the existing techniques fall into two main categories, both borrowed from the scattered data fitting literature [FN80]. In the first one, an implicit surface that interpolates the point set is computed as a linear combination of Radial Basis Functions (RBF). Initially limited to small point clouds [SPOK95,TO02], this approach has been extended to very large ones by using two different improvements: compactly supported basis functions [OBS04] and hierarchical partitioning combined with Partition of Unity methods [OBA+03,TRS06,Wen02,LM02]. In the second one, implicit reconstruction approaches are based on the Moving Least Squares (MLS) approximation [Lev98], which allows one to reconstruct a piecewise polynomial function that approximates an unorganized point set. MLS can be used either as a projection operator [ABCO+01] or as an implicit fitting technique for polygon soups [SOS04]. Recent advances have extended their application to sharp feature detection [FCOS05] and fitting [RJT+05], as well as adaptive meshing of noisy point clouds [SFS05]. The main advantage of implicit reconstruction techniques is their robustness against non-uniformly sampled PBS. Their main drawback, as usual with implicit surfaces, is that they cannot be directly rendered by existing graphics hardware: they cannot be easily enumerated and thus require an additional conversion step [LC87,Blo88,Blo94,KBSS01,OB02].
Recently, most approaches have focused on implicit surface reconstruction of PBS. The main reason for this is that the principle of Partition of Unity makes it possible to achieve a purely local reconstruction, which results in an O(n) complexity, where n is the size of the point set. To our knowledge, this is currently not possible with an explicit reconstruction, which offers an O(n log n) complexity at best. So for very large point sets, implicit reconstruction techniques should definitely be faster than explicit ones. But as stated above, implicit surfaces have to be tessellated before being rendered by the graphics hardware. As the existing work has shown, this tessellation is not that straightforward. So the pros and cons of both approaches are not that easy to balance.
The work presented in this paper is based on the following postulate: since the reconstruction of a PBS that guarantees geometric continuity cannot be achieved with linear complexity, loosening the constraints and seeking only ``visual continuity'' may allow us to reach such a linear complexity.
It should be noted that point-based splatting techniques [RL00,ZPBG01,BSK05] offer such a visual continuity by working in image-space. Unfortunately, splatting techniques require a specific modification of the graphics hardware pipeline to be efficient. Even if this modification is relatively easy with recent hardware (by using vertex or fragment shaders), splatting is still limited in terms of appearance and its performance quickly decreases when the resolution of the images is increased. We believe that object-space explicit reconstruction would definitely offer a better quality vs. cost tradeoff.
The approach that may be most closely related to ours is the patch generation proposed in [ABCO+01] for the visualization of MLS surfaces. Their algorithm generates a collection of patches which are projected onto the MLS surface. Unfortunately, their process produces strong visible artefacts as the patches are restricted to a grid topology, and the boundaries do not blend correctly in overlapping zones.
The remainder of this paper is organized as follows: Section 2 presents a general overview of our algorithm, Sections 3 to 6 focus more precisely on the different steps involved (reconstruction, partitioning, subdivision, rendering). Section 7 presents some experimental results and Section 8 concludes by proposing some future research directions.
As stated above, we are seeking an explicit object-space surface reconstruction that offers convincing visual continuity in linear time. One of the central ideas of our approach is to clearly separate the global topology extraction from the local geometric reconstruction of the surface. An overview of the algorithm is illustrated in Figure 1: after having space-partitioned an unstructured PBS, a local reconstruction, done in each cell of the partition, provides a set of disjoint 2-manifolds with boundaries, all of genus 0. Finally, a visually continuous surface is obtained by aggregating these 2-manifolds.
Note that our current implementation requires a PBS where the points are all equipped with a normal vector. This is quite a usual assumption when working with PBS. When this normal vector information is not available, it is possible to generate it at each point by a principal component analysis of the neighborhood [HDD+92,GKS00].
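As an illustration, here is a minimal sketch of such a PCA-based normal estimation (the function name and interface are ours, not taken from the cited works):

```python
import numpy as np

def estimate_normal(neighbors):
    """Estimate a normal as the eigenvector of the covariance matrix of the
    neighborhood associated with the smallest eigenvalue [HDD+92]."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # direction of least variance
```

Note that the sign of the estimated normal is ambiguous and must be made consistent over the PBS, for instance by propagation over a neighborhood graph.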
In order to be able to handle large point clouds, a totally localized approach is chosen: all testing, sorting or selecting operations involved in the process are only done on a reduced set of points. This principle enables several extensions that we are currently studying: massively parallel implementation of the algorithm, out-of-core reconstruction [Sil03] or lazy reconstruction based on different criteria (see Section 6). Our algorithm can be decomposed into three steps: space partitioning, local reconstruction in each cell, and aggregation of the local surfaces for rendering.
The following notations are used in the remainder of the paper:

P | the set of input points
P_i | the i-th partition of P
C_i | the space-partitioning cell associated to the partition P_i
r_i | the radius of the cell C_i
n_i | the number of points in the cell C_i
S_i | the locally reconstructed surface on P_i
N_i | the set of points lying in neighboring cells of C_i
p_j | the j-th point of P_i
n_j | the normal vector of the point p_j
c_i | the centroid of the partition P_i
n̄_i | the average normal vector of the partition P_i
⟨u, v⟩ | the scalar product between two vectors u and v
3.1 Local Surface Reconstruction
The core step of our algorithm is the local reconstruction process in the cells of the octree. For the sake of clarity, the initial partitioning step will only be explained in Section 4, because it strongly relies on the way the local reconstruction is done. The goals that we seek for this local reconstruction are the following:
Actually, as our data set is a PBS, we know that all points lie on a given surface: in other words, if we are sufficiently close to the surface, the problem can be considered as a 2D reconstruction problem. In a reasonably small neighborhood, the surface can be considered as a local height map: each point can be expressed by a height function over some local average plane. If this assumption is fulfilled (see below), we can indeed use a 2D triangulation to reconstruct the topology between points. This will generate a 2-manifold made of triangles, interpolating the point cloud.
So all we need is a predicate that ensures that the local point set P_i can be expressed as a height map. We consider that this predicate is true when two threshold criteria are satisfied: one on the point distribution of P_i, and one on its normal distribution (the deviation of the point normals from the average normal n̄_i). Both thresholds are intuitive enough to be interactively set by the user, who can quickly tune the partitioning (see Section 4) according to the specific characteristics of the PBS. Note that this approach is only sub-optimal, because P_i is only determined by the partitioning step and thus does not maximize the area of the height surfaces. We mainly chose this formulation for its speed and low implementation cost. One of the main advantages of having a double criterion, acting simultaneously on the point distribution and the normal distribution, is that a whole set of ``difficult'' cases is correctly detected (see Section 7).
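As an illustration of this double criterion, here is a minimal sketch (assuming unit normals; the threshold names `max_radius` and `min_normal_dot` are hypothetical stand-ins for the two user-defined thresholds):

```python
import numpy as np

def is_height_map(points, normals, max_radius, min_normal_dot):
    """Sketch of the height-map predicate: the local point set must be small
    enough (point distribution) and its normals must not deviate too much
    from the average normal (normal distribution)."""
    centroid = points.mean(axis=0)
    avg_normal = normals.mean(axis=0)
    avg_normal /= np.linalg.norm(avg_normal)
    # point-distribution criterion: radius of the local point set
    radius = np.max(np.linalg.norm(points - centroid, axis=1))
    if radius > max_radius:
        return False
    # normal-distribution criterion: every normal close to the average one
    return bool(np.all(normals @ avg_normal > min_normal_dot))
```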
When this predicate is true for all partitions P_i, the reconstruction can be totally achieved in 2D, and a piecewise 2D Delaunay triangulation will generate a collection of 2-manifolds. Note that Gopi et al. [GKS00] have explored a similar idea of lower dimensional reconstruction, but they considered only isolated points. We propose a more general approach working on the whole local cell C_i provided by the partitioning step. In other words, our projection-based method is purely driven by the local geometry of the input PBS, while Gopi's approach is constrained by the approximate topology imposed by a given k-neighborhood. The algorithm used for each cell C_i computes the local average plane, projects the points of P_i onto this plane, triangulates the projected points in 2D, and reuses the resulting connectivity for the original 3D points.
Let us give some additional comments about this algorithm. First, the computation of the average plane is extremely fast when all points are equipped with normal vectors. As said above, if the normal vector information is missing, it can be retrieved at a reasonable cost by performing a principal component analysis on P_i (see [HDD+92] for more details). Second, our projection is similar to the one proposed by Gopi et al. [GKS00], but we consider a common local plane for the whole partition P_i that belongs to the cell C_i, which is much more efficient. Finally, we chose to implement an incremental Delaunay triangulation. This is not the optimal choice (in general, Fortune's algorithm would be better), but as the local sets P_i are small, there is no strong difference. On the other hand, incremental Delaunay is interesting as we aim to deal with progressive transmission of PBS, such as raw data coming from a 3D range scanner.
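To make these steps concrete, here is a minimal sketch of the local reconstruction in one cell, assuming at least three non-collinear points equipped with unit normals; the 2D triangulation is delegated to SciPy rather than the incremental scheme discussed above:

```python
import numpy as np
from scipy.spatial import Delaunay

def reconstruct_cell(points, normals):
    """Local reconstruction in one cell: average plane, 2D projection,
    Delaunay triangulation; the original 3D points are kept as the
    vertices of the resulting triangle mesh."""
    centroid = points.mean(axis=0)
    n = normals.mean(axis=0)
    n /= np.linalg.norm(n)                       # average normal of the cell
    # build an orthonormal basis (u, v) of the average plane
    u = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:                 # n almost parallel to the x-axis
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    # project the points onto the plane (height map assumption)
    d = points - centroid
    uv = np.stack([d @ u, d @ v], axis=1)
    triangles = Delaunay(uv).simplices            # 2D topology extraction
    return points, triangles                      # connectivity lifted back to 3D
```

The connectivity computed in 2D is directly reused for the original 3D points, which is what makes the local reconstruction interpolating.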
To ensure visual continuity, we propose to create an overlap between neighboring surfaces. This can simply be done by applying the local reconstruction algorithm on partitions that are temporarily enlarged to their neighborhood (see Figure 4). More precisely, each cell is enlarged by a given factor to include points belonging to neighboring cells. This enlargement factor can be set interactively by the user or can be deduced from the overall density of the PBS. If the PBS has a known ε-density (i.e., ε is the maximal value such that each ball of radius ε, centered at each point of P, contains no other point), we just have to enlarge the support by ε to ensure a correct overlap. Unfortunately, this density information is not always available. In that case, we may express the enlargement relative to the cell size: in each cell C_i, the local reconstruction operator is applied to P_i ∪ N_i, where N_i is the set of points of neighboring cells lying inside the enlarged support of C_i. Experimental results have shown that a moderate relative enlargement offers good results on our whole set of various models. Note that we need to preserve the height map property during the enlargement process, and thus we have to test the validity of the predicate on the enlarged point set.
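A minimal sketch of the enlargement, assuming the ε-density is known and using a spherical support around the cell centroid instead of the exact enlarged cube:

```python
import numpy as np

def enlarge_cell(cell_points, neighbor_points, cell_center, cell_radius, margin):
    """Temporarily enlarge a cell: keep the points of neighboring cells that
    fall inside the cell radius extended by `margin` (e.g. the ε-density)."""
    dist = np.linalg.norm(neighbor_points - cell_center, axis=1)
    extra = neighbor_points[dist <= cell_radius + margin]
    return np.vstack([cell_points, extra])
```

The height map predicate of Section 3 must then be re-tested on the enlarged set before triangulating.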
The local reconstruction operator is interpolating, which means that the local meshes reconstructed in neighboring cells share the points that belong to the overlapping zone. As the Delaunay triangulation always generates a locally optimal triangulation, the overlapping zone shared by two neighboring cells is very often triangulated in exactly the same way for both cells, which offers a perfect correspondence of the generated triangles. Sometimes the correspondence is only approximate (e.g., for a very high curvature-to-density ratio), but even in that case there is no strong degeneration, which means that a visually continuous impression is always provided during rendering, even under a strong close-up (see Figure 5).
The cell enlarging mechanism used by our algorithm implies that the total number of points involved in the local triangulations is slightly larger than n, but remains proportional to n. Each point added to the triangulation needs to be tested against the set of already generated triangles. Thanks to a randomized incremental triangulation, the computational complexity can be reduced to O(m log m) when inserting m points into the triangulation. In our approach, the n points of the initial point set are distributed over the octree leaves. If we bound the number of points per cell by an arbitrary constant value c, the total number of leaves will be about n/c cells. The computational complexity for a cell can then be bounded by O(c log c), which is a constant. So the complexity of the complete triangulation is about (n/c) O(c log c), which is equivalent to O(n). To be exhaustive, we also have to include the complexity of the projection and re-projection operators, but both of them are also linear. So, as expected by our initial postulate, the global complexity of our reconstruction technique is linear in the number of points, both for storage and computation. Note that the partitioning time, in O(n log n), has no strong influence in practical cases (see Table 1).
Our local approach imposes a partitioning structure that approximates the global topology, so as to allow the enlarging process of the partitions for the overlap. Among possible candidates, we have chosen the octree for its hierarchical multiresolution structure with cells of regular shape, and for the geometrically adaptive approximation that it provides for the topology of the point cloud. In particular, the genus of the surface is easily retrieved (see Figure 6), which simplifies the local reconstruction step. Moreover, as we will see, one drawback usually pointed out for the octree, namely the affine dependence of the generated partitioning (i.e., the partition differs according to the position and orientation of the initial bounding box), can be dramatically reduced in our case (see below).
With the help of the height map predicate defined in Section 3, the octree is recursively subdivided until every leaf cell can be locally reconstructed. In order to preserve a good balance between the computational costs of all the local triangulations, a cell that satisfies the predicate will nevertheless be subdivided if its number of points n_i exceeds a given threshold value (see Section 7 for the influence of this threshold on performance). In addition to balancing the computational cost, we have noted that a small maximum number of points per cell also decreases the affine dependence of the partitioning, as it balances the size of the leaves (see Figure 8).
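A minimal sketch of this subdivision rule, assuming the height map predicate of Section 3 is available as a callable `is_height_map(points, normals)` with its thresholds already bound, and using a hypothetical `max_points` threshold:

```python
import numpy as np

def build_leaves(points, normals, center, size, max_points, min_size, is_height_map):
    """Recursively subdivide a cubic cell until it satisfies the height-map
    predicate and contains at most max_points samples.  min_size is a simple
    safeguard against infinite recursion on degenerate data."""
    if size <= min_size or (len(points) <= max_points
                            and is_height_map(points, normals)):
        return [(points, normals, center, size)]
    leaves = []
    for dx in (-0.25, 0.25):
        for dy in (-0.25, 0.25):
            for dz in (-0.25, 0.25):
                child_center = center + size * np.array([dx, dy, dz])
                # points on a child boundary may fall into several children;
                # a strict assignment convention would avoid duplicates
                inside = np.all(np.abs(points - child_center) <= size / 4.0, axis=1)
                if inside.any():
                    leaves += build_leaves(points[inside], normals[inside],
                                           child_center, size / 2.0,
                                           max_points, min_size, is_height_map)
    return leaves
```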
In order to achieve an overlapping between locally reconstructed surfaces that respects the global topology of the object, we use the volumetric neighborhood provided by the octree. As the leaves of the octree are exclusively made of cubes, the neighborhood searching can be done with a very simple algorithm:
Let A and B be two cells of the octree, and let e_A and c_A (respectively e_B and c_B) be the edge length and the center of A (respectively B). Let e_min be the edge length of the smallest cell of the octree, and let d_x(v) be the Euclidean norm of the projection of the vector v onto the axis x, i.e., the absolute value of its x-coordinate. Then B is considered a neighbor of A if and only if, for each axis x: d_x(c_A − c_B) ≤ (e_A + e_B)/2 + e_min.
In words, this means that the surface is localized in the volumetric layer approximated by the leaves of the octree, and this layer represents a good approximation of the neighborhood on the surface. Using this algorithm, computing the neighborhood of the whole set of cells for a point cloud made of 450 000 points takes less than a second on a standard workstation. Such a speed allows the user to interactively tune the partitioning parameters if the topology does not seem to be correctly retrieved.
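Based on the reconstruction of the test given above (the exact tolerance term is an assumption), a minimal sketch reads:

```python
import numpy as np

def are_neighbors(center_a, edge_a, center_b, edge_b, min_edge):
    """Two octree leaves are considered neighbors when, on every axis, the
    distance between their centers does not exceed the sum of their half
    edge lengths plus the edge length of the smallest leaf (tolerance)."""
    gap = np.abs(np.asarray(center_a) - np.asarray(center_b))
    return bool(np.all(gap <= (edge_a + edge_b) / 2.0 + min_edge))
```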
After the partitioning (Section 4) and the local triangulation (Section 3), we have a collection of 2-manifolds that are disjoint but do overlap. Each manifold is defined by a triangular mesh made of about 40 triangles. The next step of our algorithm is the application of a subdivision scheme on each surface S_i. After having tested several classical schemes, it appeared that the Loop subdivision scheme [Loo87,ZS00] achieves the best results for our purpose: it ensures a limit surface that is C2 almost everywhere (C1 only at extraordinary vertices), which offers a nice noise filtering feature, and it is able to reproduce sharp edges if required [HDD+94,BLZ00].
In our algorithm, the interest of using subdivision is twofold. First, a few subdivision passes applied on the set of overlapping surfaces increase the final visual quality by converging every S_i to a smooth surface (see Figure 9). Second, the subdivision step makes it possible to better glue together the overlapping surfaces. Let us detail this process: S_i is constructed as a good local approximation of the underlying surface in the cell C_i, but not necessarily in its neighborhood (i.e., in the overlapping zones between S_i and its neighboring surfaces). Let S_i' denote the part of S_i localized outside of C_i (the overlapping zones). To glue S_i' onto the neighboring surfaces, we simply propose to project each point of S_i' generated during the subdivision onto the corresponding neighboring surface.
An efficient projection is to compute the intersection between the neighboring surface and the ray starting from the point along its normal vector. This solution does not guarantee to find an intersection for each point, but is sufficient in practice. A more robust projection operator could be developed, based for instance on the MLS surface defined by the points of an overlapping zone, but it would obviously be much slower.
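A minimal sketch of this ray-along-the-normal projection, assuming the neighboring surface is given as a vertex array and a triangle index list, and using a standard Möller-Trumbore ray/triangle intersection:

```python
import numpy as np

def ray_triangle(origin, direction, a, b, c, eps=1e-9):
    """Möller-Trumbore intersection; returns the signed distance t or None."""
    e1, e2 = b - a, c - a
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:
        return None
    inv = 1.0 / det
    s = origin - a
    u = (s @ p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = (direction @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    return (e2 @ q) * inv

def project_on_surface(point, normal, vertices, triangles):
    """Project a subdivided point onto a neighboring surface by shooting a
    ray along its normal (both orientations) and keeping the closest hit."""
    best = None
    for i, j, k in triangles:
        t = ray_triangle(point, normal, vertices[i], vertices[j], vertices[k])
        if t is not None and (best is None or abs(t) < abs(best)):
            best = t
    return point + best * normal if best is not None else point
```

When no intersection is found, the point is simply left in place, which matches the behavior described above.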
Like all explicit reconstruction techniques, our algorithm generates polygonal surfaces which can be directly handled by the graphics hardware pipeline. Moreover, the octree partitioning provided for the resulting surface aggregate can be used to speed up the rendering by improving occlusion detection algorithms [Dur00]. Our current implementation is based on the OpenGL API and offers an interactive visualization of point-based surfaces that takes full benefit from the whole optimized pipeline implemented in graphics hardware. Figure 10 (left) shows an interactive rendering of the Stanford dragon using a conventional Gouraud shading with 3 colored point light sources. Figure 10 (center) shows a bottle rendered with hardware environment mapping. Note that even if the bottle is very non-uniformly sampled, the reconstruction is very smooth and does not show any artefact in the specular reflection.
Ray-tracing is another common way to obtain high-quality rendering. A straightforward approach would be to perform the surface reconstruction with our algorithm, and then to submit the resulting object to the ray-tracing engine. However, for a single image, or even an animated sequence, a large part of the surface is likely to stay hidden, so reconstructing these hidden parts is of no use. We propose here to perform a lazy reconstruction that benefits from our local approach. The idea is to prune the tree and to reconstruct the surface only in the leaves of the octree that are actually intersected by rays. With such a lazy reconstruction, the smaller the leaves, the better the pruning, so the optimization is particularly efficient for large point-based surfaces. Our current implementation provides a plugin for the well-known Povray ray-tracing engine [Pov04]. Figure 10 (right) shows the Stanford Dragon rendered with that plugin.
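A minimal sketch of such a lazy scheme (the names are ours, and the actual Povray plugin is not reproduced here), assuming a slab-based ray/box test and a per-leaf mesh cache:

```python
import numpy as np

def ray_hits_box(origin, direction, box_min, box_max):
    """Slab test for a ray against an axis-aligned cell (assumes no exactly
    zero direction component, for brevity)."""
    inv = 1.0 / direction
    t1 = (box_min - origin) * inv
    t2 = (box_max - origin) * inv
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return bool(t_near <= t_far and t_far >= 0.0)

class LazyLeaf:
    """Octree leaf whose local mesh is only reconstructed when a ray reaches it."""
    def __init__(self, points, normals, box_min, box_max, reconstruct):
        self.points, self.normals = points, normals
        self.box_min, self.box_max = np.asarray(box_min), np.asarray(box_max)
        self.reconstruct = reconstruct   # e.g. the local reconstruction of Section 3
        self.mesh = None                 # filled on demand

    def mesh_for_ray(self, origin, direction):
        if not ray_hits_box(origin, direction, self.box_min, self.box_max):
            return None
        if self.mesh is None:            # reconstruct at most once per leaf
            self.mesh = self.reconstruct(self.points, self.normals)
        return self.mesh
```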
As expected, the main problems generated by our method may occur in the overlapping zones. It is vital for the sampling to be dense enough in high curvature zones (actually, this is more a sampling problem than a reconstruction one), because the projection operator used for the 2D triangulation may otherwise degenerate. Fortunately, the problem can often be solved by allowing an additional subdivision of the octree cell. As mentioned above, our partitioning is fast enough to let the user tune its intuitive parameters according to the features of the object.
Figure 11 shows the evolution of computation time according to the number of points. We used the Stanford dragon at different resolutions that were obtained by down-sampling the original high resolution model. As expected, the curve presents a linear behavior when the size of the model is increased.
We have also tested the variation of the computation time according to the threshold value (i.e., the maximum number of points allowed per cell) used during the partitioning step. Figure 11 also shows that good performance is maintained between 5 and 40 points per cell. This test was done with the Stanford dragon model (437 646 points).
The results provided by our first implementation are encouraging and offer some interesting directions for future work:
Acknowledgements: We would like to thank the Stanford Computer Graphics Laboratory and Polygon Technology for providing the models used in this article.