A 3D TEXTURE-BASED RENDERING ALGORITHM

 

 

S.I. Vyatkin

Institute of Automation and Electrometry SB RAS, Russia

sivser@mail.ru

 


Abstract:

The problem of mapping a 2D texture array onto an arbitrarily oriented plane in 3D space and onto curvilinear surfaces is considered, as is a method of mapping a 3D texture array onto freeform surfaces and volumes. A high-speed and efficient volume rendering algorithm is proposed. New methods of object representation using unified free forms and arbitrary algebraic surfaces are suggested as well. The proposed algorithm lowers the computational complexity and the memory and bandwidth requirements in comparison with known methods. In this paper, we propose a texture memory management policy that replaces the classical policy of assigning one texel per voxel, applied for the volume representation in texture space. The texture is an object (not itself mapped) that changes the properties of another object; its distinctive feature is that any object may serve as a texture. The data structure is supplemented accordingly: each object can hold a reference to another object that acts as a texture for it.

Key words: 3D Texture, Volume Visualization, Recursive Multi-Level Ray Casting. 

 

 

1. Introduction

 

Virtual reality systems, where the viewer is immersed in the model world, require high-quality visualization. Voxel-based methods have been developed that create a 3D view directly from the volume data. The VolumePro 500 and VolumePro 1000 accelerators by TeraRecon Inc. were the first to allow high-quality, real-time volume rendering (30 frames/sec) on PC-class computers [1].
Surface-based methods first create an intermediate surface representation of the object to be shown. Both surface- and voxel-based methods have their merits; which one should be used for a particular application depends on the visualization goals. We present a 3D texture-based recursive multi-level ray-casting algorithm for rendering volume data such as voxel data and real continuous functions.

 

2. Texture mapping

 

Texture mapping on surfaces is an effective method for increasing realism in computer graphics systems. Textures allow one, first of all, to simulate a color pattern on surfaces, as well as transparency, sharp boundaries, moving objects, and many other special effects. The texture appreciably increases the visual complexity of the image for a rather small amount of computation.

The term “texture” is defined in computer graphics as a 2D image mapped onto a plane or a curvilinear surface in 3D space, or as a 3D image mapped onto a 3D space.
Texture mapping includes geometric mapping of a texture array onto a surface and image filtering that eliminates stepped lines, moiré patterns, and everything else referred to as aliasing in the literature on computer graphics.

The process of texture mapping onto flat faces consists of two stages. The first stage is the projective transformation, i.e., calculation of the texture map coordinates (u, v) corresponding to the coordinates (Xs, Ys) of a pixel on the screen. The plane is parameterized by setting on it the unit texture vectors U and V and a reference point. The problem is, knowing the screen coordinates (Xs, Ys), to obtain the texture coordinates (u, v) of the corresponding point of the plane in the coordinate system of the viewer. This transformation may be written in matrix form [2]:

| u' |   | A1  B1  C1 | | Xs |
| v' | = | A2  B2  C2 | | Ys |,    u = u'/w',   v = v'/w'.                    (1)
| w' |   | A3  B3  C3 | |  1 |

The texture coordinates are related to the screen coordinates by the fractional-linear functions

u = (A1·Xs + B1·Ys + C1) / (A3·Xs + B3·Ys + C3),                              (2a)

v = (A2·Xs + B2·Ys + C2) / (A3·Xs + B3·Ys + C3),                              (2b)

where (Xs, Ys) are the screen coordinates of a pixel, and Ai, Bi, Ci are the elements of the coordinate transformation matrix.
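
For illustration, here is a minimal sketch of the screen-to-texture transformation (1)-(2); the Mat3 type, the row layout [Ai Bi Ci], and the function name are our illustrative reconstruction, not the paper's notation:

```cpp
// Sketch of the projective screen-to-texture mapping (1)-(2).
struct Mat3 { double a[3][3]; };   // rows: [A1 B1 C1; A2 B2 C2; A3 B3 C3]

// Map the screen pixel (xs, ys) to texture coordinates (u, v).
void screenToTexture(const Mat3& m, double xs, double ys,
                     double& u, double& v) {
    const double up = m.a[0][0] * xs + m.a[0][1] * ys + m.a[0][2]; // A1*Xs + B1*Ys + C1
    const double vp = m.a[1][0] * xs + m.a[1][1] * ys + m.a[1][2]; // A2*Xs + B2*Ys + C2
    const double wp = m.a[2][0] * xs + m.a[2][1] * ys + m.a[2][2]; // A3*Xs + B3*Ys + C3
    u = up / wp;   // (2a)
    v = vp / wp;   // (2b)
}
```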
The second stage of texture mapping is filtering, necessary for preventing aliasing. MIP-map (pyramidal) texture maps, which are the most suitable for hardware implementation, are most frequently used [3].

By means of prefiltering, one obtains a set of square texture maps with different resolutions for each object. Each texture map is associated with an integer value of the level of detail (LOD).
Depending on the distance to the face and its orientation, two texture maps with neighboring levels of detail are chosen. The criterion for the choice is the linear size of the pixel projection onto the face. If the pixel projection covers fewer than two texels (a texel is a texture map element), aliasing is observed. For faces whose angle of inclination (between the plane of the face and the screen) is close to the right angle, the texture pattern is heavily blurred. This effect is known as blurring [2].

Then, according to the texture coordinates, four texels are read out from each texture map. Trilinear interpolation over these eight values completes the filtering process. The trilinear interpolation coefficients are the fractional parts of the texture coordinates and of the level of detail. By modifying the filtering process, one can produce special effects; for example, it is possible to obtain a texture pattern with a constant contour width.

 

 2.1 Calculation of the texture level of detail

We will consider the choice of the level of detail of a texture map. In the general case, the projection of a “round pixel” onto the plane is an ellipse [2], whose major axis we take as the filter diameter. The size of the ellipse depends on the distance to the plane and on its orientation.

The major axis of the ellipse lies on the line of intersection of two planes: one of them is the textured plane, and the other is the plane in which the normal to the textured plane and the ray of vision lie. From this it follows that ∅ = 2δD/cos θ, where θ is the angle between the normal to the plane and the ray of vision, and cos θ = d/D (d is the distance from the origin of coordinates to the plane, and D is the distance from the viewer to the point of intersection of the ray of vision with the object surface). Hence,

 

∅ = 2δD²/d ≈ 2δZ²/d.          (3)
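
As an illustration, here is a minimal sketch of choosing the fractional LOD from the filter diameter (3). Taking the LOD as the base-2 logarithm of the footprint in texels, and the scale factor texelsPerUnit, are our assumptions:

```cpp
#include <algorithm>
#include <cmath>

// Sketch: fractional level of detail from the filter diameter (3).
// delta: pixel angular size, z: depth, d: distance to the textured plane;
// texelsPerUnit converts the diameter to texel units (an assumption).
double levelOfDetail(double delta, double z, double d, double texelsPerUnit) {
    const double diameter  = 2.0 * delta * z * z / d;    // eq. (3)
    const double footprint = diameter * texelsPerUnit;   // diameter in texels
    // The integer part selects the two neighboring MIP levels; the
    // fractional part becomes the inter-level blend coefficient gamma.
    return std::log2(std::max(footprint, 1.0));          // clamp at LOD 0
}
```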

 

2.2 Trilinear interpolation

To obtain the final result, i.e., the color and the transparency in each pixel, we must perform trilinear interpolation [3]:

f1 = f[fu, fv]·(1 − fα) + f[fu+1, fv]·fα,                                      (4a)
f2 = f[fu, fv+1]·(1 − fα) + f[fu+1, fv+1]·fα,                                  (4b)
Cf = f1·(1 − fβ) + f2·fβ,                                                      (4c)
c1 = c[cu, cv]·(1 − cα) + c[cu+1, cv]·cα,                                      (4d)
c2 = c[cu, cv+1]·(1 − cα) + c[cu+1, cv+1]·cα,                                  (4e)
Cc = c1·(1 − cβ) + c2·cβ,                                                      (4f)
C = Cf·(1 − γ) + Cc·γ,                                                         (4g)

 

 

where fα, fβ are the fractional parts of the texture coordinates at the high level of detail; cα, cβ are the fractional parts of the texture coordinates at the low level of detail; c[cu, cv], f[fu, fv] are the values of a color component at the point [u, v] for each of the two levels of detail; and C is the final result.

The initial data are eight-bit values of the components of color and transparency. The bilinear interpolation coefficients α and β for each of the chosen levels of detail are obtained from the U and V addresses. The coefficient γ of linear interpolation between the levels of detail is chosen depending on the greatest diameter of the pixel projection (the ellipse) onto the texture plane.
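
The filter (4a)-(4g) can be sketched as follows, assuming eight-bit single-component texel storage and power-of-two map sizes with wrap addressing; Texture2D and its texel accessor are illustrative helpers, not the paper's API:

```cpp
// Minimal sketch of the trilinear MIP-map filter (4a)-(4g).
struct Texture2D {
    int size;                          // power-of-two side length (assumption)
    const unsigned char* data;         // one 8-bit component per texel
    double texel(int u, int v) const {
        u &= size - 1; v &= size - 1;  // wrap addressing
        return data[v * size + u];
    }
};

double trilinear(const Texture2D& fine,   int fu, int fv, double fa, double fb,
                 const Texture2D& coarse, int cu, int cv, double ca, double cb,
                 double g) {
    // Bilinear interpolation on the fine level, (4a)-(4c)
    const double f1 = fine.texel(fu, fv)     * (1 - fa) + fine.texel(fu + 1, fv)     * fa;
    const double f2 = fine.texel(fu, fv + 1) * (1 - fa) + fine.texel(fu + 1, fv + 1) * fa;
    const double Cf = f1 * (1 - fb) + f2 * fb;
    // Bilinear interpolation on the coarse level, (4d)-(4f)
    const double c1 = coarse.texel(cu, cv)     * (1 - ca) + coarse.texel(cu + 1, cv)     * ca;
    const double c2 = coarse.texel(cu, cv + 1) * (1 - ca) + coarse.texel(cu + 1, cv + 1) * ca;
    const double Cc = c1 * (1 - cb) + c2 * cb;
    // Linear interpolation between the two levels of detail, (4g)
    return Cf * (1 - g) + Cc * g;
}
```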

 

2.3 Contour textures

The contour texture must provide the possibility of creating and mapping contours of arbitrary shape within the frame of the texture map, which do not blur upon approaching. Such textures are applied for mapping letters, symbols, interfaces, etc. We list the main properties of contour texture maps:

·         contour width is 1-2 pixels;

·         when approaching, the real value of LOD is negative;

·         the contour does not blur, the contour width remains the same;

·         in the region with non-negative values of LOD, the contour textures transform to usual ones, that is, RGBT textures;

·         the shape of the contour is essentially arbitrary, i.e., specification of the contour requires more than one bit per texel.

Processing of contour texture maps requires a transparency channel. This allows obtaining full-color images inside contours. However, obtaining a full-value picture requires a double layer of texture faces: one for imaging inside the contour and the other for imaging outside it. After bilinear interpolation, identical to that in the other channels, we perform the following calculations to obtain the contour:

- ,                                                                                              (5)

then follows the correction

- ,                                                                                               (6)

where T is the final result, t is the result of bilinear interpolation, and Ps is the size of the pixel projection.
Additionally, we perform the operation of obtaining the contour:

- ,                       (7)

where - is the fraction of the size of the pixel projection.

 

2.4 Texture addresses

To calculate a texture address, one requires the texture coordinates (2a), (2b); the level of detail of the pixel; the value of the general order of the texture coordinates; and the relative displacement of the local and global texture coordinates. Then each of the texture coordinates is divided into four numbers.

The bilinear interpolation coefficients α and β are used for trilinear interpolation of the texture:

fu = ( ( fiu - du ) & 0111 ) + ( du & 011 ),                                                                (8)
cu = ( ( ciu - du/2 ) & 011 ) + ( ( du/2 ) & 011 ),                                                       (9)

where fiu, fiv and ciu, civ are the texture coordinates of the current pixel, scaled according to the level of detail; du (dv) is the displacement of the grid of local texture coordinates relative to the global ones (it is meaningful only for compressed texture maps and equals zero for an uncompressed texture). For fv, cv the calculations are similar. The texture address is formed from the values of fu, fv, cu, and cv.
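
A sketch of (8) and (9), under the assumption that the masks written as "0111" and "011" in the text are the binary masks 0b111 and 0b11; the computations for the V coordinate are analogous:

```cpp
// Sketch of the texture-address components (8)-(9); the binary reading of
// the masks is our assumption. For an uncompressed texture du = 0, so the
// terms involving du vanish and only the masked coordinate remains.
int fineU(int fiu, int du)   { return ((fiu - du)     & 0b111) + (du       & 0b11); } // (8)
int coarseU(int ciu, int du) { return ((ciu - du / 2) & 0b11)  + ((du / 2) & 0b11); } // (9)
```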

 

 

 

3. Function-based objects 

 

The characteristic feature of the proposed freeform representation is that the main primitives are second-order surfaces, i.e., quadrics. A primitive-quadric is the basis for constructing the remaining objects. A quadric is defined by a real continuous descriptive function of three variables (x, y, z) ∈ Eⁿ in the form F(X) ≥ 0. Quadrics are considered as closed subsets of the Euclidean space Eⁿ defined by the descriptive function F(X) ≥ 0, where F is a continuous real function and X = (x, y, z) is a point in Eⁿ specified by the coordinate variables. Here F(X) > 0 specifies points inside the quadric, F(X) = 0 the points on its boundary, and F(X) < 0 the points lying outside and not belonging to the quadric.
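
As an illustration, a minimal sketch of a primitive-quadric as a descriptive function; the coefficient naming is ours:

```cpp
// Sketch: a quadric in the general second-order form
// F(x,y,z) = a x^2 + b y^2 + c z^2 + d xy + e yz + f xz + g x + h y + i z + j.
struct Quadric {
    double a, b, c, d, e, f, g, h, i, j;
    double F(double x, double y, double z) const {
        return a*x*x + b*y*y + c*z*z
             + d*x*y + e*y*z + f*x*z
             + g*x + h*y + i*z + j;
    }
};

// Point classification: F > 0 inside, F == 0 boundary, F < 0 outside.
bool inside(const Quadric& q, double x, double y, double z) {
    return q.F(x, y, z) >= 0.0;
}
```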

 

Fig. 1. The model coordinate system (M) in which the space inside the cube is subdivided.

 

3.1 Set of geometric objects

It is proposed to describe complex geometric objects by defining a function of deviation (a perturbation function of the second order) from basic triangles [4], planes, and quadrics [5]. Thus, we propose to describe geometric objects by defining the perturbation function from the basic quadric. The surface obtained will be smooth, and a small number of perturbation functions will suffice to create complex surface forms. The problem of object construction therefore reduces to deforming a quadric surface in the desired manner rather than to approximating it with primitives (polygons or patches). In addition, since the descriptive function is solved as the inequality F(X) ≥ 0, we can visualize not only the surface but also the internal structure of the object [6].
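
The idea can be sketched as follows, reusing the Quadric type from the previous sketch. The cubic falloff R = Q³ for Q > 0 is one concrete way to make a second-order perturbation smooth and compactly supported; it is our illustrative assumption, not necessarily the paper's exact form:

```cpp
#include <vector>

// Sketch: descriptive function of a base quadric deformed by perturbations.
struct Perturbation {
    Quadric q;       // carrier quadric of the perturbation
    double  weight;  // sign and magnitude of the deformation
};

double perturbedF(const Quadric& base,
                  const std::vector<Perturbation>& ps,
                  double x, double y, double z) {
    double f = base.F(x, y, z);
    for (const auto& p : ps) {
        const double q = p.q.F(x, y, z);
        if (q > 0)                      // perturbation acts only inside its carrier
            f += p.weight * q * q * q;  // smooth falloff toward the carrier boundary
    }
    return f;                           // the surface is the set perturbedF == 0
}
```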

 

3.2 Set-theoretic operations

Let the objects G1 and G2 be defined as f1(X) ≥ 0 and f2(X) ≥ 0.

A binary operation on the objects G1 and G2 means an operation G3 = Φ(G1, G2) with the definition

f3 = ψ(f1(X), f2(X)) ≥ 0,                                                 (10)

where ψ is a continuous real function of two variables.

The geometric model should allow designing objects and compositions of unlimited complexity. This is primarily achieved by means of the Boolean operations of union and intersection. The whole scene is a kind of tree. Each internal node of the tree is an object constructor performing logical operations on its descendants, and the leaves of the tree are primitives. When the object constructor is queried during rendering, it addresses its descendants, transforms the obtained results, and returns the answer to the query. A descendant may be a primitive or another object constructor. When geometric operations, rotations, displacements, and scaling are applied to the object constructor, it performs all these operations on its descendants and, in the case of inversion, also changes its Boolean function.
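
To make the constructor concrete, here is a minimal sketch of (10) with ψ = max for union and ψ = min for intersection, a common choice for F(X) ≥ 0 objects; the paper leaves ψ general:

```cpp
#include <algorithm>
#include <functional>

// Sketch: an object represented by its descriptive function.
using Field = std::function<double(double, double, double)>;

Field unite(Field f1, Field f2) {        // union: max(f1, f2) >= 0
    return [=](double x, double y, double z) {
        return std::max(f1(x, y, z), f2(x, y, z));
    };
}

Field intersect(Field f1, Field f2) {    // intersection: min(f1, f2) >= 0
    return [=](double x, double y, double z) {
        return std::min(f1(x, y, z), f2(x, y, z));
    };
}

Field invert(Field f) {                  // inversion flips inside and outside
    return [=](double x, double y, double z) { return -f(x, y, z); };
}
```

A constructor node built this way answers the same F(X) ≥ 0 query as a primitive, so constructors and primitives can be mixed freely in the scene tree.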


Fig. 2. The screen coordinate system (P) and quaternary subdivision.

 

3.3 Recursive multi-level ray casting algorithm

We consider a geometric object that has the property of answering the request on intersection with a rectangular parallelepiped, or bar [5]. A negative answer guarantees that the object does not intersect the bar and has no common points with it. The search for intersection points is done by recursive subdivision of the space inside the cube defined by boundaries of ±1 along each coordinate (see Figure 2).

The center of the cube coincides with the origin of the model coordinate system M, and the plane Z = −1 coincides with the screen plane. At the first step of recursion, the initial cube is subdivided into four smaller subcubes in the screen plane. At the stage of subdivision of space along the quaternary tree, a twofold compression and a translation by ±1 along two coordinates are performed. Assume that the domain of the point search is a cube in which our object is embedded.

 


Fig. 3. The binary subdivision.

 

 

Then recursive subdivision of the domain is applied: the domain is cut by two planes perpendicular to the screen plane XY into four bars. For each bar, an intersection test is executed. If the object intersects a given bar, the bar is subdivided further; otherwise, the bar is excluded from subdivision. This corresponds to the exclusion of the square area of the screen onto which the given bar (and therefore the object) is projected (see Figure 2). Using the results of the intersection test, we subdivide the subbars that fall within the quadric completely or, possibly, partially, and the certainly external subbars are eliminated from processing.

At some recursion level we obtain bars with a one-pixel-wide footprint (slices). Then we start subdivision of the slices in depth, i.e., along the Z axis (see Figure 3). For each slice we determine the first point that contains the object (the intersection test is similar to that described above but, accordingly, with a smaller number of coefficients). In other words, we traverse the scene space (the unit cube) by a quadtree whose leaves (slices) are the roots of binary subtrees. The coordinate system in which the algorithm subdivides the cubic volume is called the work or model space and is denoted M (see Figure 1). The coordinate system with the camera (viewer) at its origin and the viewing frustum is denoted P (see Figure 2).
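
The following sketch summarizes the traversal, with the object abstracted as a bar-intersection predicate; the resolution constants and all names are illustrative, and a real implementation would use the quadric's coefficient-based bar test [5]:

```cpp
#include <functional>

// Sketch of the recursive multi-level subdivision described above.
struct Bar { double xmin, xmax, ymin, ymax, zmin, zmax; };

using IntersectsBar = std::function<bool(const Bar&)>;  // object's intersection query
using ShadeVoxel    = std::function<void(const Bar&)>;  // request properties, write pixel

const double kPixelSize = 2.0 / 512;  // slice footprint: one pixel (assumption)
const double kVoxelSize = 2.0 / 512;  // depth resolution (assumption)

// Binary subdivision of a one-pixel slice along Z, front-to-back; stops at
// the first (nearest) voxel containing the object.
bool subdivideDepth(const Bar& s, const IntersectsBar& hit, const ShadeVoxel& shade) {
    if (!hit(s)) return false;                      // empty slice: prune
    if (s.zmax - s.zmin <= kVoxelSize) { shade(s); return true; }
    const double zm = 0.5 * (s.zmin + s.zmax);
    Bar nearHalf = s; nearHalf.zmax = zm;           // near half first
    Bar farHalf  = s; farHalf.zmin  = zm;
    return subdivideDepth(nearHalf, hit, shade) || subdivideDepth(farHalf, hit, shade);
}

// Quaternary subdivision in the screen plane (X, Y).
void subdivideScreen(const Bar& b, const IntersectsBar& hit, const ShadeVoxel& shade) {
    if (!hit(b)) return;                            // exclude this square of the screen
    if (b.xmax - b.xmin <= kPixelSize) { subdivideDepth(b, hit, shade); return; }
    const double xm = 0.5 * (b.xmin + b.xmax);
    const double ym = 0.5 * (b.ymin + b.ymax);
    for (int i = 0; i < 4; ++i) {                   // the four subbars
        Bar c = b;
        if (i & 1) c.xmin = xm; else c.xmax = xm;
        if (i & 2) c.ymin = ym; else c.ymax = ym;
        subdivideScreen(c, hit, shade);
    }
}
```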

Application of a projective transformation extends the rendering algorithm to pyramidal volumes and thereby allows us to generate images with perspective.

 

 

 

4. Texture mapping onto freeforms and volumes

 

In papers [7], [8], solid texturing of complex surfaces and of phenomena intermediate between shape and texture, using space-filling applicative functions to modulate density, was introduced. The model is essentially an extension of procedural solid texture synthesis, but evaluated throughout a volumetric region instead of only at surfaces. The authors present realistic representations of such shape+texture (hypertexture) phenomena as hair, fur, fire, glass, fluid flow, and erosion effects. Hypertexture exists within an intermediate region between object and not-object. They also introduced a notion of generalized Boolean shape operators to combine shapes having such a region.

We will give the definition of a texture in the “volumetric” sense, proceeding from its original definition in the “surface” version (described above). In addition, we will define uniquely the main notions.

Texture is the detailing of a surface or a volume of the mapped object. Here it is essential that its concrete construction is not specified, i.e., the texture in the general case is not necessarily a 2D array of texels.

Fig. 4. A 3D array.

The mapped object is the entire 3D scene or an element of it containing information on its shape. For example, a scene represented by two ellipsoids may be considered, on the one hand, as an image of two independent objects; on the other hand, the ellipsoids may be considered as a single object combining the two; these two points of view are alternatives. Hence, in terms of objects, the texture is a scene element that changes the properties of another object and is not itself imaged during rendering. For example, a colored ball is represented by two objects: a shape in the form of an ellipsoid and an object-texture that contains a reference to the array of colors, which is mapped in a certain manner onto the ellipsoid with the use of linear, cylindrical, spherical, or some other parameterization.

The property of a mapped object is any parameter assigned to this object that is used in calculating the resulting color of the pixel on the screen: object color, weight coefficients of the diffuse and reflection components of the Phong illumination model, translucency, etc. Thus, the texture can affect any parameter of the mapped object. Besides objects specified analytically, e.g., a quadric, it is possible to map objects representing scalar data arrays. This substantially extends the possibilities and the fields of application of the system.

Several basic techniques of volume rendering were mentioned in [9]. Scalar arrays, though requiring considerable memory (about N² or N³ for 2D and 3D arrays, respectively), are more compact and more convenient in usage and manipulation than their equivalent representation in the form of multiple triangles.


Fig. 5. A 2D grid of heights.

Suppose we have an object representing some scalar array in some format known only to the object. We will define this object as an object-array. In the final analysis, the main primitives of the system are represented by two classes of objects. One class is analytically described primitives (freeform surfaces and patches), and the other is arrays of scalar data. Both classes have a more compact description than polygonal specification of objects (triangles).

The object-array can be implemented by several methods. It may encapsulate a one-dimensional, two-dimensional (see Figure 6), or three-dimensional array (see Figure 7), and it may also be a grid of heights [10, 11, 12].


Fig. 6. A 2D array, encapsulated into an object-array.


Fig. 7. Visualization of the object-array encapsulating a 3D data array.

According to the definition given in this paper, the texture is an object (not itself mapped) that changes the properties of another object. The feature of the texture is that any object may be the texture. For this purpose, a slight change in the scheme of data processing and construction of the scene tree was proposed, which does not affect the rendering algorithm itself. At the last level of subdivision recursion, the nearest visible voxel is calculated, and the parameters of this space element are requested from the mapped object (scene) for use in calculating the pixel color. The supplement to the data structure is that each object can have a reference to another object that is a texture for it (see Figure 8). If there is no reference, the object, when requested about the value of its properties, yields a default value. Otherwise, the parameters of the object in the current voxel are modified by the object-texture referred to by our object.
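
A minimal sketch of this supplement follows; Object, Properties, and the member names are hypothetical, and only the optional texture reference and the default-value behavior are taken from the text:

```cpp
// Sketch of the scene-tree supplement: an optional object-texture reference.
struct Properties {
    double color[3] { 1, 1, 1 };   // defaults returned when no texture is attached
    double transparency = 0.0;
};

struct Object {
    const Object* texture = nullptr;   // reference to an object-texture, or none

    // Properties of the current voxel, as requested at the last recursion level.
    Properties sample(double x, double y, double z) const {
        Properties p = defaults(x, y, z);
        if (texture)                   // the object-texture modifies the parameters
            texture->modify(p, x, y, z);
        return p;
    }

    virtual Properties defaults(double, double, double) const { return {}; }
    virtual void modify(Properties&, double, double, double) const {}
    virtual ~Object() = default;
};
```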


Fig. 8. A texture formed by an object-array and an ellipsoid.

When superimposing the texture on the object, an important step is the conversion of coordinates from the texture space (U, V, W) to the object space (X, Y, Z). In our case, parameterization is necessary only for object-arrays. To be able to superimpose the texture on the surface and volume of the mapped object, the object-arrays are supplemented with a function of coordinate conversion from the model space M into the texture space T. When the object-array is requested about the texture value, the coordinate of the current voxel is converted from M to T: (X, Y, Z) → (U, V, W); these coordinates are then used as an address in the array. If, for instance, the array is two-dimensional, one coordinate is not used. Three kinds of conversion of object coordinates to texture coordinates were implemented (a code sketch follows the list).

 

1.       Linear:

U      =      (1 + X) / 2,

V      =      (1 + Y) / 2,

W      =      (1 + Z) / 2.

 (Transformation from the binary cube to the unit one.)

2.       Cylindrical:

U      =       arctan (Y / X) / 2π,

V      =       (1 + Y) / 2,

W      =       √(X² + Y²).

3.       Spherical:

U      =       arctan (Y / X) / 2π,

V      =       arctan (√(X² + Y²) / Z) / 2π,

W      =       √(X² + Y² + Z²).
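
A sketch of the three conversions; std::atan2 replaces arctan(Y/X) to handle all quadrants, and wrapping the angular coordinates into [0, 1) is our assumption:

```cpp
#include <cmath>

struct UVW { double u, v, w; };

const double kTwoPi = 6.283185307179586;

// Wrap an angle-derived coordinate into [0, 1).
double wrap01(double t) { return t < 0 ? t + 1 : t; }

// 1. Linear: binary cube [-1, 1]^3 -> unit cube [0, 1]^3.
UVW linearMap(double x, double y, double z) {
    return { (1 + x) / 2, (1 + y) / 2, (1 + z) / 2 };
}

// 2. Cylindrical.
UVW cylindricalMap(double x, double y, double /*z*/) {
    return { wrap01(std::atan2(y, x) / kTwoPi),
             (1 + y) / 2,
             std::sqrt(x * x + y * y) };
}

// 3. Spherical.
UVW sphericalMap(double x, double y, double z) {
    return { wrap01(std::atan2(y, x) / kTwoPi),
             wrap01(std::atan2(std::sqrt(x * x + y * y), z) / kTwoPi),
             std::sqrt(x * x + y * y + z * z) };
}
```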

We present an example of a data structure for a concrete scene. Figure 9 shows its image, and Figure 10 depicts the logical scheme of the scene.

Fig. 9. Two textured objects.


Fig. 10. Logical scheme of the scene.

 

Conclusion

The main manufacturers of graphics accelerators are already striving to improve realism not only by increasing the number of polygons. Such technologies as Bump Mapping [13], Environment-Mapped Bump Mapping, Normal Mapping, Elevation Maps, Relief Texture Mapping [14], Parallax Mapping, and Parallax Occlusion Mapping allow one to increase the realism of an image by an order of magnitude without increasing the number of geometric primitives.

At the moment, all these effects are modeled by means of specific procedures. One of them is bump mapping, described in [13]; its disadvantage is the stringent limitations on the conditions of its applicability. For example, in the case of bump mapping, realism is achieved only at angles of view close to the right angle. A newer technology, Displacement Mapping, removes the restriction on the angle of view. These technologies are three-dimensional, contrary to bump mapping, which creates merely an illusion of 3D details of the surface.

The texture mapping technologies presented in this paper were implemented in software; the figures illustrate the obtained results. The proposed technologies are also suitable for hardware implementation. To summarize: realism is now increased not only by the number of primitives representing the scene, but also by the simulation of special effects of the environment and of fine details of the surfaces of scene objects.


Fig. 11. A textured functionally specified object of free form.

 

 

References

 

1.       Dolgovesov B.S., Vyatkin S.I., Shevtsov M.Y. “Firmware complex for real-time volume rendering based on VolumePro 1000 accelerator” // Proceedings of VEonPC (Virtual Environment on a PC Cluster Workshop) 2003, Fraunhofer Institute of Media Communications (Sankt-Augustin, Germany): http://viswiz.gmd.de/VEonPC/2003/proceedings.html

2.           Paul S. Heckbert. Survey of Texture Mapping // IEEE Comput. Graph. and Applicat., vol. 6, no. 11, p. 56, 1986.

3.           Lance Williams. Pyramidal Parametrics, Comput. Graph., vol. 12, no. 4, p. 270, 1983.

4.           S.I. Vyatkin, B.S. Dolgovesov. Freeform Surfaces and Patches Based on Scalar and Analytical Perturbation Functions, Proc. of GraphiCon ’98, Nizhny Novgorod, 2002.

5.           S.I.Vyatkin, B.S.Dolgovesov, A.V.Yesin, A.A.Zhigach, S.E.Chizhik, and R.A.Shcherbakov. Geometric Modeling And Visualization Of Functionally Defined Objects, Computer Graphics and Geometry, Vol. 3, No. 2, 2001 https://cgg-journal.com/2001-2/04.htm

6.           Sergei I. Vyatkin, Boris S. Dolgovesov, Alexander V. Yesin. Visualization of 3D clouds using free forms, Computer Graphics and Geometry, Vol. 4, No. 2, 2002 https://cgg-journal.com/2002-2/02.htm

7.           Peachey, Darwyn. Solid Texturing of Complex Surfaces. Proceedings of SIGGRAPH '85: 279-286.

8.           Ken Perlin and E.M. Hoffert.  Hypertexture. Proceedings of the 16th annual conference on Computer graphics and interactive techniques,  NY, USA, 1989. Pages: 253 – 262.  

9.           S.I. Vyatkin, B.S. Dolgovesov, A.V. Yesin, et al. Voxel-Volumes volume-oriented visualization system, Computer Graphics and Geometry, Vol. 2, No. 1, 2000 https://cgg-journal.com/2000-1/01.htm

10.        Sergei I. Vyatkin, Boris S. Dolgovesov, Valerie V. Ovechkin, et al.  Photo realistic imaging of digital terrains, free forms and thematic textures in real-time visualization system Voxel-Volumes, Proc. of GraphiCon ’97, MGU, Moscow, 1997, p. 35.

11.        S.I. Vyatkin, B.S. Dolgovesov, O.Y. Guimaoutdinov. Synthesis of Virtual Environment Using Perturbation Functions, volume III (Emergent Computing and Virtual Engineering), World Multiconference on Systemics, Cybernetics and Informatics Proceedings, Orlando, Florida, USA, July 22-25, 2001, p. 350.

12.        Vyatkin S.I., Dolgovesov B.S., Yesin A.V. Non-Polygonal Graphics and Volume-Oriented Rendering // Proceedings of the IASTED International Conference on Automation, Control, and Information Technology 2002, Novosibirsk, Russia, pp. 447-451.

13.        J.F. Blinn. Simulation of Wrinkled Surfaces, Computer Graphics. Proc. SIGGRAPH 78, vol. 12, no. 3, p. 263, 1978.

14.        Manuel M. Oliveira, Gary Bishop, David McAllister. Relief Texture Mapping. Proc. SIGGRAPH 2000, New Orleans, Louisiana, July 23-28, 2000, p. 324.