
- Number of slides: 37

High-Pass Quantization for Mesh Encoding
Olga Sorkine, Daniel Cohen-Or, Sivan Toledo
Eurographics Symposium on Geometry Processing, Aachen 2003

Overview
- Geometry quantization
- Visual quality
- Connection to spectral properties

Geometry quantization – introduction
- Each mesh vertex is represented by Cartesian coordinates (xi, yi, zi), stored in floating point.
- Geometry compression requires quantization, normally to 10-16 bits/coordinate.

Geometry quantization – introduction
- Using smoothness assumptions, the quantized coordinates are predicted, and the prediction errors are entropy-coded [Touma and Gotsman 98].

Quantization error
- Quantization necessarily introduces errors.
- The finer the sampling, the more the surface suffers.

Quantization error
- An example: a coarsely-sampled sphere – original vs. quantized to 8 bits/coordinate.

Quantization error
- A finely-sampled sphere with the same quantization – original vs. quantized to 8 bits/coordinate.

Quantization error – discussion
- Quantization of the Cartesian coordinates introduces high-frequency errors to the surface.
- High-frequency errors alter the visual appearance of the surface – they affect normals and lighting.
- Only conservative quantization (usually 12-16 bits) avoids these visual artifacts.

Quantization – our approach
- Transform the Cartesian coordinates to another space using the Laplacian matrix of the mesh.
- Quantize the transformed coordinates.
- The quantization error in the regular Cartesian space will then have low frequency.
- Low-frequency errors are less apparent to a human observer.

Relative (Laplacian) coordinates
- Represent each vertex relative to its neighbours:
  δi = vi − (1/di) Σj∈N(i) vj
  where δi is the relative coordinate vector and the subtracted term is the average of the neighbours.

Laplacian matrix
- Let A ∈ Mat_n×n(R) be the adjacency matrix and D ∈ Mat_n×n(R) the degree-diagonal matrix, Dii = di.
- Then the Laplacian matrix L ∈ Mat_n×n(R) is: L = I − D⁻¹A.

Laplacian matrix
- The previous form is not symmetric. We will use the symmetric Laplacian: Ls = D − A.
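The two Laplacian forms above can be sketched as follows. This is a minimal illustration with a dense NumPy array on a toy 4-cycle graph; the meshes in the paper would of course use sparse matrices, and the function name `laplacians` is ours.

```python
import numpy as np

def laplacians(A):
    """Build the non-symmetric Laplacian L = I - D^{-1} A and the
    symmetric Laplacian Ls = D - A from an adjacency matrix A."""
    A = np.asarray(A, dtype=float)
    d = A.sum(axis=1)                      # vertex degrees
    L = np.eye(len(A)) - A / d[:, None]    # I - D^{-1} A (row-normalized)
    Ls = np.diag(d) - A                    # symmetric variant
    return L, Ls

# A 4-cycle "mesh" graph: every vertex has exactly two neighbours.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L, Ls = laplacians(A)
print(np.allclose(Ls, Ls.T))          # the symmetric form really is symmetric
print(np.allclose(L.sum(axis=1), 0))  # constant vectors lie in the kernel
```

Both forms annihilate constant vectors, which is why L is singular (a point the slides return to below).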

Properties of L
- Sort the eigenvalues of L in ascending order: λ1 ≤ λ2 ≤ … ≤ λn (the "frequencies"), with corresponding eigenvectors e1, …, en.
- We can represent the geometry in L's eigenbasis; the coefficients of the small-λ eigenvectors are the low-frequency components, those of the large-λ eigenvectors the high-frequency components.
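A small numeric check of the eigenbasis representation, using a path graph on 5 vertices as an assumed toy stand-in for a mesh:

```python
import numpy as np

# Symmetric Laplacian of a path graph on 5 vertices.
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
Ls = np.diag(A.sum(axis=1)) - A

# eigh returns the eigenvalues of a symmetric matrix in ascending order:
# lambda_1 <= ... <= lambda_n (the "frequencies"), eigenvectors orthonormal.
lam, E = np.linalg.eigh(Ls)

# Represent a coordinate vector x in the eigenbasis: x = sum_i a_i e_i.
x = np.linspace(0.0, 1.0, n)
a = E.T @ x        # spectral coefficients
x_back = E @ a     # exact reconstruction from all coefficients
print(np.allclose(x, x_back))
```

Keeping only the leading entries of `a` would keep the low-frequency shape and drop fine detail, which is exactly the progressive scheme of [Karni and Gotsman 00] described next.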

Previous usages of the Laplacian matrix
- [Karni and Gotsman 00] – progressive geometry compression
  - Use the eigenvectors of L as a new basis of Rn.
  - Transmit the coordinates in this spectral basis.
  - First transmit the lower-eigenvalue coefficients (low-frequency components), then gradually add finer details by transmitting more coefficients.

Previous usages of the Laplacian matrix
- [Taubin 95] – surface smoothing
  - "Push" every vertex towards the centroid of its neighbours, i.e. v′ = (I − λL)v.
  - Iterate, with positive and negative values of λ (to reduce the shrinkage effect).

Previous usages of the Laplacian matrix
- [Ohbuchi et al. 01] – mesh watermarking
  - Embed a bitstring in the low-frequency coefficients.
  - Changes in low-frequency components are not visible.
- [Alexa 02] – morphing using relative coordinates
  - Produces locally smoother morphs.
- [Gotsman et al. 03]
  - A more general class of Laplacian matrices.
  - Mesh embedding on a sphere using eigenvectors.

Quantizing the δ-coordinates
- Transform Cartesian coordinates to δ-coordinates: δ = Lx.
- Quantize the δ-coordinates (fixed-point quantization): δ′ = Quant(δ).
- To get back Cartesian coordinates: x′ = L⁻¹δ′.

Discussion of the linear system
- The matrix L is singular, so L⁻¹ doesn't exist.
- Adding one anchor point fixes this problem: substitute one vertex's known (x, y, z) – this removes the translational degrees of freedom.
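One way to realize the anchor substitution, as a sketch: replace the anchored vertex's Laplacian equation with the constraint x0 = (known value). The cycle graph is again an assumed toy; the paper works on real mesh Laplacians.

```python
import numpy as np

# Cycle-graph Laplacian on 6 vertices: singular (constants are in its kernel).
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
L = np.diag(A.sum(axis=1)) - A
assert np.linalg.matrix_rank(L) == n - 1   # rank-deficient by exactly one

x = np.cos(np.linspace(0, 2 * np.pi, n, endpoint=False))
delta = L @ x

# Anchor vertex 0: replace its row with the equation x_0 = known coordinate.
M = L.copy()
b = delta.copy()
M[0, :] = 0.0
M[0, 0] = 1.0
b[0] = x[0]        # the transmitted anchor coordinate

x_rec = np.linalg.solve(M, b)  # the substituted system is uniquely solvable
print(np.allclose(x_rec, x))   # exact when delta is unquantized
```

With unquantized δ the reconstruction is exact; the interesting behaviour appears once δ is quantized, as the next slide explains.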

Discussion of the linear system
- By quantizing the δ, we put high-frequency error into δ′.
- L has very small eigenvalues λi, so L⁻¹ has very large eigenvalues (1/λi).
- Thus, L⁻¹ amplifies small errors and reverses the frequencies: a small, high-frequency quantization error in δ becomes a NOT-so-small, low-frequency error in x!

Spectrum of quantization error
- Write x in the eigenbasis: x = a1e1 + a2e2 + … + anen.
- Therefore δ = Lx = λ1a1e1 + λ2a2e2 + … + λn−1an−1en−1 + λnanen (small λi – low frequencies; large λi – high frequencies).
- The quantization error of δ (we store δ + qδ) is qδ = c1e1 + c2e2 + … + cn−1en−1 + cnen; at the low frequencies the ci are small, while the error has high frequency, so there the ci are large.
- The resulting error in x is qx = L⁻¹qδ = (1/λ1)c1e1 + (1/λ2)c2e2 + … + (1/λn−1)cn−1en−1 + (1/λn)cnen; where 1/λi is large, low-frequency errors are amplified, and where 1/λi is small, high-frequency errors are attenuated.
- Thus, the error in x will contain strong low-frequency components but weak high-frequency components.
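The frequency reversal above can be verified numerically. As an assumed toy, we inject a unit-norm error along a low-frequency and a high-frequency eigenvector of a path-graph Laplacian and map both through the pseudo-inverse (which acts as L⁻¹ on the non-constant components):

```python
import numpy as np

# Path-graph Laplacian on 20 vertices.
n = 20
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

lam, E = np.linalg.eigh(L)     # ascending eigenvalues, lam[0] ~ 0

# Unit-norm error along the lowest non-zero frequency e_2 vs. the
# highest frequency e_n, pushed through the pseudo-inverse of L.
Lpinv = np.linalg.pinv(L)
err_low = Lpinv @ E[:, 1]      # scaled by 1/lambda_2 (large)
err_high = Lpinv @ E[:, -1]    # scaled by 1/lambda_n (small)

print(np.linalg.norm(err_low) > np.linalg.norm(err_high))
# Low-frequency error components are amplified, high-frequency attenuated.
```

The norms come out as exactly 1/λ2 and 1/λn, matching the qx expansion on this slide.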

Discussion of the linear system
- Example of low-frequency error: find the differences between the horses…

Discussion of the linear system
- Example of low-frequency error: this one is the original horse model.

Discussion of the linear system
- Example of low-frequency error: this is the model after quantizing to 8 bits/coordinate, with one anchor point (front left leg).

Making the error lower
- We add more anchor points, whose Cartesian coordinates are known in addition to their δ-coordinates.
- This "nails" the geometry in place, reducing the low-frequency error.

Rectangular Laplacian
- We add one equation per anchor point.
- With the added anchor rows the matrix becomes rectangular (the rows of L on top, the anchor constraints below), so we solve the system in the least-squares sense.
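A sketch of the rectangular system: stack L over one indicator row per anchor and solve by least squares. The helper name `reconstruct`, the uniform anchor weight `w`, and the toy cycle graph are our assumptions:

```python
import numpy as np

def reconstruct(L, delta_q, anchors, anchor_vals, w=1.0):
    """Least-squares reconstruction from (quantized) delta-coordinates
    plus anchor constraints: L stacked over indicator rows."""
    n = L.shape[0]
    C = np.zeros((len(anchors), n))
    C[np.arange(len(anchors)), anchors] = w     # one constraint row per anchor
    M = np.vstack([L, C])                       # rectangular (n + k) x n system
    b = np.concatenate([delta_q, w * np.asarray(anchor_vals)])
    x_rec, *_ = np.linalg.lstsq(M, b, rcond=None)
    return x_rec

# Cycle-graph Laplacian on 8 vertices.
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
L = np.diag(A.sum(axis=1)) - A

x = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False))
delta = L @ x
x_rec = reconstruct(L, delta, anchors=[0, 4], anchor_vals=[x[0], x[4]])
print(np.allclose(x_rec, x))   # exact when delta is unquantized
```

With quantized δ the solution is no longer exact, but each added anchor row pins the low-frequency drift down further.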

Choosing the anchor points
- A greedy scheme: add one anchor point at a time; each time, nail down the vertex that had the maximal error after reconstruction.
- This process is slow, but it is done only by the encoder.
- Only a small number of anchors is needed. We experiment with 0.1% of the vertices, which gives very good results.
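The greedy scheme might be sketched as below. Everything here is an assumed toy (function names, the cycle graph, Gaussian noise standing in for quantization error); the paper's encoder runs this on real mesh Laplacians:

```python
import numpy as np

def solve_ls(L, delta, anchors, x_true):
    """Least-squares reconstruction with the given anchor set."""
    n = L.shape[0]
    C = np.zeros((len(anchors), n))
    C[np.arange(len(anchors)), anchors] = 1.0
    M = np.vstack([L, C])
    b = np.concatenate([delta, x_true[anchors]])
    return np.linalg.lstsq(M, b, rcond=None)[0]

def greedy_anchors(L, delta, x_true, k):
    """Greedy scheme: start with one anchor, then repeatedly nail down
    the vertex with the maximal reconstruction error (encoder-side)."""
    anchors = [0]
    for _ in range(k - 1):
        x_rec = solve_ls(L, delta, np.array(anchors), x_true)
        err = np.abs(x_rec - x_true)
        err[anchors] = -1.0              # never re-pick an existing anchor
        anchors.append(int(np.argmax(err)))
    return anchors

# Toy setup: cycle graph, delta-coordinates with additive noise.
n = 16
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
L = np.diag(A.sum(axis=1)) - A
x = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False))
rng = np.random.default_rng(0)
delta_q = L @ x + rng.normal(scale=1e-2, size=n)  # noise as quantization stand-in

for k in (1, 4):
    x_rec = solve_ls(L, delta_q, np.array(greedy_anchors(L, delta_q, x, k)), x)
    print(k, np.linalg.norm(x_rec - x))
```

Each iteration re-solves the least-squares system, which is why the scheme is slow; the decoder only ever solves once.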

The effect of anchors on the error
- Positive error – the vertex moves outside the surface; negative error – the vertex moves inside the surface.
- Compared: Cartesian quantization at 8 b/c vs. δ-quantization at 7 b/c with 2, 4 and 20 anchors.

Visual error metric
- The Euclidean distance between x and x′ does not faithfully represent the visual error (Cartesian quantization errors are small, but the normals change a lot…).
- Karni and Gotsman [2000] propose a "visual metric": ║x − x′║vis = α║x − x′║2 + (1 − α)║GL(x) − GL(x′)║2, with α = 0.5.
- We are not sure that α should be 0.5…

Visual error metric
- We measured the two error components separately: Mq = ║x − x′║2 and Sq = ║GL(x) − GL(x′)║2, so Evis = αMq + (1 − α)Sq.
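The two components can be computed as sketched below. As an assumption, we use a uniform-weight graph Laplacian in place of the geometric Laplacian GL (the real GL uses geometric weights), and a 1-D toy signal in place of mesh coordinates:

```python
import numpy as np

def visual_error(x, x2, GL, alpha=0.5):
    """Karni-Gotsman-style visual metric: blend of geometric distance
    and Laplacian (smoothness) distance, weighted by alpha."""
    Mq = np.linalg.norm(x - x2)              # geometric component
    Sq = np.linalg.norm(GL @ x - GL @ x2)    # smoothness component
    return alpha * Mq + (1 - alpha) * Sq, Mq, Sq

# Uniform-weight Laplacian on a cycle as a stand-in for the geometric GL.
n = 12
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
GL = np.eye(n) - A / 2.0        # I - D^{-1} A, every degree is 2

x = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False))
rng = np.random.default_rng(1)
x_noisy = x + rng.normal(scale=1e-2, size=n)   # high-frequency perturbation
for alpha in (0.5, 0.15):
    e, Mq, Sq = visual_error(x, x_noisy, GL, alpha)
    print(f"alpha={alpha}: Evis={e:.4f} (Mq={Mq:.4f}, Sq={Sq:.4f})")
```

Lowering α shifts the weight from geometric distance to the smoothness term, which is why the results below report both α = 0.5 and α = 0.15.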

Rate-distortion curves

Some results
- We compare to the Touma–Gotsman predictive coder, which uses Cartesian quantization.
- δ-quantization, entropy 7.62: Evis[α=0.5] = 5.3, Evis[α=0.15] = 2.3.
- Cartesian quantization, entropy 7.64: Evis[α=0.5] = 2.5, Evis[α=0.15] = 2.6.

Some results
- We compare to the Touma–Gotsman predictive coder, which uses Cartesian quantization.
- δ-quantization, entropy 6.69: Evis[α=0.5] = 1.8, Evis[α=0.15] = 0.9.
- Cartesian quantization, entropy 7.17: Evis[α=0.5] = 4.8, Evis[α=0.15] = 4.9.

Some results
- We compare to the Touma–Gotsman predictive coder, which uses Cartesian quantization.
- δ-quantization, entropy 10.3: Evis[α=0.5] = 6.4, Evis[α=0.15] = 3.9.
- Cartesian quantization, entropy 10.3: Evis[α=0.5] = 5.0, Evis[α=0.15] = 5.1.

Time statistics
- The least-squares system was solved using QR factorization of the normal equations.
- Implementation on a 2 GHz P4 using the TAUCS library.

Model      | # vertices | Factorization (sec.) | Solving (sec.)
Eight      | 2718       | 0.127                | 0.006
Horse      | 19851      | 0.980                | 0.040
Fandisk    | 20111      | 1.595                | 0.056
Venus      | 50002      | 4.803                | 0.151
Max Planck | 100086     | 10.790               | 0.318
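The factorization/solve split in the table reflects that the normal-equations matrix is factored once and then reused for the x, y and z coordinate channels. A sketch of that pattern, assuming a small dense toy system and using a Cholesky factor in place of the paper's TAUCS QR factorization:

```python
import numpy as np

# Rectangular system: cycle-graph Laplacian plus two anchor rows.
n = 10
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
L = np.diag(A.sum(axis=1)) - A
C = np.zeros((2, n))
C[0, 0] = 1.0
C[1, n // 2] = 1.0
M = np.vstack([L, C])

N = M.T @ M                   # normal-equations matrix (SPD: M has full rank)
R = np.linalg.cholesky(N)     # factor once (the expensive step)...

coords = np.random.default_rng(2).normal(size=(n, 3))  # toy x, y, z channels
for c in range(3):            # ...then solve once per coordinate channel
    b = M @ coords[:, c]      # right-hand side: delta plus anchor values
    y = np.linalg.solve(R, M.T @ b)    # forward substitution with R
    x_c = np.linalg.solve(R.T, y)      # back substitution with R^T
    assert np.allclose(x_c, coords[:, c])
print("three channels reconstructed from one factorization")
```

The triangular solves are much cheaper than the factorization, matching the table's factorization-versus-solving gap.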

Conclusions
- The spectrum of the quantization error affects the visual quality of the quantized mesh.
- We have proposed a quantization method that concentrates the error in the low frequencies.
- The method requires the computational effort of solving a linear least-squares system.
- Much more research is needed to fully understand the spectral behavior of meshes…

Acknowledgements
- Christian Rössl and Jens Vorsatz from Max-Planck-Institut für Informatik for the models
- Israel Science Foundation founded by the Israel Academy of Sciences and Humanities
- The Israeli Ministry of Science
- IBM Faculty Partnership Award
- German Israel Foundation (GIF)
- EU research project 'Multiresolution in Geometric Modelling (MINGLE)', grant HPRN-CT-1999-00117

Thank you!