Recently, we have witnessed the rise of neural geometry as a result of blending geometry and deep learning. Indeed, deep learning provides us with alternative representations for geometry [2021-Bronstein]; in particular, for implicit surfaces, piecewise-linear surfaces (i.e., meshes), graphs, point clouds, and so forth. The leading idea of this challenge is to investigate new avenues to process and model implicits. Note that no neural network has been introduced in the literature so far to represent families of dynamic level sets, nor implicits with singularities.
State-of-the-Art
In the literature, we find various implicit representations; let us recall two of them. The first is classical constructive solid geometry (CSG) [1984-Requicha], essentially a Boolean algebra that combines implicitly defined convex objects through set-theoretic operators (i.e., union, intersection, and difference) to build more complex objects. Radial basis functions (RBFs) constitute another class of local implicit functions which, when combined, allow us to build a global implicit function that interpolates samples (points) of an unknown surface [2018-Biancolini]. Nevertheless, these representations require a prohibitive number of Boolean operations, or of local functions, to represent geometric objects like the Baronesse Sipierre model depicted in Fig. 1. In contrast, neural networks provide a compact representation for implicits with arbitrary precision, as ensured by the universal approximation theorem [1989-Cybenko]; a prominent example is DeepSDF [2019-Park], which represents shapes as learned signed distance functions. This neural representation is simply the matrix of weights of the network after training. Geometry (both combinatorial and differential) puts forward new challenges beyond those found in 2D image analysis, and this is the main reason why deep learning methods have only recently attracted the attention of the computer graphics community.
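To make the two constructions above concrete, the sketch below (not taken from the cited works; the kernel width, sample counts, and off-surface constraint are illustrative assumptions) composes signed-distance functions with min/max, the standard implicit counterpart of the CSG set operations, and then interpolates samples of the unit circle with a Gaussian RBF:

```python
import numpy as np

# --- CSG on implicits ---
# With the "inside where f < 0" convention, the set operations become
# pointwise min/max compositions of the implicit functions.

def sphere(c, r):
    """Signed distance to a sphere with center c and radius r."""
    c = np.asarray(c, dtype=float)
    return lambda p: np.linalg.norm(np.asarray(p, dtype=float) - c) - r

def union(f, g):        return lambda p: min(f(p), g(p))
def intersection(f, g): return lambda p: max(f(p), g(p))
def difference(f, g):   return lambda p: max(f(p), -g(p))

s1 = sphere((0.0, 0.0, 0.0), 1.0)
s2 = sphere((1.0, 0.0, 0.0), 1.0)
u = union(s1, s2)
print(u((0.5, 0.0, 0.0)))            # negative: the point is inside the union

# --- RBF interpolation of surface samples ---
# Given centers x_i with prescribed values d_i (0 on the surface, nonzero
# at off-surface constraints), solve for weights w such that
# f(x) = sum_i w_i * phi(|x - x_i|) interpolates the data.

def rbf_fit(centers, values, phi):
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    return np.linalg.solve(phi(dists), values)   # symmetric interpolation system

def rbf_eval(x, centers, weights, phi):
    d = np.linalg.norm(centers - np.asarray(x, dtype=float), axis=-1)
    return phi(d) @ weights

phi = lambda r: np.exp(-(r / 0.7) ** 2)          # Gaussian kernel, arbitrary width
t = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
on_curve = np.c_[np.cos(t), np.sin(t)]           # samples on the unit circle
centers = np.vstack([on_curve, [[0.0, 0.0]]])    # plus one interior constraint
values = np.r_[np.zeros(8), -1.0]                # 0 on the curve, -1 inside
w = rbf_fit(centers, values, phi)
print(rbf_eval((1.0, 0.0), centers, w, phi))     # interpolates: ~0 on the curve
```

The interior constraint is what fixes the sign of the global function; without at least one off-surface sample, the all-zero interpolant would satisfy the on-curve data trivially.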
Research Methodology
Given our in-house expertise in piecewise-linear and implicit surfaces [2009-Gomes], we are particularly interested in investigating and developing a neural implicit framework that takes advantage of the differentiable nature of deep neural networks to approximate point-sampled surfaces as level sets of neural implicits (i.e., neural implicit functions). We also intend to exploit the latent space to cluster families of implicit shapes, making it possible to carry out shape interpolation in shape space.
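As a minimal illustration of fitting a level set with a differentiable network (a hand-rolled NumPy sketch, not the proposed framework; the architecture, learning rate, and the unit-circle target are all illustrative assumptions), a small MLP can be trained by gradient descent so that its zero level set approximates a sampled curve:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic supervision: 2D points labeled with the exact signed distance
# to the unit circle (a stand-in for real point-sampled surface data).
X = rng.uniform(-1.5, 1.5, size=(2000, 2))
y = np.linalg.norm(X, axis=1) - 1.0

# A small MLP, f(x) = w2 . tanh(W1 x + b1) + b2, written out in NumPy.
H = 32
W1 = rng.normal(0.0, 1.0, (H, 2))
b1 = rng.uniform(-1.0, 1.0, H)   # random biases so features are not all odd
w2 = rng.normal(0.0, 0.1, H)
b2 = 0.0

def forward(P):
    A = np.tanh(P @ W1.T + b1)   # hidden activations, shape (n, H)
    return A @ w2 + b2, A

mse_before = np.mean((forward(X)[0] - y) ** 2)

lr = 0.1
for _ in range(3000):            # full-batch gradient descent on the MSE
    f, A = forward(X)
    err = f - y
    dA = np.outer(err, w2) * (1.0 - A ** 2)   # backprop through tanh
    W1 -= lr * (dA.T @ X) / len(X)
    b1 -= lr * dA.mean(axis=0)
    w2 -= lr * (A.T @ err) / len(X)
    b2 -= lr * err.mean()

mse_after = np.mean((forward(X)[0] - y) ** 2)
print(mse_before, mse_after)

# After training, the weight matrices ARE the shape's compact representation:
# the network's zero level set approximates the unit circle.
pred, _ = forward(np.array([[1.0, 0.0], [0.0, 0.0]]))
print(pred)                      # value on the circle vs. at the center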
References
[1984-Requicha] Requicha, A. and Voelcker, H. (1984): Boolean operations in solid modeling: boundary evaluation and merging algorithms. Technical Memorandum #26, Production Automation Project, College of Engineering & Applied Science, The University of Rochester, New York, USA.
[1989-Cybenko] Cybenko, G. (1989): Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303-314.
https://doi.org/10.1007/BF02551274
[2009-Gomes] Gomes, A., Voiculescu, I., Jorge, J., Wyvill, B., and Galbraith, C. (2009): Implicit Curves and Surfaces: Mathematics, Algorithms, and Data Structures. Springer-Verlag London.
https://doi.org/10.1007/978-1-84882-406-5
[2018-Biancolini] Biancolini, M. (2018): Fast Radial Basis Functions for Engineering Applications. Springer International Publishing.
https://doi.org/10.1007/978-3-319-75011-8
[2019-Park] Park, J., Florence, P., Straub, J., Newcombe, R., and Lovegrove, S. (2019): DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR’19), Long Beach, CA, USA, IEEE Press, pp. 165-174.
https://doi.org/10.1109/CVPR.2019.00025
[2021-Bronstein] Bronstein, M., Bruna, J., Cohen, T., and Veličković, P. (2021): Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges, arXiv:2104.13478v2 [cs.LG].
https://doi.org/10.48550/arXiv.2104.13478