Those meshes, if they reflect real-world usage (papers sometimes test on perfect inputs rather than real-world ones), are pretty damn good.
This may mean that meshing is near solved, dare I say it. If it really is, that's pretty momentous.
(As another commenter says, we're still waiting for texturing and materials to be solved.)
Over the last twenty years I've seen quite a few papers with novel meshing and remeshing algorithms. A few of them make it into specialty systems like gmsh; fewer make it into readily usable CAD and DCC end products. Of those that do, a not-insignificant fraction will crash your system from time to time (Quadremesh in ZBrush comes to mind).
Meshing algorithms are hard. They have cases upon cases, and some of those cases are statistically impossible and thus discounted by time-pressed developers, who forget that CAD and DCC programs have buttons for realizing statistically impossible outcomes (e.g., "align these three vertices in a line"). In that sense, maybe having an ML model with a random number generator somewhere, producing statistically sane outcomes, could be great ... :-)
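To make that concrete, here's a toy sketch (mine, not from any paper or product) of the kind of degeneracy a mesher has to guard against: three exactly collinear vertices form a zero-area triangle, and a naive circumcenter computation divides by that area.

```python
def circumcenter(a, b, c, eps=1e-12):
    """Circumcenter of 2D triangle abc, or None if the points are collinear."""
    # d is twice the signed area; exactly zero for collinear vertices --
    # the "statistically impossible" input a single align button can create.
    d = 2.0 * ((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    if abs(d) < eps:
        return None  # degenerate: refuse instead of dividing by ~zero
    sa = a[0] * a[0] + a[1] * a[1]
    sb = b[0] * b[0] + b[1] * b[1]
    sc = c[0] * c[0] + c[1] * c[1]
    ux = (sa * (b[1] - c[1]) + sb * (c[1] - a[1]) + sc * (a[1] - b[1])) / d
    uy = (sa * (c[0] - b[0]) + sb * (a[0] - c[0]) + sc * (b[0] - a[0])) / d
    return (ux, uy)
```

An algorithm that only "statistically never" hits the `None` branch still has to handle it, which is exactly where the case-upon-case pain comes from.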
We may need to temper our expectations. Those point clouds are "too clean".
Not exactly what you get in the real world, even with plenty of cleanup.
I didn't read the paper, but I half suspect those point clouds were randomly sampled from perfect meshes themselves, so maybe they are perfect clouds?
I wish I had time to dig into this. At least this is still a good result on the path to perfect automatic meshing.
The paper hints that these are indeed generated from pre-existing meshes. The language isn't super clear about it, but they are.
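For anyone curious what "generated from pre-existing meshes" typically looks like, here's a minimal sketch (my own; the paper may do it differently): pick triangles with probability proportional to area, then sample uniformly inside each via barycentric coordinates. The result is noise-free by construction, which is why such clouds are "too clean."

```python
import random

def triangle_area(a, b, c):
    # half the magnitude of the cross product of two edge vectors
    ux, uy, uz = (b[i] - a[i] for i in range(3))
    vx, vy, vz = (c[i] - a[i] for i in range(3))
    cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

def sample_points(vertices, faces, n, seed=0):
    """Sample n points uniformly from a triangle mesh's surface."""
    rng = random.Random(seed)
    tris = [tuple(vertices[i] for i in f) for f in faces]
    areas = [triangle_area(*t) for t in tris]
    points = []
    for _ in range(n):
        a, b, c = rng.choices(tris, weights=areas)[0]
        r1, r2 = rng.random(), rng.random()
        s = r1 ** 0.5                # sqrt warp makes the density uniform
        u, v = 1 - s, s * (1 - r2)   # barycentric weights (w = 1 - u - v)
        w = 1 - u - v
        points.append(tuple(u * a[i] + v * b[i] + w * c[i] for i in range(3)))
    return points
```

A real scan, by contrast, adds sensor noise, outliers, and occlusion holes on top of this.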
I've tried MeshAnything v2 in the past (it's mentioned in the prior art), and it never succeeded in my experience. Maybe this one is superior, but I'd need to try it.
One must also acknowledge the inherent biases of point clouds derived from pre-existing meshes.
I thought this was going to be a competitor to Google's CAT3D or Microsoft's TRELLIS, but it's a different type of tool (still cool) for creating meshes from point clouds.
Relatedly, do point clouds generated from fixed-point imagers include "negative ray" data for where points can't be?
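The "negative ray" idea can be sketched as free-space carving (my own illustration; names and grid parameters are made up, not from any scanner SDK): a depth sensor at a known origin doesn't just report "there is a point at p" — every voxel along the ray from the sensor to p is known to be empty.

```python
def carve_free_space(origin, hits, voxel=0.1, step=0.05):
    """Return voxel indices known to be empty, given sensor origin and hit points."""
    empty, occupied = set(), set()
    for p in hits:
        # the voxel containing the hit is occupied, never empty
        occupied.add(tuple(int(p[i] // voxel) for i in range(3)))
        d = tuple(p[i] - origin[i] for i in range(3))
        dist = sum(c * c for c in d) ** 0.5
        k = step
        while k < dist:  # march along the ray, stopping short of the hit
            t = k / dist
            q = tuple(origin[i] + t * d[i] for i in range(3))
            empty.add(tuple(int(q[i] // voxel) for i in range(3)))
            k += step
    return empty - occupied
```

A mesher that consumed this free-space constraint could reject surfaces the sensor has proven can't exist, which a bare point list can't express.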
The meshes look great.
Can't wait for this team to start tackling texture coordinates and texture packing.
Texture coordinates can already be posed as a constrained minimization problem, so in that sense they're already "solved": you can brute-force any quality level by throwing enough CPU at a solver like http://ceres-solver.org/
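For a feel of what "UVs as minimization" means, here's a toy Tutte-style embedding (a classic result, but this particular sketch and its function names are mine): pin the boundary to a circle, then solve for interior vertices as the average of their neighbors, which minimizes a simple spring energy. Real parameterizers use better energies (LSCM, ARAP) and real solvers like Ceres; this only shows the shape of the problem.

```python
import math

def tutte_uvs(n_verts, edges, boundary, iters=2000):
    """Spring-energy UV embedding with the boundary pinned to the unit circle."""
    uv = [[0.0, 0.0] for _ in range(n_verts)]
    fixed = set(boundary)
    for k, v in enumerate(boundary):
        a = 2 * math.pi * k / len(boundary)
        uv[v] = [math.cos(a), math.sin(a)]
    nbrs = [[] for _ in range(n_verts)]
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    for _ in range(iters):  # Jacobi iteration: each step lowers the energy
        new = [p[:] for p in uv]
        for v in range(n_verts):
            if v in fixed or not nbrs[v]:
                continue
            new[v] = [sum(uv[n][c] for n in nbrs[v]) / len(nbrs[v])
                      for c in (0, 1)]
        uv = new
    return uv
```

The point stands, though: "solvable by minimization" still leaves seam placement and chart packing, which is where the hard judgment calls live.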