Paul Tice, a 3D scanning consultant with 20 years of experience in the field, has predicted that the evolution of 3D scanning could one day make 3D modeling obsolete.
Tice is the CEO of Oregon-based 3D visualization and design company ToPa 3D. His article, published on LinkedIn and picked up by 3D technology news site Spar 3D, has caused a stir, specifically a statement referring to a point-cloud “baking” process used by Australian 3D imaging company Euclideon.
The problem with point clouds
Point clouds are the data collected by 3D scanning hardware such as FARO’s Focus 3D Laser Scanner and the Einscan Pro from Shining 3D. The basic principle when capturing a 3D object is that the scanner feeds back a single point wherever its light beam touches a surface. Millions of these points are created when scanning even the smallest of objects, and this wealth of data can be problematic to manage. CAD software can connect the dots, but this is a resource-intensive process, and perfecting the finished models is often painstaking.
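To give a sense of why raw scan data is hard to manage, here is a back-of-the-envelope sketch (the function name and figures are our own illustration, not from any scanner vendor): storing just the x, y, z coordinates of a scan, before any color data or meshing, already runs to hundreds of megabytes.

```python
# Illustrative only: the raw storage cost of an uncompressed point cloud,
# assuming three 32-bit floats (x, y, z) per point.
def point_cloud_bytes(num_points: int, floats_per_point: int = 3,
                      bytes_per_float: int = 4) -> int:
    """Raw size in bytes of an uncompressed xyz point cloud."""
    return num_points * floats_per_point * bytes_per_float

# A hypothetical 50-million-point scan, coordinates only:
size_mb = point_cloud_bytes(50_000_000) / 1024 / 1024
print(f"50M points = {size_mb:.0f} MB")  # 572 MB
```

Real scans also carry intensity and color per point, so the practical figures are larger still.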
Euclideon’s Unlimited Detail algorithm, integrated into their SOLIDSCAN software, is capable of rendering point clouds in real time for interactive 3D visualization of a scan. The company say, “this algorithm efficiently grabs only one point for every screen pixel” instead of taking every tiny point into account.
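Euclideon has not published the Unlimited Detail algorithm, so the following is only a naive sketch of the stated idea, with our own function names and a simple pinhole-camera projection: for every screen pixel, keep a single point (the one nearest the viewer) rather than processing every point in the cloud.

```python
# Naive sketch of "one point per screen pixel": project each point with a
# pinhole camera and keep, per pixel, only the nearest point (a point-based
# z-buffer). This is NOT Euclideon's algorithm, just the principle it claims.
from typing import Dict, List, Tuple

Point = Tuple[float, float, float]  # x, y, z in camera space

def one_point_per_pixel(points: List[Point], width: int, height: int,
                        focal: float = 500.0) -> Dict[Tuple[int, int], Point]:
    best: Dict[Tuple[int, int], Point] = {}
    for x, y, z in points:
        if z <= 0:  # point is behind the camera
            continue
        # Perspective projection onto the screen plane
        px = int(width / 2 + focal * x / z)
        py = int(height / 2 - focal * y / z)
        if 0 <= px < width and 0 <= py < height:
            key = (px, py)
            if key not in best or z < best[key][2]:
                best[key] = (x, y, z)  # nearer point wins the pixel
    return best

# Two points land on the same pixel; only the nearer one survives.
cloud = [(0.0, 0.0, 1.0), (0.0, 0.0, 2.0)]
frame = one_point_per_pixel(cloud, 640, 480)
print(len(frame))  # 1
```

The appeal of the idea is that the work per frame is bounded by the number of screen pixels, not by the size of the cloud, which is why interactive rendering of huge scans becomes plausible.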
Referencing Euclideon’s process, Tice created a stir with the following statement:
If Euclideon can animate point clouds as demonstrated in the video above, then it should be possible to assign metadata to each point with said data hosted perhaps as IP’s in the IoT. And, if that’s possible, then we at ToPa believe that one day, component and mesh modeling will be antiquated because intelligent models of the future will be composed of pure point cloud data.
Here Tice references two potential visions of the future of point clouds: first, collecting metadata along with a point cloud; second, intelligent 3D models independent of a mesh.
Educating 3D scanning in the Internet of Things
To unpack the first implication: feeding metadata to intelligent IoT systems could help machines react better to certain conditions. Information such as a point’s location, color, texture and scale, and how it relates to other points, could contribute to the advancement of areas such as virtual reality (VR).
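Tice does not specify how per-point metadata would be structured, but the idea can be sketched as a simple record type. The field names and example values below are purely hypothetical, our own illustration of the kinds of attributes mentioned above:

```python
# Hypothetical sketch of a "smart" point carrying its own metadata,
# including an address by which it might be reachable in the IoT.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SmartPoint:
    position: Tuple[float, float, float]   # location in the scan
    color: Tuple[int, int, int]            # captured RGB value
    material: str                          # e.g. a surface/texture class
    scale: float                           # sampling footprint of the point
    neighbours: List[int] = field(default_factory=list)  # related points
    address: str = ""                      # e.g. an IP address in the IoT

p = SmartPoint(position=(1.2, 0.4, 3.1), color=(200, 180, 160),
               material="gravel", scale=0.002, address="2001:db8::7")
print(p.material)  # gravel
```

At millions of points per scan, attaching a record like this to every point multiplies the storage problem described earlier, which is one reason the idea remains a vision rather than current practice.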
VR is the topic of the moment for many tech companies following the introduction of the HTC Vive headset. Proving VR’s potential for real-world applications beyond videogames, 3D software giants Dassault Systèmes have integrated the HTC Vive into their 3DEXPERIENCE product viewer, allowing users an immersive perspective on their 3D designs.
In the second instance, ‘baking’ point clouds in this way, instead of meshing them, would give faster and more reliable access to 3D visualization. Euclideon reference a 3D Google Earth-style image as the future of such a process:
Imagine a 3D landscape that is large enough to stretch off to the horizon, yet where you can zoom in on individual, unique pieces of gravel scattered across the ground – all within less than a second.
It’s certainly an exciting prospect for gamers too. Consolidating sparse point cloud data as a surface instead of using polygons could give more realistic virtual worlds and lighten the load on a console’s RAM.
HOLOVERSE – FIRST EVER HOLOGRAM ROOM ARCADE by EuclideonOfficial on YouTube
The conclusion drawn by Tice is, however, still our favourite potential implication:
Imagine entire planets surveyed with LiDAR and viewed in real time
We don’t know about you, but we’re certainly ready for a digital walk on the moon.
Featured image shows a 3D point cloud environment rendered by Euclideon’s SOLIDSCAN process. Image via EuclideonOfficial on YouTube