Computer graphics meets image fusion: The power of texture baking to simultaneously visualise 3D surface features and colour


In recent years, structure-from-motion and multi-view stereo pipelines have become omnipresent in the cultural heritage domain. The fact that such Image-Based Modelling (IBM) approaches can provide a photo-realistic texture along with the three-dimensional (3D) digital surface geometry is often considered a unique selling point, particularly for cases that aim for a visually pleasing result. However, this texture can also obscure the underlying geometrical details of the surface, making it very hard to assess the morphological features of the digitised artefact or scene. Instead of constantly switching between the textured and untextured versions of the 3D surface model, this paper presents a new method to generate a morphology-enhanced colour texture for the 3D polymesh. The presented approach aims to overcome this switching between object visualisations by fusing the original colour texture data with a specific depiction of the surface normals. Whether applied to the original 3D surface model or a low-resolution derivative, this newly generated texture not only conveys the colours properly but also enhances the small- and large-scale spatial and morphological features that are hard or impossible to perceive in the original textured model. In addition, the technique is very useful for low-end 3D viewers, since no additional memory or computing capacity is needed to convey relief details properly. Apart from simple visualisation purposes, the textured 3D models are now also better suited for on-surface interpretative mapping and the generation of line drawings.
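The fusion step described above, combining the original colour texture with a shading depiction derived from the surface normals, can be illustrated with a minimal sketch. Note that this is an assumption-laden simplification, not the paper's actual pipeline: it uses plain Lambertian shading from a single hypothetical light direction and a simple linear blend, whereas the paper's "specific depiction of the surface normals" and fusion weights are defined in the paper itself. The function name and parameters are illustrative only.

```python
import numpy as np

def bake_morphology_texture(albedo, normals, light_dir=(0.3, 0.3, 0.9), blend=0.6):
    """Fuse a colour (albedo) texture with shading derived from a surface-normal
    map, so that relief reads directly from the baked texture.

    albedo  : (H, W, 3) float array in [0, 1], the original colour texture.
    normals : (H, W, 3) float array of unit surface normals per texel.
    blend   : 0 keeps the pure colour texture, 1 fully modulates it by shading.
    """
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    # Lambertian shading term per texel; clamp back-facing values to zero.
    shade = np.clip(np.einsum('hwc,c->hw', normals, light), 0.0, 1.0)
    # Linear blend: retain part of the original brightness, modulate the rest.
    fused = albedo * ((1.0 - blend) + blend * shade[..., None])
    return np.clip(fused, 0.0, 1.0)
```

Because the result is an ordinary texture image, it can be mapped onto the mesh (or a decimated derivative) by any viewer with no extra shading cost at display time, which is the property the abstract highlights for low-end 3D viewers.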

ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences