Re-contextualizing the standing Sekhmet statues in the Temple of Ptah at Karnak through digital reconstruction and VR experience

Recent trends in the Digital Humanities – conceived as new modalities of collaborative, transdisciplinary and computational research and presentation – also strongly influence research approaches and presentation practices in museums. Indeed, ongoing projects in museums have considerably expanded digital access to data and information, as well as the documentation and visualization of ancient ruins and objects. In addition, 3D modelling and eXtended Reality have opened up new avenues for interacting with a wider public through digital reconstructions that allow both objects and sites to be presented through visual narratives based on multidisciplinary scholarly research. The article illustrates the use of 3D digital reconstruction and virtual reality to re-contextualise the standing statues of Sekhmet in the Temple of Ptah at Karnak, where they were found in 1818. Today, they are on display at Museo Egizio, Turin. The theoretical framework of the research and the operational workflow – based on the study of the available archaeological, textual, and pictorial data – are presented here.

3D Heritage Data Fruition and Management. Point Cloud Processing for Thematic Interpretation

Technologies and digital tools such as laser scanning and photogrammetry are nowadays widely used in the field of architectural heritage survey, as they are able to produce 3D models characterized by high metric and morphological accuracy. These databases are also becoming essential for the development of more effective interventions on heritage buildings. Despite the advancement of increasingly automated analytical procedures, the management and analysis of point cloud models can still be quite time-consuming and complex, depending on the specific assessments to be carried out. To optimize these processing steps, several research efforts are applying Artificial Intelligence processes to make predictions based on sample data. The aim of the paper is to analyse point cloud processing with a focus on geometric and radiometric features for diagnostic analysis. A specific focus is on possible in-depth uses of the intensity value as a benchmark for the assessment of historical surfaces, towards optimized interpretation and classification of the 3D data points, integrating data and information from different sensors. The point clouds under analysis have been acquired with different techniques; this provides an interesting opportunity to compare the results, in terms of intensity values, produced by different sensors. The paper analyses the state of the art and illustrates a set of outcomes obtained by the authors, examining two specific case studies in depth, in order to outline not only the main background and shortcomings in managing complex databases, but also possible innovations pointing to new research questions.
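
As a minimal illustration of the kind of processing discussed here (not the authors' pipeline), the sketch below loads a laser-scan point cloud and prepares the per-point intensity value as a radiometric feature alongside the geometry; the file name, the laspy/numpy toolchain, and the percentile-based normalisation are assumptions made for the example.

```python
# Minimal sketch (not the authors' pipeline): load a laser-scan point cloud and
# prepare the per-point intensity value as a radiometric feature.
# Assumes a LAS/LAZ file ("scan.las" is a placeholder) and the laspy + numpy libraries.
import laspy
import numpy as np

las = laspy.read("scan.las")                      # placeholder path
xyz = np.vstack([las.x, las.y, las.z]).T          # geometric features
intensity = np.asarray(las.intensity, dtype=float)

# Robust normalisation of intensity to [0, 1] using percentiles, so that scans
# from different sensors become roughly comparable before further analysis.
lo, hi = np.percentile(intensity, [2, 98])
intensity_norm = np.clip((intensity - lo) / (hi - lo), 0.0, 1.0)

# Stack geometry and radiometry into a single feature matrix that a later
# AI-based classification step could consume.
features = np.column_stack([xyz, intensity_norm])
print(features.shape)
```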

Comparative Analyses Between Sensors and Digital Data for the Characterization of Historical Surfaces

Within the “informational” potential of 3D digital models obtained by laser scanner survey, several types of data can be processed in addition to geometric features in order to widen the application of digitization to the conservation of Cultural Heritage. Among these data, the intensity value is a (potentially) powerful source of knowledge for the interpretation of surfaces. The visual-comparative analyses developed over years of experimentation demonstrate the need to target research towards comparative data, “sampling” the reflectance of different materials, measured with different sensors and in environments with different boundary conditions. Until recently the process offered interesting research directions in terms of calibration or control of results on specific materials, but remained difficult at a comparative level; today, thanks to new data segmentation processes and algorithmic procedures, further advancements and comparisons become possible, opening the way to new interpretative hypotheses. The paper explores ongoing experimentations aimed at comparing different radiometric and colorimetric data obtained by 3D surveying with different laser scanner technologies on historical surfaces, to support the identification of features directly on the 3D model. The goal is to test the link between the intensity value and materials, construction techniques, and decay pathologies, so that, in the future, this parameter can also be used as a radiometric feature in machine learning segmentation and classification algorithms. The contribution develops and deepens, at the application level, the theoretical background and the first experiments carried out on two case studies.
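
As a purely illustrative sketch of the comparative “sampling” idea, the snippet below computes intensity statistics for hypothetical material patches acquired with two different scanners; the material names, sensor labels, and synthetic values are invented for the example and do not come from the case studies.

```python
# Illustrative sketch (assumed data layout, not the authors' workflow): compare
# normalised intensity statistics of sampled material patches acquired with two
# different laser scanners, to check whether intensity discriminates materials.
import numpy as np

# Hypothetical per-material intensity samples, already normalised to [0, 1].
samples = {
    ("brick",   "scanner_A"): np.random.default_rng(0).normal(0.62, 0.05, 5000),
    ("brick",   "scanner_B"): np.random.default_rng(1).normal(0.58, 0.07, 5000),
    ("plaster", "scanner_A"): np.random.default_rng(2).normal(0.81, 0.04, 5000),
    ("plaster", "scanner_B"): np.random.default_rng(3).normal(0.77, 0.06, 5000),
}

for (material, sensor), values in samples.items():
    # Mean and spread of the radiometric response per material and per sensor:
    # systematic offsets between sensors would call for a calibration step.
    print(f"{material:8s} {sensor:10s} mean={values.mean():.3f} std={values.std():.3f}")
```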

Neural Radiance Fields (NeRF) for Multi-Scale 3D Modeling of Cultural Heritage Artifacts

This research aims to assess the adaptability of Neural Radiance Fields (NeRF) for the digital documentation of cultural heritage objects of varying size and complexity. We discuss the influence of object size, desired scale of representation, and level of detail on the choice of using NeRF for cultural heritage documentation, providing insights for practitioners in the field. Case studies range from historic pavements to architectural elements and buildings, representing the diverse, multi-scale scenarios encountered in heritage documentation procedures. The findings suggest that NeRFs perform well in scenarios with homogeneous textures, variable lighting conditions, reflective surfaces, and fine details. However, they exhibit higher noise and lower texture quality compared to other consolidated image-based techniques such as photogrammetry, especially in the case of small-scale artifacts.
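
For context, NeRF-based image synthesis rests on the standard volume rendering approximation (the general NeRF formulation, not something specific to the case studies reported here): the colour of a camera ray is accumulated from the densities σ_i and colours c_i predicted by the network at samples spaced δ_i apart,

```latex
\hat{C}(\mathbf{r}) \;=\; \sum_{i=1}^{N} T_i \,\bigl(1 - e^{-\sigma_i \delta_i}\bigr)\,\mathbf{c}_i,
\qquad
T_i \;=\; \exp\!\Bigl(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Bigr).
```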

Connecting Geometry and Semantics via Artificial Intelligence: From 3D Classification of Heritage Data to H-BIM Representations

Cultural heritage information systems, such as H-BIM, are becoming more and more widespread today, thanks to their potential to bring together, around a 3D representation, the wealth of knowledge related to a given object of study. However, the reconstruction of such tools from 3D architectural surveying is still largely deemed a lengthy and time-consuming process, with inherent complexities related to managing and interpreting the unstructured and unorganized data derived, e.g., from laser scanning or photogrammetry. Tackling this issue and starting from reality-based surveying, the purpose of this paper is to semi-automatically reconstruct parametric representations for H-BIM-related uses by means of the most recent 3D data classification techniques that exploit Artificial Intelligence (AI). The presented methodology consists of a first semantic segmentation phase, aiming at the automatic recognition, through AI, of architectural elements of historic buildings within point clouds; a Random Forest classifier is used for the classification task, evaluating the performance of the predictive model in each case. In a second stage, visual programming techniques are applied to the reconstruction of a conceptual mock-up of each detected element and to the subsequent propagation of the 3D information to other objects with similar characteristics. The resulting parametric model can be used for heritage preservation and dissemination purposes, in line with common practices implemented in modern H-BIM documentation systems. The methodology is applied to representative case studies of the medieval cloister typology, scattered across the Tuscan territory.
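
A minimal sketch of the classification stage is given below, assuming per-point geometric features (e.g. height above ground, normal components, local planarity) have already been extracted and labelled; the file names and feature set are placeholders, and the snippet is not the authors' implementation.

```python
# Minimal sketch of Random Forest classification of point-cloud features,
# assuming feature extraction and labelling have already been performed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

features = np.load("point_features.npy")   # placeholder: (n_points, n_features)
labels = np.load("point_labels.npy")       # placeholder: classes such as column, vault, wall

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=42, stratify=labels
)

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=42)
clf.fit(X_train, y_train)

# Evaluate the predictive model on held-out points, per-class and overall.
print(classification_report(y_test, clf.predict(X_test)))
```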

Versioning Virtual Reconstruction Hypotheses: Revealing Counterfactual Trajectories of the Fallen Voussoirs of Notre-Dame de Paris using Reasoning and 2D/3D Visualization

Virtual reconstruction should move beyond merely presenting 3D models by documenting the scientific context and reasoning underlying the reconstruction process. For instance, the collapsed arch in the nave of Notre-Dame de Paris serves as a case study to make explicit the reconstruction argumentation relating to the spatial configuration of the arch and its voussoirs. The experiment is twofold: (1) setting up the 3D dataset, in which the hypotheses are modeled as versions using logic programming, and (2) evaluating the scientific narrative of the reconstruction through both a custom 2D-3D visualization and competency questions on the enriched 3D data. Formalization, reasoning, and visualization are combined to explore the nonlinear scientific hypotheses and narrative of the reconstruction. The results explicitly show both the factual information on the physical and digital objects and the counterfactual propositions that underpin the reasoning at play in the reconstruction. The hypotheses are visualized as counterfactual trajectories, creating an open, dynamic visualization that enables the spatialized querying of conflicting interpretations and of the memory embedded in place.
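
As a simplified illustration only: the paper relies on logic programming to model hypotheses as versions, whereas the sketch below uses plain Python to convey the underlying idea of versioned hypotheses carrying factual and counterfactual propositions that can be queried; all identifiers, positions, and version labels are invented for the example.

```python
# Simplified Python analogue (the paper itself uses logic programming): versioned
# reconstruction hypotheses holding factual and counterfactual propositions.
from dataclasses import dataclass, field

@dataclass
class Proposition:
    subject: str        # e.g. a voussoir identifier (hypothetical)
    statement: str      # e.g. a placement claim
    factual: bool       # True for observed facts, False for counterfactual hypotheses

@dataclass
class HypothesisVersion:
    version_id: str
    author: str
    propositions: list = field(default_factory=list)

# Two competing (hypothetical) versions of a voussoir's original placement.
v1 = HypothesisVersion("v1", "team_A", [
    Proposition("voussoir_07", "found in collapse layer, nave bay 4", True),
    Proposition("voussoir_07", "originally at intrados position 12", False),
])
v2 = HypothesisVersion("v2", "team_B", [
    Proposition("voussoir_07", "found in collapse layer, nave bay 4", True),
    Proposition("voussoir_07", "originally at intrados position 14", False),
])

def conflicting(versions, subject):
    """A toy 'competency question': which counterfactual placements disagree?"""
    return {(v.version_id, p.statement) for v in versions
            for p in v.propositions if p.subject == subject and not p.factual}

print(conflicting([v1, v2], "voussoir_07"))
```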

A Proposal of Integration of Point Cloud Semantization and VPL for Architectural Heritage Parametric Modeling

Current architectural survey processes utilize point clouds generated by laser scanning and digital photogrammetry. Increasingly, these surveys produce 3D models, particularly parametric models, in what is known as the “scan to 3D model” or “scan to BIM” process. However, the phases of analysis and classification of architectural elements, segmentation and semantization of point clouds, and semi-automatic modeling remain complex and labor-intensive and require the active involvement of the scholar or modeler. These steps are usually performed manually, resulting in high subjectivity and low reproducibility. This paper proposes a reproducible workflow that automatically segments point clouds, identifies geometric shapes by comparing them with a library of ideal geometries, and extracts the points needed for modeling through mathematical analysis. The extracted information is then processed using a visual programming language (VPL) algorithm, imported into the VPL environment, and used for automated modeling. Initial results from an ongoing experiment on the automated modeling of vaults using point clouds from surveys are presented.
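
As an illustrative sketch of the shape-matching and parameter-extraction step (under the assumption that a 2D cross-section of a vault has already been sliced from the segmented cloud and projected to a plane), the snippet below fits an ideal circular arc by linear least squares and returns the centre and radius that a VPL routine could then use for parametric modelling; it is not the authors' code, and the sample points are synthetic.

```python
# Illustrative sketch: fit an ideal circular arc to a (hypothetical) 2D vault
# cross-section by linear least squares, extracting parameters for a VPL stage.
import numpy as np

def fit_circle(points_2d: np.ndarray):
    """Algebraic (Kasa) circle fit: x^2 + y^2 + a*x + b*y + c = 0."""
    x, y = points_2d[:, 0], points_2d[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    centre = np.array([-a / 2.0, -b / 2.0])
    radius = np.sqrt(centre @ centre - c)
    return centre, radius

# Synthetic noisy section points sampled on a semicircular vault profile.
rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi, 500)
section = np.column_stack([3.2 * np.cos(theta), 3.2 * np.sin(theta)])
section += rng.normal(0, 0.01, (500, 2))

centre, radius = fit_circle(section)
print(f"centre={centre.round(3)}, radius={radius:.3f}")  # parameters handed to the VPL environment
```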