In this work we show how Physically Based Rendering (PBR) tools can be used to extend the training image datasets of Machine Learning (ML) algorithms for the recognition of built heritage. In the field of heritage valorization, the combination of Artificial Intelligence (AI) and Augmented Reality (AR) has made it possible to recognize built heritage elements with mobile devices, anchoring digital content to the physical environment in real time and thus making access to information linked to real space more intuitive and effective. However, the training data available for these systems is extremely limited, while accurate image recognition requires a large-scale image dataset, and manually collecting and annotating images is highly resource- and time-consuming. In this contribution we explore the use of PBR tools as a viable alternative to supplement an otherwise inadequate dataset.
