Automatic digital photogrammetry is a methodology that makes it possible to build a three-dimensional model starting from digital photographs or videos.
It is a technology that is very widespread today for creating three-dimensional models in many fields: topography, architecture, archaeology, geology, medicine, and graphics. Its great diffusion in recent years is due to the availability of low-cost, easy-to-use software and to the modest equipment required, consisting only of a digital camera and a medium-performance PC.
In this series of tutorials dedicated to automatic photogrammetry, we will cover all the steps needed to process a 3D model starting from simple digital photographs. We will discuss how this software works and look in particular at the operation of the 3DF Zephyr software. We will also cover how to take photographs correctly, how to make corrections to the resulting 3D model, and how to export it for the web and for 3D printing.
Automatic digital photogrammetry is a methodology that allows a three-dimensional model to be built starting from two-dimensional images.
It is a set of techniques and technologies that fall within the research field of computer vision and that derive, as an evolution, from traditional photogrammetry, the science of extracting metric information from photographs.
The term automatic photogrammetry has only recently gained acceptance in the specialist literature. The term Structure-from-Motion is also sometimes used, although this wording is imprecise, as it refers only to the first part of the image-matching process.
Making a three-dimensional model of an object means creating a corresponding digital copy that is metrically correct and in color.
The fundamental characteristic that the digital 3D model must possess is that it is metrically correct: the model must be at the right scale, and it must be possible to take precise measurements on it. A second important, albeit not fundamental, characteristic is color. This is not always necessary and depends on the purpose for which the 3D model is made. If the model is used on the web or in applications for tablets and smartphones, the color must be realistic; on the other hand, it is not essential in the case of a model created for reproduction by 3D printing with a technology that cannot reproduce color (for example SLA, DLP, or FFF).
The steps through which automatic photogrammetry software processes two-dimensional images to extract three-dimensional models are always the same. This means that once you understand what happens behind the processing, you can use any software. Furthermore, knowing the processing steps is essential in order to take the photographs correctly and to be able to intervene if problems arise during the creation of the 3D model.
For example, if our 3D model has holes or is incomplete, what can we do? What could be the problem? If the software does not process all our photographs, how can we intervene? If we don't know how the software reasons, we will not be able to identify the problem and understand how to solve it.
The starting point is always a set of photographs, that is, a set of two-dimensional digital images that the software processes to extract three-dimensional data.
The processing of the images takes place through four distinct and successive phases:
1. Structure-from-Motion (SfM) and Multi-View Stereo reconstruction (MVS): this is the fundamental phase, the most delicate and the longest in terms of software processing time. It is in this phase that the shooting geometry of the photographs is reconstructed and the dense point cloud, the raw data on which all subsequent processing is based, is computed;
2. Mesh reconstruction: starting from the dense point cloud, a continuous surface composed of polygons whose vertices are the points of the cloud is reconstructed;
3. Color application: the color is applied to the mesh, which by itself has no color attribute, in one of two alternative ways: color-per-vertex, in which the color of the points of the dense point cloud is transferred to the corresponding mesh vertices; or texture mapping, in which the images used in the survey are projected onto the polygons of the mesh;
4. Scaling: the 3D model must be brought to scale using at least one reference distance, because this software cannot deduce the real size of the objects that appear in the photographs.
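The four phases above can be sketched as a simple processing chain. The function names below are hypothetical placeholders, not the API of any real package; actual tools such as 3DF Zephyr perform each phase internally:

```python
import numpy as np

# Hypothetical stubs standing in for the four phases; a real tool
# implements each one with much more sophisticated algorithms.
def structure_from_motion(images):
    # Phase 1: recover camera poses and compute the dense point cloud
    cameras = ["cam_%d" % i for i in range(len(images))]
    dense_cloud = np.random.rand(1000, 3)
    return cameras, dense_cloud

def reconstruct_mesh(cloud):
    # Phase 2: connect the cloud points into triangular polygons
    faces = np.array([[0, 1, 2]])  # placeholder connectivity
    return cloud, faces

def apply_color(vertices, faces, images):
    # Phase 3: color-per-vertex or texture mapping
    return np.zeros((len(vertices), 3), dtype=np.uint8)

def scale(vertices, factor):
    # Phase 4: bring the model to real dimensions
    return vertices * factor

images = ["img_%02d.jpg" % i for i in range(20)]
cameras, cloud = structure_from_motion(images)
vertices, faces = reconstruct_mesh(cloud)
colors = apply_color(vertices, faces, images)
model = scale(vertices, 0.01)
```

The point of the sketch is only the order of the phases and what each one consumes and produces: images go in, and cameras, a cloud, a mesh, colors, and a scaled model come out, in that order.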
In order to reconstruct the three-dimensionality of a scene, it is necessary to recover the shooting position of each photograph, the so-called taking geometry, so that the position of the objects present in the scene can later be deduced by triangulation. While traditional photogrammetry uses the GPS data of the images or control points of known coordinates, automatic photogrammetry is based on the automatic identification of well-recognizable key points in three or more images, which serve to establish correspondences between the images and connect them together.
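Once the camera positions are known, the 3D position of a matched key point follows by triangulation. A minimal numpy sketch of linear (DLT) triangulation from two views, using toy camera matrices and an exactly known 3D point as the assumed test data:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two images.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image coords."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # null vector of A is the solution
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean coordinates

# Two toy cameras: one at the origin, one translated along the x axis
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)  # recovers X_true
```

With noisy real matches the intersection is no longer exact, which is why software solves a least-squares problem over many rays rather than intersecting two lines directly.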
In this phase, we move from a set of points to a continuous surface, which makes up the actual 3D model. Starting from the points of the dense point cloud (although it would also be possible to start from the sparse point cloud), a mesh composed of triangular polygons is generated, whose vertices correspond to the points of the cloud. The mesh is, therefore, a set of polygons, each defined by three vertices described by three-dimensional coordinates x, y, z.
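The data structure described here is simply an array of 3D vertices plus triples of vertex indices. A small numpy sketch with toy values, computing per-triangle normals and areas to show how geometric quantities fall out of this representation:

```python
import numpy as np

# A mesh: vertices (the cloud points) plus faces as index triples
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.1],
])
faces = np.array([[0, 1, 2], [1, 3, 2]])  # two triangles sharing an edge

# Per-face normal: cross product of two edge vectors of each triangle
v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
normals = np.cross(v1 - v0, v2 - v0)
areas = 0.5 * np.linalg.norm(normals, axis=1)  # first triangle: area 0.5
```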
The mesh originally has no color, and this must, therefore, be assigned to the polygons. The color can be attributed to the polygons that make up the mesh in two different ways: through color-per-vertex or texture mapping.
Since the vertices of each polygon correspond to the points of the point cloud, the color of the latter can be transferred to the corresponding polygons. This method is commonly referred to as color-per-vertex.
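A minimal numpy sketch of color-per-vertex transfer, with toy coordinates and colors as assumptions. Since the mesh vertices coincide with the cloud points, the transfer reduces to a lookup; here it is implemented as a nearest-neighbor search, which also works when the correspondence is not exactly one-to-one:

```python
import numpy as np

# Dense cloud: 3D points with an RGB color each
cloud_xyz = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
cloud_rgb = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=np.uint8)

# Mesh vertices coincide with the cloud points
mesh_vertices = cloud_xyz.copy()

# For each vertex, find the nearest cloud point and copy its color
dists = np.linalg.norm(mesh_vertices[:, None, :] - cloud_xyz[None, :, :], axis=2)
nearest = dists.argmin(axis=1)
vertex_colors = cloud_rgb[nearest]
```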
At the end of the previous processing phases, the model is geometrically correct, but it is not at the right scale; this is because the software has no way of calculating the dimensions of the elements present in the images. The 3D model must, therefore, be brought to real dimensions if we want to be able to take precise measurements on it or to derive products from it, such as plans and elevations.
Scaling a model is quite simple and can be done if at least one reference distance within the reconstructed 3D scene is known. By indicating to the software the actual measurement of an element that appears in the model, all the reconstructed elements can be brought to the right scale.
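Under the hood, this amounts to multiplying every vertex by a single factor derived from the one known distance. A minimal numpy sketch with hypothetical toy coordinates:

```python
import numpy as np

def scale_model(points, idx_a, idx_b, real_distance):
    """Scale a model so the distance between two picked points
    matches a known real-world measurement (e.g. in metres)."""
    model_distance = np.linalg.norm(points[idx_b] - points[idx_a])
    factor = real_distance / model_distance
    return points * factor

# Toy model: two markers are 2.0 model-units apart,
# but we measured 1.0 m between them on site
pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [1.0, 3.0, 0.5]])
scaled = scale_model(pts, 0, 1, 1.0)  # marker distance is now 1.0
```

Because a single uniform factor is applied, one known distance fixes the scale of the entire scene; a second known distance is still useful as a check on the reconstruction.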
Feb 03, 2020