Departamento de Ingeniería Cartográfica, Geodésica y Fotogrametría
Permanent URI for this community: https://hdl.handle.net/10953/36
This community gathers the documents produced by the Departamento de Ingeniería Cartográfica, Geodésica y Fotogrametría that meet the copyright requirements for dissemination in open access.
Browsing Departamento de Ingeniería Cartográfica, Geodésica y Fotogrametría by Author "Ariza-López, Francisco Javier"
Item: A method for checking the quality of geographic metadata based on ISO 19157 (Taylor & Francis, 2018)
Authors: Ureña, Manuel Antonio; Nogueras-Iso, Javier; Lacasta, Javier; Ariza-López, Francisco Javier
Abstract: With recent advances in remote sensing, location-based services and other related technologies, the production of geospatial information has increased exponentially in recent decades. Furthermore, to facilitate discovery of and efficient access to such information, spatial data infrastructures were promoted and standardized, with the consideration that metadata are essential for describing data and services. Standardization bodies such as the International Organization for Standardization have defined well-known metadata models such as ISO 19115. However, current metadata assets exhibit heterogeneous quality levels because they are created by different producers with different perspectives. To address quality-related concerns, several initiatives have attempted to define a common framework and test the suitability of metadata through automatic controls. Nevertheless, these controls focus on interoperability by testing the format of metadata and a set of controlled elements. In this paper, we propose a methodology for testing the quality of metadata that considers aspects other than interoperability. The proposal adapts ISO 19157 to the metadata case and has been applied to a corpus of the Spanish Spatial Data Infrastructure. The results demonstrate that our quality check helps determine different types of errors for all metadata elements and can be almost completely automated to enhance the significance of metadata.

Item: Accuracy Assessment of Digital Elevation Models (DEMs): A Critical Review of Practices of the Past Three Decades (MDPI, 2020)
Authors: Mesa-Mingorance, José Luis; Ariza-López, Francisco Javier
Abstract: An analysis of almost 200 references has been carried out in order to characterize the DEM (Digital Elevation Model) accuracy assessment methods applied in the last three decades. With regard to grid DEMs, 14 aspects of the accuracy assessment process have been analysed (DEM data source, data model, reference source for the evaluation, extent of the evaluation, applied models, etc.). In the references analysed, except in the rare cases where an accuracy assessment standard has been followed, accuracy criteria and methods are usually established according to premises set by the authors. Visual analyses and 3D analyses are few in number. The great majority of cases assess accuracy by means of point-type control elements; the use of linear and surface elements is very rare. Most cases still assume the normal model for errors (discrepancies), but analysis based on the data itself is making headway. Sample size and clear criteria for segmentation are still open issues. Almost 21% of cases analyse the accuracy of some derived parameter(s) or output, but no standardization exists for this purpose. Thus, accuracy assessment methods have improved, but there are still many aspects that require the attention of researchers and of professional associations or standardization bodies, such as a common vocabulary, standardized assessment methods, methods for meta-quality assessment, and indices with an applied quality perspective, among others.
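Most of the point-based assessments surveyed in the review above reduce to computing statistics of the elevation discrepancies at check points, either under the normal model (mean error, standard deviation, RMSE) or with robust, data-driven measures such as the median and the NMAD. As a minimal, generic sketch (not a procedure taken from the review), the following Python snippet computes both families of measures for an invented set of check points.

```python
import numpy as np

def dem_accuracy_measures(dem_z, ref_z):
    """Accuracy measures from DEM and reference elevations at the same check points."""
    errors = np.asarray(dem_z, dtype=float) - np.asarray(ref_z, dtype=float)
    rmse = np.sqrt(np.mean(errors ** 2))               # normal-model measure
    mean_error = errors.mean()                         # systematic bias
    std_error = errors.std(ddof=1)
    median_error = np.median(errors)                   # robust, data-driven measures
    nmad = 1.4826 * np.median(np.abs(errors - median_error))
    return {"ME": mean_error, "SD": std_error, "RMSE": rmse,
            "median": median_error, "NMAD": nmad}

# Invented elevations (m) at ten check points
dem_z = [101.2, 99.8, 100.5, 98.9, 100.1, 101.0, 99.5, 100.3, 99.9, 100.7]
ref_z = [100.9, 100.0, 100.2, 99.2, 100.0, 100.6, 99.8, 100.1, 100.2, 100.4]
print(dem_accuracy_measures(dem_z, ref_z))
```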
Item: An Analysis of Existing Production Frameworks for Statistical and Geographic Information: Synergies, Gaps and Integration (MDPI, 2021)
Authors: Ariza-López, Francisco Javier; Rodríguez-Pascual, Antonio; López-Pellicer, Francisco Javier; Vilches-Blázquez, Luis Manuel; Villar-Iglesias, Agustín; Masó, Joan; Díaz-Díaz, Efrén; Ureña, Manuel Antonio; González-Yanes, Alberto
Abstract: The production of official statistical and geospatial data is often in the hands of highly specialized public agencies that have traditionally followed their own paths and established their own production frameworks. In this article, we present the main frameworks of these two areas and focus on the possibility of, and the need for, better integration between them through the interoperability of systems, processes, and data. The statistical area is well led and has well-defined frameworks. The geospatial area lacks clear leadership, and its large number of standards establishes a framework that is not always obvious. The lack of a general and common legal framework is also highlighted. Additionally, three examples are offered: the first is the application of the spatial data quality model to the case of statistical data, the second is the application of the statistical process model to the geospatial case, and the third is the use of linked geospatial and statistical data. These examples demonstrate the possibility of transferring experiences and advances from one area to another. In this way, we emphasize the conceptual proximity of these two areas, highlighting synergies, gaps, and the potential for integration.

Item: Dataset of three-dimensional traces of roads (Nature Research, 2019)
Authors: Ariza-López, Francisco Javier; Mozas, Antonio Tomás; Ureña, Manuel Antonio; Gil-de-la-Vega, Paula
Abstract: We present a dataset of three-dimensional road traces captured by Global Navigation Satellite System (GNSS) techniques. It offers 138 traces (69 going and 69 returning), in addition to the actual mean axis of the road, determined by precise surveying techniques, to be used as ground truth for research activities. These data may serve as a test bed for research on data mining applications related to GNSS multitraces, particularly the development and testing of algorithms intended for mining mean-axis data from road multitraces. The data are suitable for the statistical analysis of both single-trace and multitrace datasets (e.g., outliers and biases).
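A basic exploratory operation on a multitrace dataset such as this one is to measure how far each GNSS trace point lies from the surveyed mean axis, which makes per-trace biases and outliers visible. The sketch below is purely illustrative: it does not use the published files or their actual layout, the coordinates are invented, and shapely is used only to compute unsigned 2D offsets from a reference axis.

```python
import numpy as np
from shapely.geometry import LineString, Point

def trace_offsets(axis_xy, trace_xy):
    """Unsigned 2D distance from each trace point to the reference axis."""
    axis = LineString(axis_xy)
    return np.array([axis.distance(Point(x, y)) for x, y in trace_xy])

# Invented reference axis and one noisy trace (projected coordinates, metres)
axis_xy = [(0, 0), (100, 0), (200, 10)]
rng = np.random.default_rng(0)
trace_xy = [(x, rng.normal(0.0, 1.5)) for x in np.linspace(0, 100, 50)]

d = trace_offsets(axis_xy, trace_xy)
mean_offset = d.mean()                                  # mean unsigned offset of this trace
outliers = np.flatnonzero(d > d.mean() + 3 * d.std())   # crude 3-sigma outlier flag
print(f"mean offset = {mean_offset:.2f} m, {outliers.size} flagged points")
```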
Item: Deep learning methods applied to digital elevation models: state of the art (Taylor & Francis, 2023)
Authors: Ruiz-Lendínez, Juan José; Ariza-López, Francisco Javier; Reinoso, Juan Francisco; Ureña, Manuel Antonio; Quesada-Real, Francisco José
Abstract: Deep Learning (DL) has a wide variety of applications in many thematic domains, including spatial information. Although with limitations, it is also starting to be considered in operations related to Digital Elevation Models (DEMs). This study reviews the DL methods applied in the field of altimetric spatial information in general, and DEMs in particular. Void Filling (VF), Super-Resolution (SR), landform classification and hydrography extraction are just some of the operations where traditional methods are being replaced by DL methods. Our review concludes that although these methods have great potential, there are aspects that need to be improved. More appropriate terrain information and algorithm parameterisation are some of the challenges that this methodology still needs to face.

Item: Expert Knowledge as Basis for Assessing an Automatic Matching Procedure (MDPI, 2021)
Authors: Ruiz-Lendínez, Juan José; Ariza-López, Francisco Javier; Ureña, Manuel Antonio
Abstract: The continuous development of machine learning procedures and of new ways of mapping based on the integration of spatial data from heterogeneous sources has resulted in the automation of many processes associated with cartographic production, such as positional accuracy assessment (PAA). The automation of the PAA of spatial data is based on automated matching procedures between corresponding spatial objects (usually building polygons) from two geospatial databases (GDB), which in turn are related to the quantification of the similarity between these objects. Therefore, assessing the capabilities of these automated matching procedures is key to making automation a fully operational solution in PAA processes. The present study explores the scope of these capabilities by means of a comparison with human capabilities. Using a genetic algorithm (GA) and a group of human experts, two experiments were carried out: (i) comparing the similarity values between building polygons assigned by both, and (ii) comparing the matching procedures developed in both cases. The results showed that the GA-expert agreement was very high, with a mean agreement percentage of 93.3% (experiment 1) and 98.8% (experiment 2). These results confirm the capability of machine-based procedures, and specifically of GAs, to carry out matching tasks.
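The automated matching assessed in this study rates candidate building-polygon pairs by their similarity. The paper's genetic algorithm relies on its own set of similarity measures, so the snippet below is only a generic stand-in for the idea: it scores candidate pairs with an area-overlap ratio (intersection over union) computed with shapely and keeps the best candidate above a threshold.

```python
from shapely.geometry import Polygon

def iou(a: Polygon, b: Polygon) -> float:
    """Intersection-over-union of two polygons (0 = disjoint, 1 = identical)."""
    union = a.union(b).area
    return a.intersection(b).area / union if union > 0 else 0.0

def best_match(candidate: Polygon, reference_polys, threshold=0.5):
    """Return (index, score) of the most similar reference polygon, or None."""
    scored = [(i, iou(candidate, ref)) for i, ref in enumerate(reference_polys)]
    i, score = max(scored, key=lambda t: t[1])
    return (i, score) if score >= threshold else None

# Invented building footprints from an assessed and a reference database
assessed = Polygon([(0, 0), (10, 0), (10, 8), (0, 8)])
reference = [Polygon([(20, 0), (30, 0), (30, 8), (20, 8)]),
             Polygon([(0.5, 0.3), (10.4, 0.3), (10.4, 8.2), (0.5, 8.2)])]
print(best_match(assessed, reference))   # matches the second, slightly shifted footprint
```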
Item: Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas (MDPI, 2018)
Authors: Ariza-López, Francisco Javier; Ruiz-Lendínez, Juan José; Ureña, Manuel Antonio
Abstract: In recent years, new approaches aimed at increasing the level of automation of positional accuracy assessment processes for spatial data have been developed. However, an aspect as significant as sample size has not yet been addressed in this context. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using a polygon-based methodology. Our study is based on a simulation process that extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used to determine the different sample sizes (which range from 5 km up to 100 km) is the length of the polygons' perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, the estimated distribution functions for each sample were compared with the population distribution function by means of the Kolmogorov–Smirnov test. The results show a significant reduction in the variability of the estimations when the sample size increases from 5 km to 100 km.
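The core of the experiment, drawing many samples of increasing size and checking how the resulting estimates converge on the population behaviour, can be sketched in a few lines. The code below is a simplified analogue rather than a reproduction of the study: random numbers stand in for the buffer-based discrepancies, sample size is a point count rather than a perimeter length, the number of simulations is reduced to keep the demo quick, and SciPy's two-sample Kolmogorov–Smirnov test compares each sample with the population.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
population = rng.normal(0.0, 2.0, 20_000)   # stand-in for buffer-based discrepancies (m)

for n in (20, 100, 500, 2000):              # stand-in for the 5 km to 100 km sample sizes
    estimates, ks_stats = [], []
    for _ in range(200):                    # the paper ran 1000 simulations per size
        sample = rng.choice(population, size=n, replace=False)
        estimates.append(np.sqrt(np.mean(sample ** 2)))          # an RMSE-type accuracy estimate
        ks_stats.append(ks_2samp(sample, population).statistic)  # sample CDF vs. population CDF
    print(f"n={n:5d}  spread of estimates={np.std(estimates):.3f}"
          f"  mean KS distance={np.mean(ks_stats):.3f}")
```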
Item: Quality of Metadata in Open Data Portals (IEEE, 2021)
Authors: Nogueras-Iso, Javier; Lacasta, Javier; Ureña, Manuel Antonio; Ariza-López, Francisco Javier
Abstract: During the last decade, numerous governmental, educational and cultural institutions have launched Open Data initiatives that have facilitated access to large volumes of datasets on the web. The main way to disseminate this availability of data has been the deployment of Open Data catalogs exposing metadata of these datasets, which are easily indexed by web search engines. Open Source platforms have greatly facilitated the work of institutions involved in Open Data initiatives, making the setup of Open Data portals an almost trivial task. However, few approaches have analyzed how precisely metadata describe the associated datasets. Taking into account the existing approaches for analyzing the quality of metadata in the Open Data context and other related domains, this work contributes to the state of the art by extending an ISO 19157-based method for checking the quality of geographic metadata to the context of Open Data metadata. Focusing on metadata models compliant with the Data Catalog Vocabulary (DCAT) proposed by the W3C, the extended method has been applied to the evaluation of the Open Data catalog of the Spanish Government. The results have also been compared with those obtained by the Metadata Quality Assessment methodology proposed at the European Data Portal.

Item: Quality specification and control of a point cloud from a TLS survey using ISO 19157 standard (Elsevier, 2022)
Authors: Ariza-López, Francisco Javier; Reinoso, Juan Francisco; García-Balboa, José Luis; Ariza-López, Iñigo
Abstract: This paper presents an application of the ISO 19157 framework to the case of a point cloud (PC) representing a heritage asset whose purpose is to serve specific use cases that could be managed in a building information modeling (BIM) environment. The main contribution of this study is to clarify, by means of a running example, the relationships between the different parts of the ISO 19157 framework when applied to heritage building information modeling (HBIM) products derived from terrestrial laser scanner (TLS) surveys. The paper proposes how to evaluate, control and report on the quality of the TLS survey of the Ariza Bridge (a 16th-century construction). To achieve this objective, the data quality specifications that must be met are defined by describing and identifying the requirements of five use cases of the data product: 3D visualization, location transfer, measurement, plane generation and absolute positioning. The specifications, according to ISO 19157, are formalized by selecting the data quality element to be measured, its scope, the measure used and the conformance level required for the element to be accepted. In addition, control methods for each quality element are proposed.
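One way to see how such specifications operate in practice is to encode each one as a record holding the quality element, its scope, the measure and the conformance level, and then check measured values against it. The following sketch illustrates that pattern only; the element names, scopes and thresholds are invented for the example and are not taken from the Ariza Bridge specification.

```python
from dataclasses import dataclass

@dataclass
class QualitySpec:
    element: str        # ISO 19157 data quality element, e.g. positional accuracy
    scope: str          # part of the product the specification applies to
    measure: str        # quality measure used
    threshold: float    # conformance level
    higher_is_better: bool = False

    def conforms(self, value: float) -> bool:
        return value >= self.threshold if self.higher_is_better else value <= self.threshold

# Invented specifications loosely inspired by two of the use cases named in the paper
specs = [
    QualitySpec("absolute positional accuracy", "whole point cloud",
                "RMSE of 3D control distances (m)", 0.05),
    QualitySpec("completeness (commission)", "bridge deck surface",
                "percentage of spurious points (%)", 2.0),
]

measured = {"absolute positional accuracy": 0.037, "completeness (commission)": 3.1}
for spec in specs:
    ok = spec.conforms(measured[spec.element])
    print(f"{spec.element}: measured {measured[spec.element]} -> {'PASS' if ok else 'FAIL'}")
```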
Item: Study of NSSDA Variability by Means of Automatic Positional Accuracy Assessment Methods (MDPI, 2019)
Authors: Ruiz-Lendínez, Juan José; Ariza-López, Francisco Javier; Ureña, Manuel Antonio
Abstract: Point-based standard methodologies (PBSM) suggest using 'at least 20' check points in order to assess the positional accuracy of a given spatial dataset. However, the reason for limiting the number of check points to 20 is not elaborated upon in the original documents provided by the mapping agencies that developed these methodologies. By means of theoretical analysis and experimental tests, several authors and studies have demonstrated that this limited number of points is clearly insufficient. Using the point-based methodology for the automatic positional accuracy assessment of spatial data developed in our previous study (Ruiz-Lendínez et al., 2017), and specifically a subset of check points obtained from the application of this methodology to two urban spatial datasets, the variability of National Standard for Spatial Data Accuracy (NSSDA) estimations has been analyzed according to sample size. The results show that the variability of NSSDA estimations decreases when the number of check points increases, and also that these estimations tend to underestimate accuracy. Finally, the graphical representation of the results can be used to give some guidance on the recommended sample size when PBSMs are used.
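The NSSDA statistic whose variability is studied here is, for horizontal accuracy, a fixed multiple of the radial RMSE of the check-point discrepancies (Accuracy_r = 1.7308 x RMSE_r in the simple case of the FGDC formula, when the x and y error components are of similar size). The sketch below computes the statistic for an invented set of check points; it illustrates the statistic only, not the authors' automatic point-matching procedure.

```python
import numpy as np

def nssda_horizontal(dx, dy):
    """NSSDA horizontal accuracy at the 95% confidence level.

    dx, dy : per-check-point coordinate discrepancies (tested minus reference).
    Uses the simple case Accuracy_r = 1.7308 * RMSE_r, which assumes
    RMSE_x and RMSE_y are of similar magnitude.
    """
    dx, dy = np.asarray(dx, float), np.asarray(dy, float)
    rmse_r = np.sqrt(np.mean(dx ** 2 + dy ** 2))
    return 1.7308 * rmse_r

# Invented discrepancies (m) for 20 check points, the usual PBSM minimum
rng = np.random.default_rng(1)
dx, dy = rng.normal(0, 0.4, 20), rng.normal(0, 0.4, 20)
print(f"NSSDA horizontal accuracy: {nssda_horizontal(dx, dy):.2f} m")
```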
Item: Thematic Accuracy Quality Control by Means of a Set of Multinomials (MDPI, 2019)
Authors: Ariza-López, Francisco Javier; Rodríguez-Avi, José; Alba-Fernández, María Virtudes; García-Balboa, José Luis
Abstract: The error matrix has been adopted as both the "de facto" and the "de jure" standard way to report on the thematic accuracy assessment of any remotely sensed data product. This perspective assumes that the error matrix can be considered as a set of values following a single multinomial distribution. However, this assumption about the underlying statistical model breaks down when true reference data are available for quality control. To overcome this problem, a new method for thematic accuracy quality control is proposed, which uses a multinomial approach for each category and is called QCCS (quality control column set). Its main advantage is that it allows a set of quality specifications to be stated for each class and tested for fulfilment. These requirements can be related to the percentage of correct classifications for a particular class, but also to the percentage of permissible misclassifications or confusions between classes. In order to test whether such specifications are met, an exact multinomial test is proposed for each category. Furthermore, if a global hypothesis test is desired, a Bonferroni correction is proposed. All these new approaches allow a more flexible way of understanding and testing thematic accuracy quality control than the classical methods based on the confusion matrix. For a better understanding, a practical example of an application to a classification with four categories is included.
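For a single class, the QCCS check can be read as an exact multinomial goodness-of-fit test of the observed column of counts against the hypothesised proportions, with a Bonferroni-adjusted threshold when all classes are tested jointly. The sketch below is a simplified, point-null version of that idea; the class names, proportions and counts are invented, and the paper's formulation of the specifications is richer than this.

```python
from scipy.stats import multinomial

def compositions(n, k):
    """All ways to split n reference samples into k ordered non-negative counts."""
    if k == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def exact_multinomial_pvalue(observed, p0):
    """Exact p-value: total probability, under the hypothesised proportions p0,
    of every outcome that is no more likely than the observed one."""
    n, k = sum(observed), len(observed)
    p_obs = multinomial.pmf(observed, n, p0)
    return sum(p for counts in compositions(n, k)
               if (p := multinomial.pmf(counts, n, p0)) <= p_obs + 1e-12)

# Invented QCCS-style example: per class, hypothesised proportions
# (correct class first, then tolerated confusions) and observed counts.
alpha, k_classes = 0.05, 3
columns = {
    "urban":  ([0.90, 0.05, 0.05], (44, 4, 2)),
    "forest": ([0.85, 0.10, 0.05], (40, 6, 4)),
    "water":  ([0.95, 0.03, 0.02], (45, 3, 2)),
}
p_values = {c: exact_multinomial_pvalue(obs, p0) for c, (p0, obs) in columns.items()}
for c, p in p_values.items():
    # Small p-values indicate the observed column is incompatible with its specification.
    print(f"{c}: exact p = {p:.3f}")
print("global (Bonferroni) rejection:", any(p < alpha / k_classes for p in p_values.values()))
```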