Browsing by Author "Jurado, Juan M."
Showing 1 - 8 of 8
Item
A Machine Learning Model for Early Prediction of Crop Yield, Nested in a Web Application in the Cloud: A Case Study in an Olive Grove in Southern Spain (MDPI, 2022-08-31)
Cubillas, Juan J.; Ramos, María I.; Jurado, Juan M.; Feito, Francisco R.

Predictive systems are a crucial tool for management and decision-making in any productive sector. In agriculture, it is especially valuable to have advance information on the profitability of a farm: depending on the time of year when this information becomes available, important decisions can be made that affect the economic balance of the farm. The aim of this study is to develop an effective model for predicting crop yields in advance that is accessible and easy to use by the farmer or farm manager from a web-based application. An olive orchard in the Andalusia region of southern Spain was used as the case study. The model was estimated using spatio-temporal training data: yield data from eight consecutive years and more than twenty meteorological parameters, automatically retrieved from public web services and recorded by a weather station located near the sample farm. The workflow selects the parameters that influence the crop prediction and discards those that introduce noise into the model. The main contribution of this research is the early prediction of crop yield with absolute errors below 20%, which is crucial for making decisions on tillage investments and crop marketing.

Item
An efficient method for acquisition of spectral BRDFs in real-world scenarios (ELSEVIER, 2022-02)
Jurado, Juan M.; Jiménez-Pérez, J. Roberto; Pádua, Luís; Feito, Francisco R.; Sousa, Joaquim J.

Modelling of material appearance from reflectance measurements has become increasingly prevalent due to the development of novel methodologies in Computer Graphics.
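The crop-yield study in the first item above selects the meteorological parameters that influence the prediction and fits a model to eight seasons of data. A minimal, hypothetical sketch of that kind of workflow is given below; all data, thresholds and variable names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_params = 8, 20                      # 8 seasons, 20+ weather parameters
X = rng.normal(size=(n_years, n_params))       # stand-in for weather features
true_w = np.zeros(n_params)
true_w[:3] = [2.0, -1.5, 0.8]                  # only a few parameters matter
y = X @ true_w + rng.normal(scale=0.05, size=n_years)  # stand-in for yield

# Rank parameters by absolute correlation with yield; keep the top 3 and
# discard the rest as "noise" (the paper's selection criterion is not shown).
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_params)])
keep = np.argsort(corr)[-3:]

# Fit an ordinary least-squares model on the retained parameters.
w, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
pred = X[:, keep] @ w
mape = np.mean(np.abs(pred - y) / np.abs(y))   # mean absolute relative error
```

This is only a schematic stand-in for the feature-selection-plus-regression idea; the actual model family used in the paper is not specified in the abstract.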
In the last few years, some advances have been made in measuring light-material interactions by employing goniometers/reflectometers under specific laboratory constraints. A wide range of applications benefit from data-driven appearance modelling techniques and material databases to create photorealistic scenarios and physically based simulations. However, important limitations arise from the current material scanning process, mostly related to the high diversity of materials existing in the real world, the tedious material-scanning process and the characterisation of spectral behaviour. Consequently, new approaches are required both for the automatic material acquisition process and for the generation of measured material databases. In this study, a novel approach for material appearance acquisition using hyperspectral data is proposed. A dense 3D point cloud filled with spectral data was generated from the images obtained by an unmanned aerial vehicle (UAV) equipped with an RGB camera and a hyperspectral sensor. The observed hyperspectral signatures were used to recognise natural and artificial materials in the 3D point cloud according to spectral similarity. Then, a parametrisation of the Bidirectional Reflectance Distribution Function (BRDF) was carried out by sampling the BRDF space for each material. Consequently, each material is characterised by multiple samples with different incoming and outgoing angles. Finally, an analysis of BRDF sample completeness is performed considering four sunlight positions and a 16×16 resolution for each material.
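The completeness analysis described above can be sketched, hypothetically, as binning observed (incoming, outgoing) angle pairs into a 16×16 grid and reporting the fraction of filled bins. The binning scheme and the sample angles below are illustrative, not the paper's actual parametrisation.

```python
import numpy as np

RES = 16  # 16x16 angular resolution per material, as in the abstract

def brdf_completeness(theta_in, theta_out):
    """Fraction of filled bins when binning (incoming, outgoing) elevation
    angles (degrees, 0..90) into a RES x RES occupancy grid."""
    grid = np.zeros((RES, RES), dtype=bool)
    i = np.clip((np.asarray(theta_in) / 90.0 * RES).astype(int), 0, RES - 1)
    o = np.clip((np.asarray(theta_out) / 90.0 * RES).astype(int), 0, RES - 1)
    grid[i, o] = True
    return grid.sum() / grid.size

# Illustrative sun/view elevation samples for one material.
rng = np.random.default_rng(1)
samples_in = rng.uniform(0, 90, 500)
samples_out = rng.uniform(0, 90, 500)
coverage = brdf_completeness(samples_in, samples_out)
```

With only four sunlight positions per flight, the incoming directions cluster in a few rows of such a grid, which is why completeness is worth analysing at all.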
The results demonstrated the capability of the technology used and the effectiveness of our method in applications such as spectral rendering and real-world material acquisition and classification.

Item
An Efficient Method for Generating UAV-Based Hyperspectral Mosaics Using Push-Broom Sensors (IEEE, 2021-06)
Jurado, Juan M.; Pádua, Luís; Hruska, Jonas; Feito, Francisco R.; Sousa, Joaquim J.

Hyperspectral sensors mounted in unmanned aerial vehicles offer new opportunities to explore high-resolution multitemporal spectral analysis in remote sensing applications. Nevertheless, the use of hyperspectral data still poses challenges, mainly in the postprocessing needed to correct the high geometric deformation of the images. In general, the acquisition of high-quality hyperspectral imagery is achieved through a time-consuming and complex processing workflow. However, this effort is mandatory when using hyperspectral imagery in a multisensor data fusion perspective, such as with thermal infrared imagery or photogrammetric point clouds. Push-broom hyperspectral sensors provide high spectral resolution data, but their scanning acquisition architecture makes it more challenging to create geometrically accurate mosaics from multiple hyperspectral swaths. In this article, an efficient method is presented to correct geometrical distortions in hyperspectral swaths from push-broom sensors by iteratively aligning them with an RGB photogrammetric orthophoto mosaic. Taking preprocessed hyperspectral swaths as input, and apart from the introduction of some control points, the workflow is fully automatic and consists of: adaptive swath subdivision into multiple fragments; detection of significant image features; estimation of valid matches between individual swaths and the RGB orthophoto mosaic; and calculation of the best geometric transformation model for the retrieved matches.
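The final workflow step above, fitting a geometric transformation to the retrieved matches, can be sketched as a least-squares affine fit between matched swath and orthomosaic coordinates. This is a generic illustration of that step, not the paper's actual transformation model, and the matched points below are synthetic.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points onto dst points.
    src, dst: (N, 2) arrays of matched coordinates; returns a 2x3 matrix M
    such that dst ~ [x, y, 1] @ M.T."""
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T

# Synthetic check: matches related by a known 5-degree rotation + translation.
rng = np.random.default_rng(2)
src = rng.uniform(0, 100, (50, 2))              # points in a swath fragment
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = src @ R.T + np.array([3.0, -1.0])         # matched orthomosaic points

M = fit_affine(src, dst)
warped = np.hstack([src, np.ones((50, 1))]) @ M.T  # swath corrected into mosaic frame
```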
As a result, the geometrical distortions of the hyperspectral swaths are corrected and an orthomosaic is generated. This methodology provides an expeditious solution able to produce a hyperspectral mosaic with an accuracy ranging from two to five times the ground sampling distance of the high-resolution RGB orthophoto mosaic, enabling the integration of hyperspectral data with data from other sensors for multiple applications.

Item
An optimized approach for generating dense thermal point clouds from UAV-imagery (ELSEVIER, 2021-12)
López, Alfonso; Jurado, Juan M.; Ogayar, Carlos J.; Feito, Francisco R.

Thermal infrared (TIR) images acquired from Unmanned Aerial Vehicles (UAVs) are gaining scientific interest in a wide variety of fields. However, the reconstruction of three-dimensional (3D) point clouds from consumer-grade TIR images presents multiple drawbacks as a consequence of their low resolution and induced aberrations. Consequently, these problems may lead photogrammetric techniques, such as Structure from Motion (SfM), to generate poor results. This work proposes the use of RGB point clouds estimated from SfM as the input for building thermal point clouds. For that purpose, RGB and thermal imagery are registered using the Enhanced Correlation Coefficient (ECC) algorithm after removing acquisition errors, thus allowing us to project the TIR images onto the RGB point cloud. Furthermore, we consider several methods to provide accurate thermal values for each 3D point. First, the occlusion problem is solved through two different approaches, so that points that are not visible from a given viewing angle do not erroneously receive values from foreground objects. Then, we propose a flexible method to aggregate multiple thermal values that accounts for the dispersion of the contributing image samples, thereby minimizing measurement error.
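One way to picture the dispersion-aware aggregation just described: each 3D point collects thermal readings from several TIR images, and readings far from the robust centre (e.g. a foreground leak past the occlusion test) are discarded before averaging. This is a hypothetical sketch of that idea, not the paper's actual aggregation rule; the cutoff and values are illustrative.

```python
import numpy as np

def aggregate_thermal(samples, k=1.5):
    """Aggregate the thermal values observed for one 3D point, discarding
    samples farther than k standard deviations from the median, then
    returning the mean of the retained samples."""
    s = np.asarray(samples, dtype=float)
    if s.size == 1:
        return float(s[0])
    keep = np.abs(s - np.median(s)) <= k * s.std()
    return float(s[keep].mean())

# One point seen in five TIR images; 55.0 plays the role of an erroneous
# foreground value that the dispersion test should reject.
value = aggregate_thermal([20.1, 20.4, 19.8, 20.2, 55.0])
```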
A naive classification algorithm is then applied to the thermal point clouds as a case study for evaluating the temperature of vegetation and ground points. As a result, our approach builds thermal point clouds with up to 798.69% more point density than the results from other commercial solutions. Moreover, it minimizes the build time by using parallel computing for time-consuming tasks. Despite obtaining larger point clouds, we report up to 96.73% less processing time per 3D point.

Item
An out-of-core method for GPU image mapping on large 3D scenarios of the real world (ELSEVIER, 2022-03)
Jurado, Juan M.; Padrón, Emilio J.; Jiménez, J. Roberto; Ortega, Lidia

Image mapping on huge 3D scenarios of the real world is one of the most fundamental and computationally expensive processes for the integration of multi-source sensing data. Recent studies focused on the observation and characterization of Earth have been enhanced by the proliferation of Unmanned Aerial Vehicles (UAVs) and sensors able to capture massive datasets with a high spatial resolution. Despite the advances in manufacturing new cameras and versatile platforms, only a few methods have been developed to characterize the study area by fusing heterogeneous data such as thermal, multispectral or hyperspectral images with high-resolution 3D models. The main reason for this lack of solutions is the challenge of integrating multi-scale datasets and the high computational effort required for image mapping on dense and complex geometric models. In this paper, we propose an efficient pipeline for multi-source image mapping on huge 3D scenarios. Our GPU-based solution significantly reduces the run time and allows us to generate enriched 3D models on-site. The proposed method is out-of-core and uses the available GPU resources of the machine to perform two main tasks: (i) image mapping and (ii) occlusion testing.
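The out-of-core idea above, partitioning a point cloud that does not fit in GPU memory and processing one partition at a time, can be sketched on the CPU as follows. The byte budget, chunk policy and per-chunk "kernel" (a centroid) are all illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

BYTES_PER_POINT = 3 * 4          # x, y, z stored as float32
GPU_BUDGET = 1024                # hypothetical memory budget in bytes (tiny, for demo)

def process_out_of_core(points, budget=GPU_BUDGET):
    """Process a point cloud in chunks that each fit the memory budget,
    mimicking the out-of-core spatial partitioning described above.
    Each chunk would be uploaded to the GPU; here the per-chunk 'kernel'
    is just a centroid computation."""
    chunk = max(1, budget // BYTES_PER_POINT)
    centroids = []
    for start in range(0, len(points), chunk):
        part = points[start:start + chunk]     # resident partition
        centroids.append(part.mean(axis=0))    # stand-in for a GPU kernel
    return np.array(centroids)

# Toy 1000-point cloud; the real scenarios hold 66M-542M points.
cloud = np.arange(3000, dtype=np.float32).reshape(1000, 3)
per_chunk = process_out_of_core(cloud)
```

Sizing the partitions from the device budget rather than a fixed count is what lets the same pipeline run on a SoC, a laptop or a desktop GPU.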
We deploy highly optimized GPU kernels for image mapping and for the detection of self-hidden geometry in the 3D model, as well as a GPU-based parallelization scheme that manages the 3D model through several spatial partitions sized according to the GPU capabilities. Our method has been tested on 3D scenarios with different point cloud densities (66M, 271M, 542M points) and two sets of multispectral images collected in two drone flights. The proposed method was launched on three platforms: (i) a System on a Chip (SoC), (ii) a consumer-grade laptop and (iii) a desktop PC. The results demonstrate the method's capabilities in terms of performance and its versatility for running on commodity hardware. Thus, by taking advantage of GPUs, this method opens the door for embedded and edge computing devices to perform 3D image mapping on large-scale scenarios in near real time.

Item
Multispectral mapping on 3D models and multi-temporal monitoring for individual characterization of olive trees (MDPI, 2020-02)
Jurado, Juan M.; Ortega, Lidia; Cubillas, Juan J.; Feito, Francisco R.

Observing and characterizing the 3D structure of plants to gain comprehensive knowledge about their status still poses a challenge in Precision Agriculture (PA). The complex branching and self-hidden geometry in the plant canopy are some of the existing problems for the 3D reconstruction of vegetation. In this paper, we propose a novel application for the fusion of multispectral images and high-resolution point clouds of an olive orchard. Our methodology is based on a multi-temporal approach to study the evolution of olive trees. The process is fully automated and no human intervention is required to characterize the point cloud with the reflectance captured by multiple multispectral images. The main objective of this work is twofold: (1) the mapping of multispectral images onto a high-resolution point cloud and (2) the multi-temporal analysis of morphological and spectral traits across two flight campaigns.
Initially, the study area is modeled by taking multiple overlapping RGB images with a high-resolution camera mounted on an unmanned aerial vehicle (UAV). In addition, a UAV-based multispectral sensor is used to capture the reflectance in several narrow bands (green, near-infrared, red, and red-edge). Then, the RGB point cloud, with highly detailed geometry of the olive trees, is enriched by mapping the reflectance maps generated for every multispectral image. Each 3D point is thereby related to its corresponding pixel in every multispectral image in which it is visible. As a result, the 3D models of the olive trees are characterized by the reflectance observed in the plant canopy. These reflectance values are also combined to calculate several vegetation indices (NDVI, RVI, GRVI, and NDRE). According to the spectral and spatial relationships in the olive plantation, segmentation of individual olive trees is performed. On the one hand, plant morphology is studied through a voxel-based decomposition of the 3D structure to estimate height and volume. On the other hand, plant health is studied by detecting meaningful spectral traits of the olive trees. Moreover, the proposed methodology also allows the processing of multi-temporal data to study the variability of the studied features. Consequently, relevant changes are detected and the development of each olive tree is analyzed through a visual and statistical approach. The interactive visualization and analysis of the enriched 3D plant structure with different spectral layers is an innovative method to inspect plant health and ensure adequate plantation sustainability.

Item
Remote sensing image fusion on 3D scenarios: A review of applications for agriculture and forestry (2022-08)
Jurado, Juan M.; López, Alfonso; Pádua, Luís; Sousa, Joaquim J.

Three-dimensional (3D) image mapping of real-world scenarios has great potential to provide the user with a more accurate understanding of the scene.
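The vegetation indices named in the olive-tree mapping item above (NDVI, RVI, GRVI, NDRE) follow standard band-ratio definitions; a small illustrative sketch is given below. Note that GRVI is taken here as the green-red normalized difference, which is one common definition but not universal, and the reflectance values are made up.

```python
def indices(green, red, rededge, nir):
    """Common band-ratio vegetation indices computed from per-band
    reflectance values (GRVI taken as the green-red normalized difference;
    definitions vary in the literature)."""
    return {
        "NDVI": (nir - red) / (nir + red),
        "RVI":  nir / red,
        "GRVI": (green - red) / (green + red),
        "NDRE": (nir - rededge) / (nir + rededge),
    }

# Illustrative canopy reflectances for the four narrow bands.
vals = indices(green=0.10, red=0.05, rededge=0.30, nir=0.45)
```

In the pipeline above, these would be evaluated per 3D point from the reflectance mapped onto the cloud, giving each point an index value alongside its geometry.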
This will enable, among other things, unsupervised automatic sampling of meaningful material classes from the target area for adaptive semi-supervised deep learning techniques. This path is already being taken by recent, fast-developing research in computational fields; however, some issues remain that are related to the computationally expensive processes involved in the integration of multi-source sensing data. Recent studies focused on Earth observation and characterization are enhanced by the proliferation of Unmanned Aerial Vehicles (UAVs) and sensors able to capture massive datasets with a high spatial resolution. In this scope, many approaches have been presented for 3D modeling, remote sensing, image processing and mapping, and multi-source data fusion. This survey presents a summary of previous work, focusing on the most relevant contributions to the reconstruction and analysis of 3D models of real scenarios using multispectral, thermal and hyperspectral imagery. The surveyed applications are focused on agriculture and forestry, since these fields concentrate most of the applications and are widely studied. Many challenges are currently being overcome by recent methods based on the reconstruction of multi-sensorial 3D scenarios. In parallel, the processing of large image datasets has recently been accelerated by General-Purpose Graphics Processing Unit (GPGPU) approaches, which are also summarized in this work. Finally, some open issues and future research directions are presented.

Item
Web-based GIS application for real-time interaction of underground infrastructure through virtual reality (ACM, 2017-11)
Jurado, Juan M.; Graciano, Alejandro; Ortega, Lidia; Feito, Francisco R.

Real-time visualization in web-based systems remains challenging due to the amount of information associated with 3D urban models. However, these 3D models are not able to provide advanced management of urban infrastructures, such as underground facilities.
Nowadays, 3D GIS is considered the appropriate tool to provide accurate analysis and decision support based on spatial data. This paper presents a web-GIS application for the 3D visualization, navigation, interaction and analysis of underground infrastructures through virtual reality. The growth of underground cities is a complex problem without easy solutions. In general, these infrastructures cannot be directly visualized; subsoil mapping can therefore help to develop a clearer representation of underground pipes, cables and water mains. In addition, virtual reality provides an immersive experience and novel forms of interaction for acquiring complete knowledge of underground city structures. Experimental results show a comprehensive application for the efficient real-time management of underground infrastructure.